- Oct 2024
-
Local file
-
Denning, S. (2015). How to make the whole organization Agile. Strategy and Leadership, 43(6),10–17. https://doi.org/10.1108/SL-09-2015-0074
-
-
thethink.institute
-
Guess what: the Bible teaches that there is such a standard. It is God Himself.
Okay, but once again the Bible relies on God. And if you aren't convinced by what the Bible says, then how can you believe in God...?
-
Maybe you’ve experienced this. You’ve been reading the Bible, and you’ve realized that it was revealing to you certain qualities about yourself—it was convicting you about a sin in your life, or it was encouraging you with a promise from God that was exactly what you needed
Not only does the bible do this, but okay. This is a valid point
-
God’s word
That part is true... But if the bible is "God's word," but it's the bible that says God exists, then is it not just the same source citing itself?
-
-
www.americanyawp.com
-
During this same time, The Sun commanded that Montezuma and Itzcohuatzin, the military chief of Tlatelolco, be made prisoners. The Spaniards hanged a chief from Acolhuacan named Nezahualquentzin. They also murdered the king of Nauhtla, Cohualpopocatzin, by wounding him with arrows and then burning him alive.
So there had already been bloodshed caused by the Spanish? Yet the Aztecs still hosted them with hospitality... could this be due to the fact that Montezuma & co. thought they were gods?
-
-
learn.cantrill.io
-
Welcome back. In this demo lesson you're going to be creating an ECS cluster with the Fargate cluster mode, and using the container of cats container image that we created together earlier in this section of the course, you're going to deploy this container into your Fargate cluster.
So you're going to get some practical experience of how to deploy a real container into a Fargate cluster.
Now you won't need any CloudFormation templates applied to perform this demo because we're going to use the default VPC.
All that you'll need is to be logged in as the IAM admin user inside the management account of the organization and just make sure that you're in the Northern Virginia region.
Once you've confirmed that then just click in Find Services and type ECS and then click to move to the ECS console.
Once you're at the ECS console, step one is to create a Fargate cluster.
So that's the cluster that our container is going to run inside.
So click on clusters, then create cluster.
You'll need to give the cluster a name.
You can put anything you want here, but I recommend using the same as me and I'll be putting all the CATS.
Now Fargate mode requires a VPC.
I'm going to be suggesting that we use the default VPC because that's already configured, remember, to give public IP addresses to anything deployed into the public subnets.
So just to keep it simple and avoid any extra configuration, we'll use the default VPC.
Now it should automatically select all of the subnets within the default VPC, in my case all six.
If yours doesn't, just make sure you select all of the available subnets from this dropdown, but it should do this by default.
Then scroll down and just note how AWS Fargate is already selected and that's the default.
If you wanted to, you could check to use Amazon EC2 instances or external instances using ECS anywhere, but for this demo, we won't be doing that.
Instead, we'll leave everything else as default, scroll down to the bottom and click create.
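If you prefer the API to the console, the same cluster creation can be sketched with boto3, the AWS SDK for Python. The cluster name ("allthecats", the demo's spoken name with spaces removed) and the region are assumptions taken from this demo; the actual API call is commented out so the snippet runs without an AWS account.

```python
# Hedged sketch of the console steps above as a boto3 request.
# Name and region are assumptions from this demo, not requirements.

def build_create_cluster_request(cluster_name: str) -> dict:
    """Build the parameters for ecs.create_cluster with Fargate capacity."""
    return {
        "clusterName": cluster_name,
        # FARGATE is the serverless capacity provider; no EC2 instances needed.
        "capacityProviders": ["FARGATE"],
    }

request = build_create_cluster_request("allthecats")
# import boto3
# ecs = boto3.client("ecs", region_name="us-east-1")  # Northern Virginia
# ecs.create_cluster(**request)
print(request["clusterName"])
```

Because the default VPC is used, no networking parameters are needed at cluster-creation time; networking is specified later, when a task is run.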
If this is the first time you're doing this in an AWS account, it's possible that you'll get the error that's shown on screen now.
If you do get this error, then what I would suggest is to wait a few minutes, then go back to the main ECS console, go to clusters again and create the all the cats cluster again.
So follow exactly the same steps: call the cluster all the cats, make sure that the default VPC is selected and all those subnets are present, and then click on create.
You should find that the second time that you run this creation process, it works okay.
Now this generally happens because there's an approval process that needs to happen behind the scenes.
So if this is the first time that you're using ECS within this AWS account, then you might get this error.
It's nothing to worry about, just rerun the process and it should create fine the second time.
Once you've followed that process through again, or if it works the first time, then just go ahead and click on the all the cats cluster.
So this is the Fargate based cluster.
It's in an active state, so we're good to deploy things into this cluster.
And we can see that we've got no active services.
If I click on tasks, we can see we've got no active tasks.
There's a tab here, Metrics, where you can see CloudWatch metrics about this cluster.
And again, because this is newly created and it doesn't have any activity, all of this is going to be blank.
For now, that's fine.
What we need to do for this demonstration is create a task definition that will deploy our container, our container of cats container into this Fargate cluster.
To do that, click on task definitions and create a new task definition.
You'll need to pick a name for your task definition.
Go ahead and put container of cats.
And then inside this task definition, the first thing to do is to set the details of the container for this task.
So under container details under name, go ahead and put container of cats web.
So this is going to be the web container for the container of cats task.
Then next to the name under image URI, you need to point this at the docker image that's going to be used for this container.
So I'm going to go ahead and paste in the URI for my docker image.
So this is the docker image that I created earlier in the course within the EC2 docker demo.
You might have also created your own container image.
You can feel free to use my container image or you can use yours.
If you want to keep things simple, you should go ahead and use mine.
Yours should be the same anyway.
Now just to be careful, this isn't a URL.
This is a URI to point at my docker image.
So it consists of three parts.
First we have docker.io, which is the docker hub.
Then we have my username, so acantril.
And then we have the repository name, which is container of cats.
So if you want to use your own docker image, you need to change both the username and the repository name.
Again, to keep things simple, feel free to use my docker image.
Then scrolling down, we need to make sure that the port mappings are correct.
It should show what's on screen now, so container port 80, TCP.
And then the port name should be the same or similar to what's on screen now.
Don't worry if it's slightly different and the application protocol should be HTTP.
This is controlling the port mapping from the container through to the Fargate IP address.
And I'll talk more about this IP address later on in this demo.
Everything else looks good, so scroll down to the bottom and click on next.
We need to specify some environment details.
So under operating system/architecture, it needs to be linux/x86_64.
Under task size for memory, go ahead and select 1GB and then under CPU, 0.5 vCPU.
That should be enough resources for this simple docker application.
Scroll down and under monitoring and logging, uncheck use log collection.
We won't be needing it for this demo lesson.
That's everything we need to do.
Go ahead and click on next.
This is just an overview of everything that we've configured, so you can scroll down to the bottom and click on create.
And at this point, the task definition has been created successfully.
And this is where you can see all of the details of the task definition.
If you want to see the raw JSON for the task definition itself, you can view it here. You don't need this for the exam, but this is actually what a task definition looks like.
So it contains all of this different information.
What it has got is one or more container definitions.
So this is just JSON.
This is a list of container definitions.
We've only got the one.
And if you're looking at this, you can see where we set the port mapping.
So we're mapping port 80.
You can see where it's got the image URI, which is where it pulls the docker image from.
This is exactly what a normal task and container definition look like.
They can be significantly more complex, but this format is consistent across all task definitions.
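The configuration built up over the last few steps roughly corresponds to a task definition like the minimal sketch below. The family and container names (joined without spaces) and the image URI are taken from this demo but should be treated as illustrative; the real console-generated JSON contains many more fields.

```python
# Minimal sketch of the task definition from this demo: one container
# definition, a Docker Hub image URI, the port 80 mapping, and the
# 0.5 vCPU / 1 GB task size. Field values mirror the demo; the field
# set is illustrative, not exhaustive.
task_definition = {
    "family": "containerofcats",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",   # required network mode for Fargate tasks
    "cpu": "512",              # 0.5 vCPU, expressed in CPU units
    "memory": "1024",          # 1 GB, expressed in MiB
    "runtimePlatform": {
        "operatingSystemFamily": "LINUX",
        "cpuArchitecture": "X86_64",
    },
    "containerDefinitions": [
        {
            "name": "containerofcats-web",
            # docker.io (Docker Hub) / username / repository
            "image": "docker.io/acantril/containerofcats",
            "portMappings": [
                {"containerPort": 80, "protocol": "tcp"}
            ],
        }
    ],
}

print(task_definition["family"])
```

Note how `containerDefinitions` is a list: a task can bundle several containers, even though this demo only needs one.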
Okay, so now it's time to launch a task.
It's time to take the container and task definitions that we've defined and actually run up a container inside ECS using those definitions.
So to do that, click on clusters and then select the all the cats cluster.
Click on tasks and then click on run a new task.
Now, first we need to pick the compute options and we're going to select launch type.
So check that box.
I'll be talking about the differences between these two in a different lesson, if appropriate for the certification that you're studying for.
Once you've clicked on launch type, make sure Fargate is selected in the launch type drop down and latest is selected under platform version.
Then scroll down and we're going to be creating a task.
So make sure that task is selected.
Scroll down again and under family, make sure container of cats is selected.
And then under revision, select latest.
We want to make sure the latest version is used and we'll leave desired tasks at one and task group blank.
Scroll down and expand networking.
Make sure the default VPC is selected and then make sure again that all of the subnets inside the default VPC are present under subnets.
The default is that all of them should be selected; in my case, six.
Now the way that this task is going to work is that when the task is run within Fargate, an elastic network interface is going to be created within the default VPC.
And that elastic network interface is going to have a security group.
So we need to make sure that the security group is appropriate and allows us to access our containerized application.
So check the box to say create a new security group and then for security group name and description, use container of cats -sg.
We need to make sure that the rule on this security group is appropriate.
So under type select HTTP and then under source change this to anywhere.
And this will mean that anyone can access this containerized application.
Finally make sure that public IP is turned on.
This is really important because this is how we'll access our containerized application.
Everything else looks good.
We can scroll down to the bottom and click on create.
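The run-task configuration we just built up in the console can be sketched as an API request like this. The subnet and security group IDs are placeholders, and the names are this demo's names with spaces removed; `assignPublicIp` is the setting that corresponds to turning the public IP on.

```python
# Hedged sketch of the console's "run new task" form as a boto3 request.
# Subnet and security group IDs are placeholders; names are assumptions.

def build_run_task_request(cluster: str, task_family: str,
                           subnet_ids: list, sg_id: str) -> dict:
    """Build the parameters for ecs.run_task on a Fargate cluster."""
    return {
        "cluster": cluster,
        "taskDefinition": task_family,  # latest revision used by default
        "launchType": "FARGATE",
        "platformVersion": "LATEST",
        "count": 1,                     # desired tasks
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": subnet_ids,          # all default-VPC subnets
                "securityGroups": [sg_id],      # allows HTTP from anywhere
                "assignPublicIp": "ENABLED",    # how we reach the app
            }
        },
    }

request = build_run_task_request(
    "allthecats", "containerofcats", ["subnet-0123", "subnet-4567"], "sg-0123")
# import boto3
# boto3.client("ecs", region_name="us-east-1").run_task(**request)
print(request["launchType"])
```

The `awsvpcConfiguration` block is what drives the elastic network interface described above: the task's ENI is placed in those subnets, guarded by that security group, and given a public IP.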
Now give that a couple of seconds.
It should initially show the last status set to provisioning and the desired state set to running.
So we need to wait for this task provisioning to complete.
So just keep hitting refresh.
You'll see it first change into pending.
Now at this point we need this task to be in a running state before we can continue.
So go ahead and pause the video and wait for both of these states.
So last status and desired status both of those need to be running before we continue.
So pause the video, wait for both of those to change and then once they have you can resume and will continue.
After another refresh the last status should now be running and in green and the desired state should also be running.
So at that point we're good to go.
We can click on the task link below.
We can scroll down and see that our task has been allocated a private IP version 4 address in the default VPC, and also a public IP version 4 address.
So if we copy this public IP into our clipboard and then open a new tab and browse to this IP we'll see our very corporate professional web application.
If it fits, I sits in a container in a container.
So we've taken a Docker image that we created earlier in this section of the course.
We've created a Fargate cluster, created a task definition with a container definition inside and deployed our container image as a container to this Fargate cluster.
So it's a very simple example, but again this scales.
So you could deploy Docker containers which are a lot more complex in what functionality they offer.
In this case it's just an Apache web server loading up a web page but we could deploy any type of web application using the same steps that you've performed in this demo lesson.
So congratulations, you've learned all of the theory that you'll need for the exam and you've taken the steps to implement this theory in practice by deploying a Docker image as a container on an ECS Fargate cluster.
So great job.
At this point all that remains is to tidy up.
So go back to the AWS console.
Just stop this container.
Click on stop.
Click on task definitions and then go into this task definition.
Select this.
Click on actions, deregister and then click on deregister.
Click back on task definitions and make sure there's no results there.
That's good.
Click on clusters.
Click on all the cats.
Delete the cluster.
You'll need to type delete space all the cats and then click on delete to confirm.
And at that point the Fargate cluster has been deleted.
The running container has been stopped.
The task definition has been deregistered and our account is back in the same state as when we started.
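The tidy-up steps can be sketched with boto3 as well. One detail worth noting: ECS addresses a task definition revision as `family:revision`, which is what deregistration operates on. The API calls are commented out so the snippet runs offline, and all names and IDs are placeholders from this demo.

```python
# Hedged sketch of the cleanup sequence: stop the task, deregister the
# task definition revision, delete the cluster. Names/IDs are placeholders.

def task_definition_id(family: str, revision: int) -> str:
    """ECS addresses a task definition revision as 'family:revision'."""
    return f"{family}:{revision}"

# import boto3
# ecs = boto3.client("ecs", region_name="us-east-1")
# ecs.stop_task(cluster="allthecats", task="<task-id>")
# ecs.deregister_task_definition(
#     taskDefinition=task_definition_id("containerofcats", 1))
# ecs.delete_cluster(cluster="allthecats")
print(task_definition_id("containerofcats", 1))
```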
So at this point you've completed the demo.
You've done great and you've implemented some pretty complex theory.
So you should already have a head start on any exam questions which involve ECS.
We're going to be using ECS a lot more as we move through the course and we're going to be using it in some of the Animals for Life demos as we implement progressively more complex architectures later on in the course.
For now I just wanted to give you the basics but you've done really well if you've implemented this successfully without any issues.
So at this point go ahead, complete this video and when you're ready join me in the next.
-
-
www.r4photobiology.info
-
vary during the course of the day
Since not all plants show this diurnal time course I would write the sentence a bit differently, putting the more important part in front: The optical properties of the epidermis vary strongly in response to exposure to blue light and UV radiation and may also vary even during the day (Barnes et al. 2015) and during the season (Solanki et al. 2019, Pescheck and Bilger 2019).
-
than was thought 12 years
... than was thought 12 years AGO? I would not refer in this way to the previous edition. Suggestion: In recent years, the importance of the role of time and timing in responses to UV radiation became more obvious. However, the following text does not refer to the time factor, but rather to the interaction of the various photoreceptors.
-
signalling
... signalling in the shade avoidance response
-
, which seems contradictory unless the photobiont “talks” to the mycobiont or the mycobiont senses UV-B by an alternative mechanism. Is there something new known about this?
Cyanobacteria can also sense UV-B radiation. I'm quite sure that we have not yet found all possible UV-B receptors. I think we need to do some more literature searching and write something about the potential existence of other UV-B receptors.
-
An important source of ROS is leakage from electron transport in the light reactions of photosynthesis, thus sharing the wavelengths. However, formation of ROS is not limited to these wavelengths.
Since we don't talk about the consequences of light reception through the other receptors, I would not include the generation of ROS by light reception here. There are two more figures for signal transduction below. Should we include another one on ROS? Since this is a somewhat uncertain topic, I would not do that but rather add a little paragraph (at the time of writing this I have not yet read the whole chapter).
-
-
bodiesandstructures.org
-
Shortly before embarking on a mission to attack and control indigenous people near Hualian in 1915
For Japanese bidding? Who were these residents?
-
-
www.econometrics-with-r.org
-
$Z_{1i},\dots,Z_{mi}$ are $m$ instrumental variables
There are no Z variables in the equation given above
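For context, the general IV regression model that such chapters typically build up to has the form below (with $k$ endogenous regressors $X$, $r$ exogenous regressors $W$, and $m \ge k$ instruments $Z$ that appear in the first-stage regressions rather than in this equation itself, which may be the source of the confusion the note points out). This is a standard textbook formulation, not a quote from the annotated page:

```latex
Y_i = \beta_0 + \beta_1 X_{1i} + \dots + \beta_k X_{ki}
      + \beta_{k+1} W_{1i} + \dots + \beta_{k+r} W_{ri} + u_i ,
\qquad i = 1, \dots, n,
```

where the instruments $Z_{1i},\dots,Z_{mi}$ must be relevant (correlated with the endogenous $X$'s) and exogenous (uncorrelated with $u_i$).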
-
-
ageoftransformation.org
-
Culture as the ‘genetic code’ of the next leap
for - article - The End of Scarcity? From ‘Polycrisis’ to Planetary Phase Shift - Nafeez Ahmed - gene-culture coevolution - adjacency - indyweb dev - individual / collective evolutionary learning - provenance - tracing the evolution of ideas - gene-culture coevolution
adjacency - between - indyweb dev - individual / collective evolutionary learning - provenance - tracing the evolution of ideas - gene-culture coevolution - adjacency relationship - As DNA and epigenetics plays the role of transmitting biological adaptations, language and symmathesy play the role of transmitting cultural adaptations
Tags
- adjacency - indyweb dev - individual / collective evolutionary learning - provenance - tracing the evolution of ideas - gene-culture coevolution
- gene-culture coevolution - Nafeez Ahmed
- gene-culture coevolution
- indyweb dev - individual / collective evolutionary learning - provenance - tracing the evolution of ideas
- article - The End of Scarcity? From ‘Polycrisis’ to Planetary Phase Shift - Nafeez Ahmed
-
-
www.youtube.com
-
quietly cleaning a quiet deluxe by [[Just My Typewriter]]
Cleaning the case, exterior and some of interior of a Royal Quiet De Luxe typewriter. She does a somewhat minimal job here.
She could have disassembled a bit more and done a better job with a toothbrush and mineral spirits on the inside.
Not a horrible recommendation for a beginner, but could have gone further and been a bit more comprehensive.
-
-
www.youtube.com
-
Royal Quiet De Luxe Typewriter Adjustment Print Quality Height Balance On-Feet Shift Motion by [[Phoenix Typewriter]]
He made sure the carriage isn't out of alignment which can cause on feet issues as well.
Adjust the basket stops higher or lower as necessary. Try 1/2 to full turn and test each
The adjustment points are between the body and the carriage about an inch inside the body shell.
Do upper case first. The first set of screws/nuts just next to the outside of the typewriter are for lower case and the second set just inside of those are for upper case.
Turning the adjustment screws clockwise should push the carriage stops down just a bit.
Some good characters to check are H, h, p, y, and 8.
-
When doing type alignment, Duane Jensen was taught to use an old/used ribbon instead of a new, wet/dark ribbon for better performance in testing. New ribbons don't show the differences as well.
He's noticed that ribbons from Around the Office are dreadful.
-
-
medium.com
-
Every day is related to the week it belongs to
- makes my weekly reviews a breeze
-
my BethOS homepage is built off the same idea
-
If something is not in front of me, I will forget it exists
the information I have bothered to save and organise will be unused.
-
-
docdrop.org
-
Even when cases do not eventually settle, they are often not appealed. Only about 0.026% to 0.027% of the cases filed in California’s trial courts result in an appellate disposition.” So when I say the “daily grist” I really do mean trial-court proceedings, where we are fast and furious, and sometimes thoughtless.
Surprising how low the percentage of "cases filed in California's trial courts [that] result in an appellate disposition" is. Personally I thought it was a low number, but something like 2-3%, not in the hundredths of a percent. Given how low the percentage is, it helps me see another reason why casebooks focus so much on appellate rather than trial court decisions. If a case is heard in appellate court, that automatically gives it some importance in the legal world.
-
It may be expedient to latch onto the similarity of words and so invoke an opinion; but that attachment to the surface of the text can lead one astray.
Sounds very similar to the phrase "when you're a hammer, everything looks like a nail." With very little understanding of the law, these unimportant surface-level details are viewed as important, with these "post-literate" individuals (as the paper calls them) trying to make unrelated cases connect to what they are researching. I presume that it is only through a good understanding of how to properly do legal research that this issue is resolved.
-
-
www.youtube.com
-
Removing Feet from Royal Quiet DeLuxe Typewriter. by [[DC Types]]
Not what I was hoping for in terms of removing the screws holding the feet in.
-
-
www.youtube.com
-
Ryan Holiday says that our society struggles with accepting that we owe things to other people...
This reminds me of Simone Weil's notion of "no rights, only responsibilities"... A right by itself has no power, only obligation has. A right is an obligation toward us fulfilled. Only other people have rights, and we have obligations.
Getting into this frame of mind allows one to live a far more righteous, fulfilled, and calm life. Once you acknowledge that you have no rights, you cannot cling to them, and thus you don't view things as unfair to you.
-
"The Stoic Practice is a Dialogue With The Self" -- Ryan Holiday (~7:58)
I think this is also true for Zettelkasten. You write for yourself. Only you need to understand your notes, nobody else.
-
Stoicism is about taking the thorn out of your own eye before throwing stones at others.
( ~5:58)
-
"You get surprised even by your own notes."
Yes, that's exactly what Niklas Luhmann mentioned as the prerequisite for effective communication (with a Zettelkasten).
-
"You can see I have quite a lot notes I have to make."
This is a difference in mentality between Ryan Holiday and me (as well as Muhammed Ali Kilic)
@M.AKilic50
Our mentality (inspired by GTD and other standard productivity stuff, mostly Flow) is to avoid creating homework.
You don't HAVE to make notes on something. You select what you deem valuable and are interested in working with at the moment.
Because of the marginal gains effect I wrote about earlier, it doesn't matter if you don't make a lot of notes. Besides, you can always return later--especially with a proper bib card and potentially a custom index/ToC for a book.
A Zettelkasten is the lazy man's path to excellence.
(this is an ironic statement of mine because a Zettelkasten asks a lot of work over time. However, it doesn't have to be on a day to day basis. Plus you work only on what you want, hence it doesn't require that much discipline.)
-
"If you do one or two positive contributions a day, it adds up." - Ryan Holiday
Perhaps this is the essence of both Zettelkasten and Commonplace books; Marginal Gains.
Exponential increase over time. Upon first glance, it seems linear (1+1 = 2)... However, the formula is different because, at least in a Zettelkasten, a new note means N new possible connections, as this new note can virtually be connected to all other notes. In a Zettelkasten this is explicit; in a commonplace book connections are implicit.
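The "N new possible connections" point can be made concrete with a little arithmetic: note number n+1 can link to all n existing notes, so the number of possible pairwise connections among n notes is n(n-1)/2, which grows quadratically rather than linearly. A tiny sketch:

```python
# Sketch of the quadratic-growth claim: each new note can link to every
# existing note, so possible pairwise links grow much faster than the
# note count itself.

def possible_connections(n_notes: int) -> int:
    """Possible pairwise links among n notes: n choose 2."""
    return n_notes * (n_notes - 1) // 2

for n in (2, 10, 100):
    print(n, possible_connections(n))  # 1, 45, and 4950 links respectively
```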
Tags
- Invincibility
- Muhammed Ali Kilic
- Mutual Surprise
- Rights
- Ryan Holiday
- Philosophy
- Niklas Luhmann
- Scholasticism
- Criticism
- Framing
- Bible
- Zettelkasten
- Justice
- GTD
- Criticizing Fairly
- Stoicism
- Discipline
- Mindset
- Simone Weil
- Communication
- Christianity
- Research
- The Need for Roots
- Obligations
- Productivity
- Commonplace Books
- Analytical Reading
- Top 1-Percent
- Flow
- Marginal Gains
- Intellectualism
- Homework Mentality
- Reading
-
-
writingball.blogspot.com
-
selectricrescue.org
-
David Hayden <br /> Austin Selectric Rescue<br /> https://selectricrescue.org/
Custom type elements for the IBM Selectric
ᔥ[[Joe Van Cleave]] in New Selectric Type Elements<br /> (accessed:: 2024-10-19 11:42:15)
-
-
www.youtube.com
-
Engaging in a Zettelkasten/commonplace book in this way amounts to inherent spaced repetition and recall, perhaps?
Especially if you allow some time of rumination... Read book, wait a few days to a few weeks before processing it. The book's contents remain in the back of your mind.
Then when processing you get engaged with the substance again and therefore interrupt the Ebbinghaus forgetting curve.
-
Perhaps I need to argue more with the authors and the content, as Adler & van Doren also recommend.
This might be a limitation in (the way I do) Zettelkasten. Because I am not writing in the margins and not engaging in "tearing up" the book, I am less inclined to argue against/with the work.
Maybe I need to do this more using bib-card. Further thought on implementation necessary...
Perhaps a different reason is that I like to get through most books quickly rather than slowly. Sometimes I do the arguing afterward, within my ZK.
I need to reflect on this at some point (in the near future) and optimize my processes.
-
-
www.google.co.uk
-
Using the daily note feature of the apps I use every day is a great way to do this. But of the three apps I use religiously, Logseq, Capacities, ...
to:
-
-
millercenter.org
-
I can assure you that it is safer to keep your money in a reopened bank than under the mattress.
This is true due to the risk of getting robbed.
-
It is possible that when the banks resume a very few people who have not recovered from their fear may again begin withdrawals.
It is a good idea to rebuild trust within the community.
-
As a result we start tomorrow, Monday, with the opening of banks in the twelve Federal Reserve Bank cities—those banks which on first examination by the Treasury have already been found to be all right.
It's smart to play into what people want.
-
Because of undermined confidence on the part of the public, there was a general rush by a large portion of our population to turn bank deposits into currency or gold. A rush so great that the soundest banks could not get enough currency to meet the demand.
This makes sense, as people were in a big panic then.
-
-
reactormag.com
-
Shelley’s frown had deepened. Elliott was being cruel. He knew he was. But he was also telling the truth, and Tatiana needed to hear it.
This was after Elliot had laid into Tatiana about using the dragon for show and not for reasons that may somewhat be valuable for her in the long run.
-
His stomach sank. Tatiana was a royal pain in the ass, and her cruelty to the dragon was unfathomable, but in this case, at least, she wasn’t crazy.
This is a way that Elliot shows even though Tatiana can be worrisome he still felt some type of empathy for her in this situation.
-
-
millercenter.org
-
Through this program of action we address ourselves to putting our own national house in order and making income balance outgo. Our international trade relations, though vastly important, are in point of time and necessity secondary to the establishment of a sound national economy.
I think it's very smart to focus on rebuilding the country's economy.
-
Happiness lies not in the mere possession of money; it lies in the joy of achievement, in the thrill of creative effort
I think this is only semi true, as another form of happiness is due to the relief having money brings.
-
In such a spirit on my part and on yours we face our common difficulties. They concern, thank God, only material things. Values have shrunken to fantastic levels; taxes have risen; our ability to pay has fallen; government of all kinds is faced by serious curtailment of income; the means of exchange are frozen in the currents of trade; the withered leaves of industrial enterprise lie on every side; farmers find no markets for their produce; the savings of many years in thousands of families are gone.
I find it interesting that he's making it out to seem like a problem he too faces.
-
This is preeminently the time to speak the truth, the whole truth, frankly and boldly.
I like the fact he's trying to be honest.
-
-
pressbooks.pub
-
Paley argues that organisms are analogous to human-created artifacts in that they involve a complex arrangement of parts that serve some useful function, where even slight alterations in the complex arrangement would mean that the useful function was no longer served
How many things had to go perfectly for me to exist in the flesh?
-
-
mlpp.pressbooks.pub
-
Survivors of the Great Depression and their children the “baby boomers” would not quickly forget the hard times or the fact that government had helped end them. Historians debate when the New Deal ended. Some identify the Fair Labor Standards Act of 1938 as the last major New Deal legislation.
It's interesting to see the major effect the Great Depression had on America.
-
Southern farmers earned on average $183 per year at a time when farmers on the West Coast made more than four times that. Worse, they were producing cotton and corn, crops that paid little while depleting the soil.
It's crazy the difference in salary between farmers on the West Coast and farmers in the South.
-
Hoover had entered office with widespread popular support, but by the end of 1929 the economic collapse had overwhelmed his presidency. Hoover and his advisors assumed, and then desperately hoped, that the sharp economic decline was just a temporary downturn; part of the inevitable boom-bust cycles that stretched back through America’s commercial history.
I think it's interesting that his presidency had such a devastating effect.
-
Despite serious problems in the industrial and agricultural economies, most Americans in 1929 and 1930 believed the nation would bounce back quickly. President Herbert Hoover reassured an audience in 1930 that “the depression is over.” But the president was not simply guilty of false optimism. Hoover had made many mistakes. During his 1928 election campaign, he had promoted higher tariffs to encourage consumption of U.S.-produced products and to protect American farmers from foreign competition. Spurred by the ongoing agricultural depression, Hoover signed the highest tariff in American history, the Smoot-Hawley Tariff of 1930, just as global markets began to crumble. Other countries retaliated and tariff walls rose across the globe. Between 1929 and 1932, international trade dropped from $36 billion to only $12 billion. American exports fell by 78%.
I found this passage really shocking. It’s surprising that many people thought the economy would bounce back quickly when things were so bad. Hoover saying “the depression is over” feels almost unreal. The Smoot-Hawley Tariff made things worse, causing trade to drop a lot. This shows how quickly hope can turn into trouble, especially with bad decisions.
-
Although the crash stunned the nation, it exposed deeper, underlying problems with the American economy in the 1920s. The stock market’s rise did not really represent the health of the overall economy, and the overwhelming majority of Americans had no personal stake in Wall Street. The market’s collapse, no matter how dramatic, did not by itself destroy the American economy. Instead, the crash exposed factors such as rising inequality, declining demand, rural collapse, overextended investors, and a bursting speculative bubble that all combined to plunge the nation into the Great Depression. Despite resistance from Populists and Progressives, the gap between rich and poor had widened throughout the early twentieth century. In the aggregate, Americans were better off in 1929 than in 1919 and both production and consumption had grown. Per capita income had risen 10% for all Americans in the 1920s, but 75% for the wealthiest. The return of conservative politics in the 1920s had reinforced federal policies that exacerbated this divide. High import tariffs, low corporate and personal taxes, easy credit and low interest rates overwhelmingly favored wealthy investors who spent their money on luxury goods and speculative investments in the rapidly rising stock market.
I found this information really surprising. It’s shocking that while the stock market was doing well, most Americans weren’t getting richer. The gap between the rich and poor was huge, with the wealthiest seeing their incomes rise a lot while everyday people struggled. It’s hard to believe that the government supported policies that helped the rich even more. This shows that a strong economy doesn’t mean everyone is doing well, and we need to pay attention to these inequalities to prevent future problems.
-
The exact causes of the Stock Market Crash that began the Great Depression are still being debated by economists and historians, but most agree that a huge speculative bubble had formed during the Roaring Twenties. Although most Americans had little savings and only the richest 2.5 percent invested in stocks, those who did often borrowed to do so. Most stock purchases were made on "margin," which meant shares could be bought with money borrowed from brokers. Often, margin accounts allowed buyers to borrow 90% to 95% of the money they needed to complete a transaction. That meant a speculator could buy $1,000 in shares for $50 or $100. This was a great deal if the value of the shares rose quickly. If a trader could make a 10% gain on $1,000 in shares (or $100) that had cost her only $50 plus a couple of dollars in interest on the loan, she would be way ahead. And share prices seemed to be rising steadily. One reason for this, of course, was the demand generated by all this margin buying, which also meant that everybody was able to buy ten to twenty times more shares than they could actually afford.
I found it surprising how easily people could buy stocks with borrowed money during the 1920s. They only needed to put down a small percentage, like $50 to buy $1,000 worth of shares. It worked well while prices went up, but when the market dropped, they couldn’t pay back their loans, which helped cause the big crash. It’s shocking how this risky system led to the stock market collapse and eventually the Great Depression.
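The margin arithmetic described in the passage can be sketched as a quick calculation. This is my own illustrative sketch using the textbook's example numbers ($1,000 of shares, $50 down, a couple of dollars of interest), not historical data:

```python
def margin_return(position, down_payment, price_change, interest):
    """Profit and return-on-cash for a stock position bought on margin."""
    gain = position * price_change        # gain (or loss) on the full position
    net = gain - interest                 # minus interest owed on the broker loan
    return net, net / down_payment        # dollars, and return on cash invested

# $1,000 of shares for $50 down, a 10% rise, ~$2 interest on the loan:
print(margin_return(1000, 50, 0.10, 2))   # → (98.0, 1.96): a 196% return on cash

# The same leverage works in reverse: a 10% drop wipes out the $50
# twice over, leaving the buyer owing more than she put in.
print(margin_return(1000, 50, -0.10, 2))  # → (-102.0, -2.04)
```

The second call is why margin buying helped turn a market dip into a collapse: leveraged buyers couldn't cover their loans when prices fell.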
-
Although the belief that economic prosperity was universal was exaggerated at the time and has been overstated by many historians, excitement over the stock market and the possibility of making speculative fortunes permeated popular culture in the 1920s. A Hollywood musical, High Society Blues, captured the hope of instant prosperity. Ironically, the movie didn’t reach theaters until after the market crash. “I’m in the Market for You,” a musical number from the film, used the stock market as a metaphor for love: You’re going up, up, up in my estimation / I want a thousand shares of your caresses, too / We’ll count the hugs and kisses / When dividends are due / ’Cause I’m in the market for you. But just as the song was being recorded in 1929, the stock market reached its peak, crashed, and brought an abrupt end to the seeming prosperity of the Roaring Twenties. The Great Depression had arrived.
I found this pretty funny! The idea of using stock market terms for love is clever, but it’s ironic that the song came out just before the stock market crashed. It’s surprising how quickly the mood changed from excitement to the Great Depression.
-
Despite the unprecedented actions he took in his first year in office, Franklin Roosevelt’s approach to combatting the Great Depression was not unanimously supported. Some critics found FDR’s relief programs too conservative. He had been careful to work within the limits of presidential authority and congressional cooperation. And unlike Europe, where several nations had turned toward state-run economies, fascism, and socialism, Roosevelt’s New Deal showed his reluctance to radically alter America’s foundational economic and social structures.
Roosevelt's use of presidential authority to bring about long-lasting change is interesting.
-
When he was nominated as the Democratic Party’s presidential candidate in July 1932, Roosevelt promised, “a new deal for the American people.” Newspaper editors seized on the phrase “new deal,” and it became shorthand for Roosevelt’s program to address the Great Depression. Roosevelt crushed Hoover, winning more counties than any previous candidate in American history. He spent the months between his election and inauguration traveling, planning, and assembling a team of advisors which became famous as Roosevelt’s “Brain Trust” of academics and experts. On March 4, 1933, in his first inaugural address, Roosevelt declared, “This great Nation will endure as it has endured, will revive and will prosper. So, first of all, let me assert my firm belief that the only thing we have to fear is fear itself—nameless, unreasoning, unjustified terror which paralyzes needed efforts to convert retreat into advance.” In his first days in office, Roosevelt and his advisors prepared, submitted, and passed laws designed to halt the worst effects of the Great Depression. His administration threw the federal government headlong into the fight against the Depression.
I'm glad President Franklin Roosevelt won the majority vote, because I felt like Herbert Hoover would've done worse for the country.
-
As the United States slid deeper into the Great Depression, individuals, families, and communities faced the frightening and bewildering failure of institutions on which they had depended. The fortunate were spared the worst effects, and a few even profited from it, but by the end of 1932 the crisis had become so deep and so widespread that most Americans had suffered directly. Facing unemployment and declining wages, Americans slashed expenses. The rich could survive by simply deferring vacations and regular consumer purchases. Middle- and working-class Americans might rely on credit at neighborhood stores, default on utility bills, or skip meals. Those who could borrowed from relatives or took boarders into their homes. Many poor families “doubled up” in tenements. The most desperate camped on public lands in “Hoovervilles,” spontaneous shantytowns that dotted America’s cities, depending on bread lines and street-corner peddling. The emotional and psychological shocks of unemployment only added to the material difficulties of the Depression. Social workers and charity officials often found the unemployed suffering from feelings of futility, anger, bitterness, confusion, and shame. These feelings affected the rural poor as well as the urban.
I wonder when this whole period took place. I feel like it was when Hoover was president, and I've heard that he was a selfish, corrupt president since he was not helping his people.
-
While most of the “Bonus Army” left Washington after the bill’s defeat, some stayed in protest. They were unemployed and homeless war veterans, but Hoover called the remaining protesters “insurrectionists” and ordered them to leave. When thousands ignored Hoover’s order, he sent General Douglas MacArthur. Accompanied by local police, the U.S. Army infantry, cavalry, tanks, and a machine gun squadron, MacArthur evicted the Bonus Army and burned the tent city. National media covered the disaster as troops attacked veterans, chased down men and women, tear-gassed children, and torched the shantytown. Several veterans were killed in the attack.
That's super crazy: these old soldiers sacrificed their own lives to help the country in war, just for them to get treated like this.
-
Sympathy for migrants, however, accelerated late in the Depression when Hollywood made a movie of The Grapes of Wrath.
I wonder what the difference in sympathy for migrants was before and after the making of the movie. Was it the increased awareness of their struggles, or the similarities between their struggles and US citizens' struggles at the time?
-
Other countries retaliated and tariff walls rose across the globe
Seems kind of obvious that other countries would want to retaliate if the US was charging so much for imports, while not having to pay nearly as much in exports.
-
They were unemployed and homeless war veterans, but Hoover called the remaining protesters “insurrectionists” and ordered them to leave.
This is such a horrible response to those in need. The veterans fought for the US to help its cause, and they're just being tossed aside.
-
The 1935 Social Security Act provided for old-age pensions, unemployment insurance, and economic assistance for both the elderly and dependent children.
This was the creation of Social Security numbers, right? Also, how did it allow the elderly to retire?
-
At the time of the stock market crash, southerners were already underpaid, underfed, and undereducated.
Out of context, but were the farmers/southerners still able to have pets, like dogs? I know that farmers usually have at least one dog, or a cat.
-
-
public.wsu.edu public.wsu.eduUntitled5
-
"Captain, shall I keep her making for that light north, sir?"
The light could be a symbol for the afterlife, and by continuing on, the group could ultimately be racing toward their end.
-
"What do you think of those life-saving people? Ain't they peaches?" "Funny they haven't seen us." "Maybe they think we're out here for sport! Maybe they think we're fishin'. Maybe they think we're damned fools."
Could be a demonstration of natural selection, showing how, despite the group's best efforts, they still fall short of safety.
-
IT would be difficult to describe the subtle brotherhood of men that was here established on the seas. No one said that it was so. No one mentioned it. But it dwelt in the boat, and each man felt it warm him.
Tragedy will often bring an unlikely group together and allow them to bond.
-
A young man thinks doggedly at such times. On the other hand, the ethics of their condition was decidedly against any open suggestion of hopelessness. So they were silent. "Oh, well," said the captain, soothing his children, "we'll get ashore all right."
Creates a dark and cynical tone for the story and may foreshadow tragedy later on since they are all aware that there is a chance they will die at sea.
-
Many a man ought to have a bath-tub larger than the boat which here rode upon the sea. These waves were most wrongfully and barbarously abrupt and tall, and each froth-top was a problem in small boat navigation.
The rough sea environment coincides with the naturalist tendency to place the story in a harsh environment.
-
-
www.ribbonfarm.com www.ribbonfarm.com
-
successfully negotiating more money and/or power
or leave the company
-
cynically play out the now-illogical re-org anyway
Need to have a bias for action and recognize when you're committing the sunk cost fallacy.
-
-
socialsci.libretexts.org socialsci.libretexts.org
-
Of the four theorists reviewed above (Freud, Erikson, Piaget, and Vygotsky) which theorist’s ideas about development most closely match your own beliefs about how people develop and why?
Each theory informs our understanding of a child's development, helping us see social, cultural, emotional, physical, and cognitive growth. For me, Vygotsky's theory is essential, since it holds that we develop cognitively within society and through our own experiences.
-
How does the division of chores impact or not impact your household?
If all members of the household collaborated on household chores, it would be a great help to everyone, because we would all be putting in the time, effort, and attitude to have a clean, healthy home and to work together.
-
What is the main role you have in your family system? What boundaries do you have or wish you had?
The main role I have in my family is to be a mother and a provider. I take care of my kids, help around the house, and bring income to the house. A boundary I wish I had is more time with my family, because communication and relationships are important to me, but I am always working, and my kids are always busy.
-
-
static1.squarespace.com static1.squarespace.com
-
The most common, mundane things of the body, the village, the earth-thesetoo, in the Indian mind, were suffused with a history of sacredness and power
This last line is very powerful. It affirms the rootedness of Native Americans in Californian lands as their beliefs and myths themselves were centered around the mundane things that were "the body, the village, the earth".
-
trange yet concrete figures ina magical ambience-the material that myths are made of is very close indeedto the material dreams are made of.
It's rare to find myths that take on such human, tangible features that we can relate to.
-
According to this plan, people are going to be. Thereare gomg to be people on this earth. On this earth there will be plenty of foodfor the people! According to this plan there will be many different kinds offood for the people! Clover in plenty will grow, grain, acorns, nuts!"
I really love the idea of just "going to be." Like others have touched upon, this creation story is distinct from others I know about in that its intentions are plain and simple: they want the people to live freely and happily--as compared to having to earn happiness and basic necessities like food via strict customs and/or values.
-
Who made the water, the raft, the trinity of Earth-Creators? Like manyCalifornia creation epics, the Maidu account seems to begin in the middle ofthe story. Mysteriously, elements of the world seem to have always beenpresent, their existence apparently beyond question or speculation.
This creation story is interesting to me because it makes me wonder if the earth is being depicted as the "god" of the story. In most of the creation stories I am familiar with, the "god" of the story is the only thing present at the beginning, and its existence is never really questioned. Earth Initiate does not appear to be an all-powerful being in this story, so I'm curious whether a "god" was present in their beliefs or not.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
It turns out that if you look at a lot of data, it is easy to discover spurious correlations where two things look like they are related, but actually aren’t. Instead, the appearance of being related may be due to chance or some other cause. For example:
This is a basic principle we learned in science class and experiments: the variables in an experiment should be carefully questioned, and no other factors should be allowed to create confounded results or bias. People's attention is sometimes drawn to absurd or strange news, so those wrong messages spread easily.
-
It turns out that if you look at a lot of data, it is easy to discover spurious correlations where two things look like they are related, but actually aren’t. Instead, the appearance of being related may be due to chance or some other cause. For example:
This is what we have learned in science classes and experiments: we should always make sure the setup is accurate, so that no other elements or factors introduce errors into the result. This also relates to a class I am taking this quarter: Calling Bullshit.
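As a toy illustration of the textbook's point (my own sketch, not from the source): generate a pile of completely unrelated random series, compare every pair, and by chance alone some pairs will look strongly "related":

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 200 short series of pure noise → ~20,000 pairs to compare.
series = [[random.random() for _ in range(10)] for _ in range(200)]
strongest = max(
    abs(pearson(series[i], series[j]))
    for i in range(200) for j in range(i + 1, 200)
)
print(strongest)  # a "strong correlation" between two streams of noise
```

The more series and the more pairs you check, the more inevitable such coincidences become, which is exactly why "look at a lot of data" so easily produces spurious correlations.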
-
For example, social media data about who you are friends with might be used to infer your sexual orientation. Social media data might also be used to infer people’s:
Most of the time, social media platforms want to gather information and track people's actions. Registration asks for more than basic contact information like email and phone number: gender, interests, and inviting friends are framed as part of the purpose of using social media and are included as unskippable steps.
-
-
en.wikipedia.org en.wikipedia.org
-
The plot is based on an Italian tale written by Matteo Bandello
Wow, I didn't know that Romeo and Juliet is from some other story!
-
-
envirodatagov.org envirodatagov.org
-
Americans now face energy scarcity
The Annual Energy Outlook (AEO) report by the US Energy Information Agency surveys long-term energy trends and is widely respected by traditional energy producers and analysts across many fields. The 2023 AEO concluded that "we project that the United States will remain a net exporter of petroleum products and natural gas through 2050 in all AEO2023 cases." Further, "As a result, in AEO2023, we see renewable generating capacity growing in all regions of the United States in all cases. Across all cases, compared with 2022, solar generating capacity grows by about 325% to 1019% by 2050, and wind generating capacity grows by about 138% to 235%. We see growth in installed battery capacity in all cases to support this growth in renewables. Across the span of AEO cases, relative to 2022, natural gas generating capacity ranges from an increase of between 20% to 87% through 2050." https://www.eia.gov/outlooks/aeo/narrative/index.php#ExecutiveSummary
-
a new energy crisis
This chapter provides no evidence that the US is experiencing an energy crisis. To wit: much evidence, such as record oil production, suggests the opposite: https://www.forbes.com/sites/rrapier/2024/03/12/eia-confirms-historic-us-oil-production-record/
-
-
doc.cat-v.org doc.cat-v.org
-
If one profiles what is going on in this whole process, it becomes clear that I/O dominates. Of the CPU cycles expended, most go into conversion to and from intermediate file formats.
-
-
learningcpp.org learningcpp.org
-
denominator * x
I believe only the numerator needs to be multiplied here.
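The annotator's point, sketched here in Python since the learningcpp.org code itself isn't shown: multiplying a fraction by an integer scales only the numerator, because (n / d) * x == (n * x) / d.

```python
from fractions import Fraction

def multiply_fraction(numerator, denominator, x):
    # (n / d) * x == (n * x) / d — the denominator is left alone.
    return Fraction(numerator * x, denominator)

print(multiply_fraction(1, 2, 5))   # → 5/2
print(Fraction(1, 2) * 5)           # the stdlib agrees: 5/2

# Multiplying BOTH parts by x would just cancel out, returning the
# original value: (n*x) / (d*x) == n / d.
print(Fraction(1 * 5, 2 * 5))       # → 1/2, unchanged
```

Scaling the denominator as well therefore either cancels the multiplication (if both are scaled) or computes division instead of multiplication (if only the denominator is scaled).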
-
-
www.youtube.com www.youtube.com
-
the essential feature of the implicate order is
B
-
noetic
Sensory and the noetic
-
modern science
The advent of
-
the axial age saw the incorporation of the inner dialogue into the human sense of self agency
Axial age
-
a word is the body of a concept a concept is the soul of a word
Well said
-
the ploma
Pleroma?
-
the collective unconscious is in fact the implicate noetic realm
Hegel absolute spirit foreshadows this
-
participants in the Stream of becoming
Stream of becoming
-
the truths of science
X
-
our cognitive faculties
Our cognitive faculties are imperfect machines which have been haphazardly assembled by the blind watchmaker of algorithmic natural selection.
-
-
www.lazaruscorporation.co.uk www.lazaruscorporation.co.uk
-
In his post Raw dog the open web! Jason says (quite correctly): www.fromjason.xyz Monoculture is winning. The Fortune 500 has shrink-wrapped our zeitgeist and we are suffocating culturally. But, we can fight back by bookmarking a web page or sharing a piece of art unsanctioned by our For Your Page. To do that we must get out there and raw dog that open web. In our current digital landscape, where a corporate algorithm tells us what to read, watch, drink, eat, wear, smell like, and sound like, human curation of the web is an act of revolution. A simple list of hyperlinks published under a personal domain name is subversive. Curation is punk.
I love how this blogpost creates a highlighted link to the original post which they're quoting along with the commanding words "View in context at www.fromjason.xyz".
-
-
www.reddit.com www.reddit.com
-
Does anyone know how do they make new platens?
reply to u/General-Writing1764 at https://old.reddit.com/r/typewriters/comments/1g7a8y5/does_anyone_know_how_do_they_make_new_platens/
I'm guessing that JJ Short is taking the original, removing the rubber, placing the core into a mold, and pouring in new material, which hardens. Once done, they put it on a lathe and turn it down to the appropriate (original) diameter. Potentially they're sanding the final couple of thousandths of an inch for finish.
I'd imagine that if you asked them, they could/would confirm this general process.
The only other shop I've heard doing platen work is Bob at Typewriter Muse, but I haven't gone through his YouTube videos to see what his process looks like. (I'm pretty sure he documents some of it there.)
-
-
thewasteland.info thewasteland.info
-
Dry bones can harm no one.
The assertion that these bones "can harm no one" introduces a paradox. While they signify death and the end of vitality, their inertness suggests that the past cannot actively disrupt the present. This resonates with the broader themes of the poem, which often grapple with the weight of history and the haunting presence of memory. In a world characterized by despair, the line hints at a resigned acceptance of the past’s inability to inflict further harm, positioning it as a relic rather than an active force.
-
Shantih shantih shantih
The repetition of "Shantih shantih shantih" in the final lines of "The Waste Land" functions as a continuum rather than a concrete ending, embodying a search for peace amidst chaos. The term, meaning "the peace that passeth understanding," resonates deeply with the poem’s overarching themes of fragmentation and despair.
This triplet of peace is both a culmination and an invitation, suggesting that true tranquility may lie not in resolution but in the ongoing quest for harmony. Instead of providing a definitive conclusion, the repetition creates a rhythmic pulse that echoes throughout the poem, reminiscent of a mantra. It invites the reader to contemplate the cyclical nature of existence—where endings lead to new beginnings.
-
I do not know whether a man or a woman
The line "I do not know whether a man or a woman" appears in "What the Thunder Said," in the passage describing the hooded third figure, and speaks to the poem's broader disintegration of human connection. It signifies the speaker's existential uncertainty, reflecting a world where traditional gender roles and identities have become muddied and irrelevant.
Here, the ambiguity echoes the broader themes of alienation and fragmentation, as the characters struggle to communicate and connect. The uncertainty of gender mirrors the breakdown of personal relationships, suggesting that in a chaotic, post-war landscape, even the most fundamental aspects of identity are in flux. This disorientation emphasizes the emotional paralysis faced by individuals, reinforcing the haunting sense of isolation pervading the poem.
-
Who is the third who walks always beside you?
In T.S. Eliot's "The Waste Land," the line "Who is the third who walks always beside you?" evokes a haunting presence, layering the poem with existential uncertainty and a sense of companionship laced with disquiet. From the speaker's point of view, this line captures an intimate yet unsettling inquiry that transcends the immediate relationships, hinting at a spiritual or existential specter that accompanies the living.
The phrase suggests an ambiguous, almost spectral companionship—one that suggests both intimacy and alienation. The “third” figure implies a triangular relationship, where the speaker and another are not alone; instead, they are shadowed by an elusive presence. This presence is not just a literal figure but symbolizes collective trauma, history, or perhaps the weight of modern disillusionment. It evokes a sense of haunting, as if the past—whether personal or cultural—lingers ominously, shaping the present.
Moreover, the use of “who” implies a search for identity, suggesting that the speaker grapples with understanding not just the presence of this third entity, but also their own place within the existential landscape of the poem. The question is both a plea and a probe, inviting readers to ponder the nature of companionship in a fractured world. The haunting nature of this inquiry lies in its open-endedness; it suggests that the answer may elude the speaker, reinforcing the poem's overarching themes of fragmentation and despair.
Ultimately, this line encapsulates the haunting complexity of human experience—where the past, the present, and the metaphysical intertwine, leaving the speaker (and the reader) in a state of reflective disquiet. The third figure symbolizes both loss and the ongoing search for meaning, a companion that walks with us, whether we acknowledge it or not.
-
whirlpool.
The whirlpool contrasts with moments of stillness and clarity in the poem. It underscores the tension between chaos and order, reflecting the desire for meaning in a fragmented world. The whirlpool serves as a reminder of the relentless motion of time and the challenges of finding stability.
-
The river sweats Oil and tar
The lines "The river sweats / Oil and tar" reflect the industrial pollution of the environment and symbolize the decay and corruption present in modern life. The river, typically a symbol of life and renewal, is here stripped of its vitality and transformed into a site of contamination, highlighting themes of desolation and moral decline in the post-war world.
-
Twit twit twit Jug jug jug jug jug jug So rudely forc’d. Tereu
In "The Waste Land," the lines "Twit twit twit / Jug jug jug jug jug jug / So rudely forc'd" evoke a jarring and fragmented sense of communication, drawing from the myth of Tereus, Procne, and Philomela. This reference introduces themes of violence, loss, and the disruption of natural order. The repetition of "twit" and "jug" creates a rhythmic yet unsettling sound, almost mocking in its simplicity. It highlights the stark contrast between the complexity of human emotion and the reduced, animalistic quality of the sounds. This mirrors the broader themes of disconnection and alienation throughout the poem. The reference to Tereus—who brutally silenced Philomela by cutting out her tongue—serves as a potent metaphor for silencing and trauma. In this context, the nymphs and their experiences are connected to loss and violence, underscoring the idea that beauty and vitality are often subjected to brutal realities.
-
departed.
The indentation of “departed” draws attention to the unusual experience of the nymphs, who traditionally symbolize beauty, love, and the natural world, often associated with life and abundance. However, in Eliot’s context, their presence serves to contrast the barrenness and emptiness of modern existence. Also, decapitalizing “departed” shifts the agency of the myths and implies a more passive experience as they have been swept away and lost without active control over their fate. This loss of agency aligns with the themes in "The Waste Land," where characters often feel powerless in the face of societal decay and personal disillusionment. The experience of the nymphs can be interpreted as a reflection of unfulfilled longing and the impact of a fragmented society on intimate relationships. Instead of celebrating love and connection, their references evoke a sense of nostalgia for a more vibrant, meaningful past that has been lost. This mirrors the sorrow expressed in Psalm 137, where the Israelites long for their homeland, suggesting a universal longing for wholeness and the deep human need for connection.
Ultimately, the nymphs' experience in "The Waste Land" draws attention to the contrast between the idealized past and the stark reality of the present, reinforcing the poem's exploration of loss, longing, and the search for identity in a desolate world.

The line "Departed, have left no addresses" from "The Waste Land" resonates deeply with the themes in Psalm 137, particularly the sense of dislocation and absence. In Psalm 137, the Israelites lament their exile in Babylon, feeling disconnected from their homeland and traditions. The line evokes a profound sense of loss and the inability to return to a place of belonging, mirroring the mournful sentiment of having no way to communicate or reconnect with what has been left behind. Both texts express a longing for something lost and the pain of separation, emphasizing the emotional weight of exile. Just as the Israelites mourn their captivity and the destruction of their identity, Eliot's line suggests a broader existential crisis where individuals feel untethered in a fragmented world, underscoring the despair and disconnection prevalent in both works.
-
HURRY UP PLEASE ITS TIME
Eliot artfully weaves imagery and language that evoke quietude into the fabric of the poem, creating a body of work whose essence personifies forms of silence. The poem possesses a hushed quality, behaving almost like a curse word, as if to engage and think with the poem is taboo. Yet, when read, the assemblage of fragmented imagery, allusions, and ambiguous language and voice, or lack thereof, engenders a profusion of sound. Eliot's use of syntax in "A Game of Chess" depicts the unexpected resonance of unsaid speech, drawing attention to the hidden yet audible nature of cognition. The capitalization of "HURRY UP PLEASE ITS TIME," a noticeable shift from the earlier lowercase dialogue, is meant to evoke a semblance of sound while maintaining the generally quiet disposition of the poem. Eliot's interplay with cognition and sound probes the potency of unsaid speech, revealing how the silence between words carries as much meaning as spoken language itself, inviting readers to consider the depths of thought and emotion that lie beneath the surface of expression.
-
The Chair she sat in, like a burnished throne,
I am drawn to the parallels between T.S. Eliot’s The Waste Land and Baudelaire’s “A Martyred Woman,” particularly their shared exploration of the suffering and sacrifice of women. Both works present women as embodiments of beauty intertwined with pain. In Baudelaire’s poem, the “martyred woman” is depicted as suffering yet noble, while Eliot’s female characters often reflect a sense of despair and emotional turmoil despite their allure. Baudelaire explicitly frames women as martyrs, suggesting that their beauty is a source of suffering. Similarly, Eliot’s portrayal of women suggests that they endure personal sacrifices and struggles, often reflecting broader societal issues. This martyrdom emphasizes the emotional toll placed on women. Both poets critique the societal roles imposed on women. Baudelaire highlights how women are idealized yet subjected to suffering, while Eliot’s women often navigate a fragmented identity within a patriarchal context, exposing the emptiness behind romanticized notions of femininity. In both texts, women experience deep alienation. Baudelaire's martyred figures are isolated in their suffering, while Eliot’s women, such as Lil or the clairvoyante, illustrate the emotional disconnect prevalent in modern life, reinforcing feelings of loneliness and despair.
-
'That corpse you planted last year in your garden,
Baudelaire juxtaposes the beauty of art and nature with the harsh realities of life, often reflecting on the dualities of pleasure and suffering. The poems frequently capture the essence of modern urban life, particularly in Paris, highlighting the alienation and moral ambiguity found in the city. Baudelaire delves into themes of vice and corruption, examining how they coexist with beauty. He often portrays sin as an integral part of human nature. Despite the dark themes, there are moments of seeking transcendence through art, love, and spirituality, hinting at the possibility of redemption amid despair. Interestingly, Baudelaire positions the poet as a visionary who can perceive the deeper truths of existence, navigating the complexities of the human condition.
The line "that corpse you planted last year in your garden" embodies themes of beauty and decay; the imagery of the corpse juxtaposed with the idea of a garden symbolizes the intersection of life and death. It suggests that what might typically be seen as beautiful (a garden) is tainted by decay and mortality. This line hints at buried past sins or traumas, implying that the speaker is grappling with unresolved issues that refuse to remain hidden. The corpse can symbolize guilt or repressed memories that disrupt the facade of normalcy. The garden, often a symbol of natural beauty and cultivation, contrasts sharply with the idea of a corpse. This reflects the alienation and spiritual emptiness of modern life, where even beauty is intertwined with death. The act of planting a corpse can be seen as a perverse twist on the natural cycle of life, suggesting a disruption in the natural order. It points to the theme of regeneration but in a way that is grotesque and unsettling. This line encapsulates Eliot’s task of confronting uncomfortable truths. It suggests that to understand the modern condition, one must acknowledge the darker aspects of existence.
-
from the hyacinth garden,
Eliot weaves themes of beauty, love, and loss inspired by the story of Apollo and Hyacinth into the fabric of “The Waste Land,” particularly the cycles of life and death, the transient nature of beauty, and the emotional desolation of the modern world. The tale of Apollo, the god of light and music, and Hyacinth, his beloved, emphasizes the intensity of love and the tragedy of loss. Hyacinth's death, caused by an accidental injury from Apollo’s discus, illustrates how beauty can be fleeting and how love can lead to deep sorrow. In the myth, Hyacinth is transformed into a flower after his death, symbolizing the idea of regeneration. However, in "The Waste Land," this regeneration is complicated by the poem’s pervasive sense of despair and fragmentation. The cycles of life and death are depicted, but they often feel broken or unfulfilled. Eliot contrasts the mythic beauty of Apollo and Hyacinth with the barrenness of the modern world. The ornate imagery of the myth serves to heighten the bleakness of contemporary existence, where love and beauty seem diminished or lost amidst urban decay and spiritual emptiness. The reference to this myth also connects to the broader cultural and literary heritage that Eliot draws upon throughout "The Waste Land." It reflects his engagement with themes of mythology, art, and the human condition, suggesting that ancient stories continue to resonate, even in a fractured modern context.
-
Quando fiam uti chelidon
10.18
Does “The Waste Land” end on a positive note? In debating with myself, I found my answer to remain hopelessly inconclusive. In the final section of the poem, it seems that our protagonist, in a role similar to a quester, has finally arrived at the Waste Land’s “Chapel Perilous” following the hopeful “violet hour” (380). Still, readers are left clueless regarding whether the desired task of regeneration has been completed. In what seems to be the most climactic scene, a rooster announces the arrival of rain from the chapel rooftop, yet two details keep me unnerved about this resolution:
Firstly, where on Earth did the rain go? The “damp gust” is responsible for “bringing [the] rain,” yet this action is trapped in an unfinished, infinitive state (394-5). In fact, the “black clouds,” confined in a distant mountain chain, can never rejuvenate the withering land in the riverbanks and valleys (397).
In addition, the cock, the announcer of the rain, is itself heavily connected to the uncertain state between life and death. Firstly, the animal figures in Ariel’s song “Hark, hark! I hear / [...] Cry, Cock a diddle dow” in Shakespeare’s Tempest, which brings to mind the fabricated death of Alonso, King of Naples. Secondly, the word is mentioned in another Shakespearean play, Hamlet, in the specific context of King Hamlet’s appearance as a ghost (ghost-hood and fabricated deaths suggest a similar border state between life and death). This brings even greater uncertainty regarding the cock’s ability to announce or direct genuine revitalization.
This sense of incompletion persists until the very last stanza, in which border states, including the shore that the speaker sits at (between water and land) and the London Bridge (between life and death/Inferno), figure heavily. In addition, the insufficiency of Philomela’s transformation is emphasized once again. The line “quando fiam uti chelidon” merely anticipates a future gaining of a voice similar to that of the swallow’s, yet the task is essentially unfulfillable – while both sexes of the swallow can sing, only the male nightingale sings (429). Philomela’s metamorphosis still does not liberate her from her silence, a reminder of her subjugation. It is, once again, an incomplete renewal at best.
-
falling down falling down falling down
This is one of many times in the poem where repetition like this occurs. It is similar to "The Vigil of Venus," where the line "Tomorrow may loveless, may lover tomorrow make love" is repeated several times throughout the poem. Interestingly, that line is almost exact repetition but not quite, which makes the idea of love in the poem feel like an ever-changing thing rather than something stagnant. Meanwhile, the use of "falling down falling down falling down" in "The Waste Land," through its insistent and exact repetition, seems to show an action that cannot be undone and is damaging, like the London Bridge falling down.
-
My friend
In Angela’s annotation for this line, she interrogates the true nature of friendship, claiming that friendship in “The Waste Land” appears in relation to “indifference” and “superficiality” (Li). She cites Bradley as one of her sources, specifically, "a common understanding being admitted, how much does that imply? What is the minimum of sameness that we need suppose to be involved in it?" (Bradley, 6). The word “understanding” specifically caught my attention, as it is central to the Brihadaranyaka Upanishad. This line of “The Waste Land” is in reference to the part of the Upanishad that means “give”: “Then the human beings said to him, ‘Teach us, father.’ He spoke to them the same syllable DA. ‘Did you understand?’ ‘We understood,’ they said. ‘You told us, “Give (datta)”’” (Brihadaranyaka, Chapter 2). Yet, although the humans were instructed to give, Eliot appears to extend this scene, resuming it when the humans reflect upon the past, asking “what have we given?”
The deception and failure of friendship that Angela identifies as it relates to this line may also provide an answer to the shortcomings of the humans to “give.” Before the line Angela quotes, Bradley states, “what, however, we are convinced of, is briefly this, that we understand and, again, are ourselves understood” (Bradley, 6). Very clearly, Bradley accuses the human race of being under an illusion of understanding one another. If they are under the illusion of understanding, then the credibility of the humans in the Upanishad is completely undermined when they say that they “understand” what datta means. Possibly, they misunderstand what it means to “give,” or, Eliot may be making the claim that they misunderstood the meaning of datta itself as it exists in the universe of the poem. With this in mind, it makes sense that the humans are unable to point to what they’ve given in “The Waste Land.” They are left without direction, and, according to Bradley, they are condemned to failure in connecting, or “giving” themselves to one another. Even “my friend” implies an antithesis to “give”--possession. Eliot seems to agree with Bradley’s proposal that friendship, relationship, true exchange between one person and another is something beyond human understanding.
-
Only at nightfall, aethereal rumours Revive for a moment a broken Coriolanus
Coming back to what I said in a previous annotation about actions getting darker as night comes, this seems to flip that idea on its head a bit: "Only at nightfall, aethereal rumours / Revive for a moment a broken Coriolanus". Coriolanus is a Shakespeare character who is notably a bit of an antihero, so these lines seem to say that "aethereal rumours" at nightfall are what temporarily redeem Coriolanus, despite a previous annotation of mine arguing that people's actions get darker as the night falls. For Coriolanus, it seems to be the opposite.
This is also interesting when you consider Francis Herbert Bradley's Appearance and Reality where he argues that much of what humans perceive is an illusion, which makes it hard for people to truly connect with each other. This makes me wonder if these "aethereal rumours" are then actually other people and not supernatural beings, but Eliot is referring to them this way to show the true distance between ourselves and the reality of other people.
-
Who is the third who walks always beside you?
Both this stanza and P. Marudanayagum's "Retelling of an Indian Legend" deal with a mysterious other. In the legend, the vial (verandah) has enough space for one person to lie on, two people to sit on, or three people to stand on. Once three people are standing on the vial, they feel a fourth presence but don't know who it is, before realizing it's Lord Vishnu (a Hindu God). Following the logic of this legend, a mysterious presence in a space where it's not physically possible for the presence to fit inside is probably a God or other supernatural thing. However, this stanza shows two, not three, people that are standing, and their space isn't limited, but there's also a mysterious presence. There's definitely a lot to unpack here, and I'd welcome any theories about it, but I desperately need to go to sleep and can't properly theorize at this point.
-
Quando fiam uti chelidon—O swallow swallow
The 6th line of Eliot’s final stanza in “The Waste Land” reads, “Quando fiam uti chelidon”, or “when shall I be as the swallow”. This line was taken from the Pervigilium Veneris, translated by Allen Tate, which recalls the story of Philomela, an Athenian princess who was raped by a king and later turned into a bird. To gain a better sense of Eliot’s reference, we can look at it in the context of the stanza in the Pervigilium Veneris, which reads, “She sings, we are silent. When will my spring come? Shall I find my voice when I shall be as the swallow? … Silent, I lost the muse. Return, Apollo!”. The mention of spring harkens back to the beginning of “The Waste Land”, where spring plays a major role. In the Pervigilium Veneris, Philomela attributes spring to herself, calling it “my spring”, suggesting that spring represents her own rebirth and restoration. Thus, we might interpret Eliot’s “spring” in a similar manner. Philomela’s seeking out of her voice is also interesting in terms of “The Waste Land”, which is built on fragmented dialogue and ever-changing voices. Interestingly, Philomela seems to have lost “the muse”, or divine inspiration, and in frustration she calls out to Apollo to inspire her once again. Eliot, through his biblical references and prayers, seems to be calling out to the divine, perhaps for his own inspiration as well. Another significant part of the Pervigilium Veneris is the repeated line, “Tomorrow may loveless, may lover tomorrow make love.” Through this repeating and ambiguous line, the reader can get a sense of the future, and of the contrast between lovelessness and making love in that future. The word “may” expresses possibility, but can also be interpreted as expressing a wish, or hope.
In the final stanza, this phrase shifts into, “Tomorrow let loveless, let lover tomorrow make love.” The newly introduced word, “let”, seems to acknowledge that fate is in the hands of the gods, as it is a more direct expression of desire. Ultimately this repetition and prayer falls in line with similar repetitions such as “HURRY UP PLEASE ITS TIME” in “The Waste Land”, suggesting Eliot’s intensifying attempts at communication with the divine.
-
We think of the key, each in his prison Thinking of the key, each confirms a prison Only at nightfall, aethereal rumours
While reading this stanza of “What the Thunder Said”, I instantly connected Eliot’s mention of aethereal rumours to “Appearance and Reality” by Francis Herbert Bradley. Bradley’s philosophical essay attempts to examine and explain interactions between souls. In particular, Bradley mentions ether while discussing the possibility of direct communication between souls (that is, soul-to-soul communication without the use of bodies). Bradley explains that this communication would occur by “a medium extended in space, and of course, like ‘ether,’ quite material.” Thus ether, while material, is equated with the direct impressions of one soul upon another. With this understanding of ether, we can interpret “aethereal rumours” as ones not concerned with the external environment or human bodies, but rather as spiritual messages that transcend the normal methods of bodily communication, such as the voice. However, Bradley seems to doubt the existence of this ethereal communication, and proceeds to worry, stating, “If such alterations of our bodies are the sole means which we possess for conveying what is in us, can we be sure that in the end we really have conveyed it?”. Essentially, Bradley shares his fear that humans are unable to fully represent their souls through their bodies. Interestingly, Eliot’s two previous lines seem to evoke a similar notion of distorted communication between souls. Eliot states, “We think of the key, each in his prison / Thinking of the key, each confirms a prison”. In these lines, the people’s thoughts are collective and similar, but each individual has his own prison. When regarding the word “key”, one might think of a physical key to the prison; however, I argue that the word “key” instead refers to the ethereal communication between souls discussed by Bradley. A key is defined as “a thing that provides a means of understanding something”, such as “the key to the code”, or “the key to the riddle”.
With this understanding of a key, we can interpret Eliot’s prisons as what Bradley would describe as the limits of the bodily expression of the soul. These prisons seem to be “confirmed” by the existence of this “key”, which might represent another concern: that the bodily methods of communication are only seen as limits because of the yearning for ethereal soul-to-soul communication.
-
-
drive.google.com
-
Immune-system molecules in turtles: lysozymes with antibacterial activity, and other molecules that are cathelicidins with antifungal and antibacterial activity, even more potent than the drugs ampicillin and benzylpenicillin. A small cationic protein was isolated from the Siamese crocodile (Crocodylus siamensis), which demonstrated antibacterial activity against S. typhi, E. coli, S. aureus, Staphylococcus epidermidis, K. pneumoniae, P. aeruginosa and Vibrio cholerae (Preecharram et al., 2008). These antimicrobial peptides offer potent protection for reptiles against infection as well as provide exciting opportunities in the search for new clinical or agricultural antibiotics.
-
Defensins have been described with antibacterial activity against Escherichia coli and Salmonella typhimurium, as well as antiviral activity against the Chandipura virus (Chattopadhyay et al., 2006). The first β-defensin from reptilian leukocytes was recently isolated from the European pond turtle Emys orbicularis. Known as TBD-1, the peptide demonstrated strong activity against E. coli, Listeria monocytogenes, Candida albicans and methicillin-resistant Staphylococcus aureus (Stegemann, 2009).
-
Reptiles are the only ectothermic amniotes, and therefore become a pivotal group to study in order to provide important insights into both the evolution of the immune system as well as the functioning of the immune system in an ecological setting.
-
-
www.dailymaverick.co.za
-
Clash of the Cartels: Unmasking the global drug kingpins stalking South Africa.
for - book - Clash of the Cartels: Unmasking the global drug kingpins stalking South Africa - Caryn Dolley - Colombia drug trafficking in South Africa
-
Why you don’t see it is because it’s subtle, very sophisticated and it is a massive business.
for - quote - organized crime in Cape Town
quote - organized crime in Cape Town - Andre Lincoln - Caryn Dolley - (see below) - Why you don’t see it is because it’s subtle, very sophisticated and it is a massive business. - How many restaurants and clubs on these famous streets are paying protection money to criminals? It's pretty startling - And what about construction shakedowns? 63 billion Rand of projects impacted in 2019 - https://hyp.is/Smjb3I5CEe-fXHsx-Sy8kQ/www.inclusivesociety.org.za/post/overview-of-the-construction-mafia-crisis-in-south-africa
-
If you were to go down Sea Point main road, or into town into Long Street or Kloof Street, all those restaurant or club owners contribute to organised crime regularly. Most of them, unwillingly, but they have no other option. And they have no other option because of the way organised crime works,” said Lincoln.
for - organized crime - Cape Town - hidden protection scheme - Andre Lincoln
-
for - polycrisis - organized crime - Daily Maverick article - organized crime - Cape Town - How the state colludes with SA’s underworld in hidden web of organised crime – an expert view - Victoria O’Regan - 2024, Oct 18 - book - Man Alone: Mandela’s Top Cop – Exposing South Africa’s Ceaseless Sabotage - Daily Maverick journalist Caryn Dolley - 2024 - https://viahtml.hypothes.is/proxy/https://shop.dailymaverick.co.za/product/man-alone-mandelas-top-cop-exposing-south-africas-ceaseless-sabotage/?_gl=11mkyl5s_gcl_auODI2MTMxODEuMTcyNjI0MDAwMg.._gaNzQ5NDM3NzE0LjE3MjMxODY0NzY._ga_Y7XD5FHQVG*MTcyOTM1MjgwOS4xLjAuMTcyOTM1MjgxOS41MC4wLjkyNTE5MDk2OA..
summary - This article revolves around the research of South African crime reporter Caryn Dolley on the organized web of crime in South Africa - She discusses the nexus of - trans-national drug cartels - local Cape Town gangs - South African state collusion with gangs - in her new book: Man Alone: Mandela's Top Cop - Exposing South Africa's Ceaseless Sabotage - It illustrates how on-the-ground efforts to fight crime are failing because they do not effectively address this criminal nexus - The book follows the life of retired top police investigator Andre Lincoln, whose exposé paints the deep level of criminal activity spanning government, trans-national criminal networks and local gangs - Such organized crime takes a huge toll on society and is an important contributor to the polycrisis. - Non-linear approaches are necessary to tackle this systemic problem - One possibility is a trans-national citizen-led effort
Tags
- quote - organized crime in Cape Town
- trans-national drug cartels - South Africa - Colombia - Serbia
- book - Man Alone: Mandela’s Top Cop – Exposing South Africa’s Ceaseless Sabotage - Daily Maverick journalist Caryn Dolley - 2024
- Daily Maverick article - organized crime - Cape Town - How the state colludes with SA’s underworld in hidden web of organised crime – an expert view - Victoria O’Regan - 2024, Oct 18
- construction mafia stats - South Africa
- book - Clash of the Cartels: Unmasking the global drug kingpins stalking South Africa - Caryn Dolley
- polycrisis - organized crime
- organized crime - Cape Town - hidden protection scheme - Andre Lincoln
-
-
www.inclusivesociety.org.za
-
In 2019, at least 183 infrastructure and construction projects worth more than R63-billion had been affected by the construction mafia.
for - stats - construction mafia impacts - South Africa - 2019 - R63 billion - Overview of the Construction Mafia Crisis in South Africa - Inclusive Society Institute - 2023
-
-
www.biorxiv.org
-
eLife Assessment
This important work advances our understanding of parabrachial CGRP threat function. The evidence supporting CGRP aversive outcome signaling is solid, while the evidence for cue signaling and fear behavior generation is incomplete. The work will be of interest to neuroscientists studying defensive behaviors.
-
Reviewer #1 (Public Review):
Summary
The authors asked if parabrachial CGRP neurons were only necessary for a threat alarm to promote freezing or were necessary for a threat alarm to promote a wider range of defensive behaviors, most prominently flight.
Major Strengths of Methods and Results
The authors performed careful single-unit recording and applied rigorous methodologies to optogenetically tag CGRP neurons within the PBN. Careful analyses show that single-units and the wider CGRP neuron population increases firing to a range of unconditioned stimuli. The optogenetic stimulation of experiment 2 was comparatively simpler but achieved its aim of determining the consequence of activating CGRP neurons in the absence of other stimuli. Experiment 3 used a very clever behavioral approach to reveal a setting in which both cue-evoked freezing and flight could be observed. This was done by having the unconditioned stimulus be a "robot" traveling along a circular path at a given speed. Subsequent cue presentation elicited mild flight in controls and optogenetic activation of CGRP neurons significantly boosted this flight response. This demonstrated for the first time that CGRP neuron activation does more than promote freezing. The authors conclude by demonstrating that bidirectional modulation of CGRP neuron activity bidirectionally affects freezing in a traditional fear conditioning setting and affects both freezing and flight in a setting in which the robot served as the unconditioned stimulus. Altogether, this is a very strong set of experiments that greatly expand the role of parabrachial CGRP neurons in threat alarm.
Weaknesses
In all of their conditioning studies the authors did not include a control cue: for example, a sound presented the same number of times but unrelated to US (shock or robot) presentation. This does not detract from their behavioral findings. However, it means the authors do not know whether the observed behavior is a consequence of pairing, or a behavior that would be observed to any cue played in the setting. This is particularly important for the experiments using the robot US.
The authors make claims about the contribution of CGRP neurons to freezing and fleeing behavior, however, all of the optogenetic manipulations are centered on the US presentation period. Presently, the experiments show a role for these neurons in processing aversive outcomes but show little role for these neurons in cue responding or behavior organizing. Claims of contributions to behavior should be substantiated by manipulations targeting the cue period.
Appraisal
The authors achieved their aims and have revealed a much greater role for parabrachial CGRP neurons in threat alarm.
Discussion
Understanding neural circuits for threat requires us (as a field) to examine diverse threat settings and behavioral outcomes. A commendable and rigorous aspect of this manuscript was the authors' decision to use a new behavioral paradigm and measure multiple behavioral outcomes. Indeed, this manuscript would not have been nearly as impactful had they not done that. This novel behavior was combined with excellent recording and optogenetic manipulations - a standard the field should aspire to. Studies like this are the only way that we as a field will map complete neural circuits for threat.
-
Reviewer #2 (Public Review):
-Summary of the Authors' Aims:
The authors aimed to investigate the role of calcitonin gene-related peptide (CGRP) neurons in the parabrachial nucleus (PBN) in modulating defensive behaviors in response to threats. They sought to determine whether these neurons, previously shown to be involved in passive freezing behavior, also play a role in active defensive behaviors, such as fleeing, when faced with imminent threats.
-Major Strengths and Weaknesses of the Methods and Results:
The authors utilized an innovative approach by employing a predator-like robot to create a naturalistic threat scenario. This method allowed for a detailed observation of both passive and active defensive behaviors in mice. The combination of electrophysiology, optogenetics, and behavioral analysis provided a comprehensive examination of CGRP neuron activity and its influence on defensive behaviors. The study's strengths lie in its robust methodology, clear results, and the multi-faceted approach that enhances the validity of the findings.
No notable weakness found.
-Appraisal of Aims and Results:
The authors successfully achieved their aims by demonstrating that CGRP neurons in the PBN modulate both passive and active defensive behaviors. The results clearly show that activation of these neurons enhances fear memory and promotes conditioned fleeing behavior, while inhibition reduces these responses. The study provides strong evidence supporting the hypothesis that CGRP neurons act as a comprehensive alarm system in the brain.
-Impact on the Field and Utility of Methods and Data:
This work has significant implications for the field of neuroscience, particularly in understanding the neural mechanisms underlying adaptive defensive behaviors. The innovative use of a predator-like robot to simulate naturalistic threats adds ecological validity to the findings and may inspire future studies to adopt similar approaches. The comprehensive analysis of CGRP neuron activity and its role in defensive behaviors provides valuable data that could be useful for researchers studying fear conditioning, neural circuitry, and behavior modulation.
-Additional Context:
The study builds on previous research that primarily focused on the role of CGRP neurons in passive defensive responses, such as freezing. By extending this research to include active responses, the authors have provided a more complete picture of the role of these neurons in threat detection and response. The findings highlight the versatility of CGRP neurons in modulating different types of defensive behaviors based on the perceived intensity and immediacy of threats.
Overall, this manuscript makes a significant contribution to our understanding of the neural basis of defensive behaviors and offers valuable methodological insights for future research in the field.
-
Reviewer #3 (Public Review):
Strengths:
The study used optogenetics together with in vivo electrophysiology to monitor CGRP neuron activity in response to various aversive stimuli, including robot chasing, to determine whether they encode noxious stimuli differentially. The study used an interesting conditioning paradigm to investigate the role of CGRP neurons in the PBN in both freezing and flight behaviors.
Weakness:
The major weakness of this study is that the chasing-robot threat conditioning model elicits weak unconditioned and conditioned flight responses, making it difficult to interpret the robustness of the findings. Furthermore, the conclusion that the CGRP neurons are capable of inducing flight is not substantiated by the data. No manipulations are made to influence the flight behavior of the mouse. Instead, the manipulations are designed to alter the intensity of the unconditioned stimulus.
-
-
www.biorxiv.org
-
eLife Assessment
This study presents a valuable finding on the identification of a complex consisting of NHE1, hERG1, β1 integrin and NaV1.5 on the membrane of breast cancer cells. The evidence supporting the claims of the authors is somewhat incomplete: clarification of some aspects of the experimental design and uncropped Western blot data would have strengthened the study. The work will be of interest to scientists working on breast cancer.
-
Reviewer #1 (Public Review):
This manuscript by Capitani et al. extends previous studies of ion channel expression in triple-negative breast cancer cell lines. Probing four phenotypically different breast cancer cell lines, they used co-IP and confocal immunofluorescence (IF) colocalization to reveal that beta1 integrin forms a complex with the neonatal form of the Na+ channel NaV1.5 (nNaV1.5) and the Na+/H+ antiporter NHE1, in addition to the previously reported hERG1. They used siRNA to show that silencing beta1 results in a co-depletion of hERG and NaV1.5, further supporting the conclusion that they form a complex; a complementary enhancement of Na current with increased hERG expression was also demonstrated. These data compellingly describe a complex of membrane proteins upregulated in breast cancer, which thus presents novel potential targets for treatment.
There are several concerns with the experimental approaches. How fluorescence measurements were compared and controlled among experiments was not described, and the masks drawn to define membrane expression seemed arbitrary and in some cases included large sections of cytoplasm. There are issues associated with the use of channel-blocking agents and a bifunctional small-chain antibody that are not well rationalized: why are they being used, to test what hypotheses or disrupt what processes? The extremely high concentrations of E-4031 (4000x the IC50 for block), e.g., are not expected to have selective actions, and the effects attributed to high concentrations of E-4031 altering cytoskeleton properties associated with invasiveness (and thus cancer progression) are questionable. There are numerous problems with co-IPs carried out together with knock-down, which in one case depleted the protein targeted by the primary IP antibody. Western blots (WB) were quantified by comparing treatment to control, which does not control for loading errors. The control and treated signals should be divided by the respective tubulin signals to control for loading errors; then the treated value can be compared with the control.
-
Reviewer #2 (Public Review):
The manuscript by Chiara Capitani and Annarosa Arcangeli reports the identification of a complex comprising NHE1, hERG1, β1 integrin, and NaV1.5 on the plasma membrane of breast cancer cells. The authors further investigated the mutual regulatory interactions among these proteins using Western blotting and co-immunoprecipitation assays. They also examined the downstream signaling pathways associated with this complex and assessed its impact on the malignant behavior of breast cancer cells.
Strengths
The manuscript used different breast cancer cell lines and combined Western blot, immunostaining, and electrophysiology to provide evidence for the proposed complex. Inhibitors are also used in in vitro studies to test whether channel activity is required for the development of malignant behavior in breast cancer cells.
Weaknesses
The data shown in this manuscript include the western blots that are cropped and imaged separately to draw conclusions about protein levels and changes in immunoprecipitation. These cannot be done on separate, cropped blots but must be imaged together to make these comparisons.
Antibodies used for hERG, NaV1.5 and β1 integrin must be validated to work for IP using KO or KD cell lines for the respective proteins to demonstrate specificity. The same goes for all the immunofluorescence imaging shown in the manuscript as these are all key pieces of data to support the conclusions.
-
-
www.biorxiv.org
-
eLife Assessment
SGLT2 inhibitors (SGLT2i) have assumed important roles in reducing cardiovascular risk, particularly in those with diabetes. It has become appreciated that their protective effects likely go beyond their ability to lower blood sugar levels. This research presents a novel approach to studying the SGLT2i mechanism of action, which is yet to be fully elucidated.
-
Reviewer #1 (Public Review):
The authors examined the hypothesis that plasma ApoM, which carries sphingosine-1-phosphate (S1P) and activates vascular S1P receptors to inhibit vascular leakage, is modulated by SGLT2 inhibitors (SGLTi) during endotoxemia. They also propose that this mechanism is mediated by SGLTi regulation of LRP2/megalin in the kidney and that this mechanism is critical for endotoxin-induced vascular leak and myocardial dysfunction. The hypothesis is novel and potentially exciting. However, the authors' experiments lack critical controls, lack rigor in multiple aspects, and overall do not support the conclusions.
-
Reviewer #2 (Public Review):
Apolipoprotein M (ApoM) is a plasma carrier for the vascular protective lipid mediator sphingosine 1-phosphate (S1P). The plasma levels of S1P and its chaperones ApoM and albumin rapidly decline in patients with severe sepsis, but the mechanisms for such reductions and their consequences for cardiovascular health remain elusive. In this study, Ripoll and colleagues demonstrate that the sodium-glucose co-transporter inhibitor dapagliflozin (Dapa) can preserve serum ApoM levels as well as cardiac function after LPS treatment of mice with diet-induced obesity. They further provide data to suggest that Dapa preserves serum ApoM by increasing megalin-mediated reabsorption of ApoM in renal proximal tubules and that ApoM improves vascular integrity in LPS-treated mice. These observations put forward a potential therapeutic approach to sustain vascular protective S1P signaling that could be relevant to other conditions of systemic inflammation where plasma levels of S1P decrease. However, although the authors are careful with their statements, the study falls short of directly implicating megalin in ApoM reabsorption, and of implicating ApoM/S1P depletion in LPS-induced cardiac dysfunction and the protective effects of Dapa.
The observations reported in this study are exciting and potentially of broad interest. The paper is well written and concise, and the statements made are mostly supported by the data presented. However, the mechanism proposed and implied is mostly based on circumstantial evidence, and the paper could be substantially improved by directly addressing the role of megalin in ApoM reabsorption and serum ApoM and S1P levels, and the importance of ApoM for the preservation of cardiac function during endotoxemia. Some observations that are not necessarily in line with the proposed model should also be discussed.
The authors show that Dapa preserves serum ApoM and cardiac function in LPS-treated obese mice. However, the evidence they provide to suggest that ApoM may be implicated in the protective effect of Dapa on cardiac function is indirect. Direct evidence could be sought by addressing the effect of Dapa on cardiac function in LPS-treated ApoM-deficient and littermate control mice (with DIO if necessary).
The authors also suggest that higher ApoM levels in mice treated with Dapa and LPS reflect increased megalin-mediated ApoM reabsorption and that this preserves S1PR signaling. This could be addressed more directly by assessing the clearance of labelled ApoM, by addressing the impact of megalin inhibition or deficiency on ApoM clearance in this context, and by measuring S1P as well as ApoM in serum samples.
Methods: More details should be provided in the manuscript on how ApoM-deficient and transgenic mice were generated, on sex and strain background, and on whether or not littermate controls were used. For intravital microscopy, more precision is needed on how vessel borders were outlined and whether this was done with or without regard for FITC-dextran. Please also specify the type of vessel chosen and the considerations made with regard to blood flow and patency of the vessels analyzed. For statistical analyses, data from each mouse should be pooled before performing statistical comparisons. The criteria used for choice of test should be outlined, as different statistical tests are used for similar datasets. For all data, please be consistent in the use of post-tests and in the presentation of comparisons. In other words, if the authors choose to only display test results for groups that are significantly different, this should be done in all cases. And if comparisons are made between all groups, this should be done in all cases for similar sets of data.
-
Reviewer #3 (Public Review):
The authors have performed well-designed experiments that elucidate the protective role of Dapa in an LPS model of sepsis. This model shows that Dapa works, in part, by increasing expression of the receptor LRP2 in the kidney, which maintains circulating ApoM levels. ApoM binds to S1P, which then interacts with the S1P receptor, stimulating cardiac function and epithelial and endothelial barrier function, thereby maintaining intravascular volume and cardiac output in the setting of severe inflammation. The authors used many experimental models, including transgenic mice, as well as several rigorous and reproducible techniques to measure the relevant parameters of cardiac, renal, vascular, and immune function. Furthermore, they employ a useful inhibitor of S1P function to show pharmacologically the essential role for this agonist in most but not all of the benefits of Dapa. A strength of the paper is the identification of the pathway responsible for the cardioprotective effects of SGLT2is, which may yield additional therapeutic targets. There are some weaknesses in the paper, such as studying only male mice and the absence of a power analysis to justify the number of animals used throughout the experimentation. Overall, the paper should have a significant impact on the scientific community because the SGLT2i drugs are likely to find many uses in inflammatory diseases and metabolic diseases. This paper provides support for an important mechanism by which they work in conditions of severe sepsis and hemodynamic compromise.
-
-
www.biorxiv.org
-
eLife Assessment
This study provides a valuable new perspective on how motor learning occurring in one state generalizes to new states (for example, a different limb posture). The proposed model improves upon previous theories in its ability to predict patterns of generalization, but evidence supporting this specific proposed model over possible alternatives is incomplete. The newly proposed theory appears promising but would be more convincing if its conceptual and theoretical basis were clearer and more rigorously derived.
-
Reviewer #1 (Public Review):
This paper proposes a novel framework for explaining patterns of generalization of force field learning to novel limb configurations. The paper considers three potential coordinate systems: Cartesian, joint-based, and object-based. The authors propose a model in which the forces predicted under these different coordinate frames are combined according to the expected variability of produced forces. The authors show, across a range of changes in arm configurations, that the generalization of a specific force field is quite well accounted for by the model.
The paper is well-written and the experimental data are very clear. The patterns of generalization exhibited by participants - the key aspect of the behavior that the model seeks to explain - are clear and consistent across participants. The paper clearly illustrates the importance of considering multiple coordinate frames for generalization, building on previous work by Berniker and colleagues (JNeurophys, 2014). The specific model proposed in this paper is parsimonious, but there remain a number of questions about its conceptual premises and the extent to which its predictions improve upon alternative models.
A major concern is with the model's premise. It is loosely inspired by cue integration theory but is really proposed in a fairly ad hoc manner, and not really concretely founded on firm underlying principles. It's by no means clear that the logic from cue integration can be extrapolated to the case of combining different possible patterns of generalization. I think there may in fact be a fundamental problem in treating this control problem as a cue-integration problem. In classic cue integration theory, the various cues are assumed to be independent observations of a single underlying variable. In this generalization setting, however, the different generalization patterns are NOT independent; if one is true, then the others must inevitably not be. For this reason, I don't believe that the proposed model can really be thought of as a normative or rational model (hence why I describe it as 'ad hoc'). That's not to say it may not ultimately be correct, but I think the conceptual justification for the model needs to be laid out much more clearly, rather than simply by alluding to cue-integration theory and using terms like 'reliability' throughout.
A more rational model might be based on Bayesian decision theory. Under such a model, the motor system would select motor commands that minimize some expected loss, averaging over the various possible underlying 'true' coordinate systems in which to generalize. It's not entirely clear without developing the theory a bit exactly how the proposed noise-based theory might deviate from such a Bayesian model. But the paper should more clearly explain the principles/assumptions of the proposed noise-based model and should emphasize how the model parallels (or deviates from) Bayesian-decision-theory-type models.
Another significant weakness is that it's not clear how closely the weighting of the different coordinate frames needs to match the model predictions in order to recover the observed generalization patterns. Given that the weighting for a given movement direction is over-parametrized (i.e., three variable weights, allowing for decay, predicting a single observed force level), it seems that a broad range of models could generate a reasonable prediction. It would be helpful to compare the predictions using the weighting suggested by the model with the predictions using alternative weightings, e.g. a uniform weighting, or the weighting for a different posture. In fact, Fig. 7 shows that uniform weighting accounts for the data just as well as the noise-based model in which the weighting varies substantially across directions. A more comprehensive analysis comparing the proposed noise-based weightings to alternative weightings would be helpful to more convincingly argue for the specificity of the noise-based predictions being necessary. The analysis in the appendix was not clearly described; it seemed to compare various potential fitted mixtures of coordinate frames, but did not compare these to the noise-based model predictions.
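For readers unfamiliar with the reliability-weighting scheme under discussion, a minimal sketch of inverse-variance weighting in general (not the authors' actual model or code; the function name and all numbers below are hypothetical) is:

```python
import numpy as np

def combine_predictions(forces, variances):
    """Combine per-frame force predictions by inverse-variance
    (reliability) weighting: w_i = (1/sigma_i^2) / sum_j (1/sigma_j^2)."""
    forces = np.asarray(forces, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)  # reliability = 1/sigma^2
    w /= w.sum()                                  # normalize weights
    return w @ forces                             # weighted sum

# Example: Cartesian, joint, and object frames predict different forces
# for one movement direction; the least noisy frame dominates.
f = combine_predictions([1.0, 0.5, -0.2], [0.1, 0.4, 0.4])
```

In this example the first frame's prediction carries two thirds of the weight because its predicted variance is four times smaller than the others'.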
-
Reviewer #2 (Public Review):
Leib & Franklin assessed how the adaptation of intersegmental dynamics of the arm generalizes to changes in different factors: areas of extrinsic space, limb configurations, and 'object-based' coordinates. Participants reached in many different directions around 360°, adapting to velocity-dependent curl fields that varied depending on the reach angle. This learning was measured via the pattern of forces expressed upon the channel wall of "error clamps" that were randomly sampled from each of these different directions. The authors employed a clever method to predict how this pattern of forces should change if the set of targets was moved around the workspace. Some sets of locations resulted in a large change in joint angles or object-based coordinates, but Cartesian coordinates were always the same. Across three separate experiments, the observed shifts in the generalized force pattern never corresponded to a change that was made relative to any one reference frame. Instead, the authors found that the observed pattern of forces could be explained by a weighted combination of the change in Cartesian, joint, and object-based coordinates across test and training contexts.
In general, I believe the authors make a good argument for this specific mixed weighting of different contexts. I have a few questions that I hope are easily addressed.
Movements show different biases relative to the reach direction. Although very similar across people, this function of biases shifts when the arm is moved around the workspace (Ghilardi, Gordon, and Ghez, 1995). The origin of these biases is thought to arise from several factors that would change across the different test and training workspaces employed here (Vindras & Viviani, 2005). My concern is that the baseline biases in these different contexts are different, and that the observed change in the force pattern across contexts is not a function of generalization but rather of a change in underlying biases. Baseline force channel measurements were taken in the different workspace locations and conditions, so these could be used to show whether such biases are meaningfully affecting the results.
Experiment 3, Test 1 has data that seem the worst fit with the overall story. I thought this might be an issue, but this is also the test set for a potentially awkwardly long arm. My understanding of the object-based coordinate system is that it is primarily a function of the wrist angle, or perceived angle, so I am a little confused about why the length of this stick is also different across the conditions instead of just a different angle. Could the length be why these data look a little odd?
The manuscript is written and organized in a way that focuses heavily on the noise element of the model. Other than it being reasonable to add noise to a model, it's not clear to me that the noise is adding anything specific. It seems like the model makes predictions based on how many specific components have been rotated in the different test conditions. I fear I'm just being dense, but it would be helpful to clarify whether the noise itself (and inverse variance estimation) are critical to why the model weights each reference frame how it does or whether this is just a method for scaling the weight by how much the joints or whatever have changed. It seems clear that this noise model is better than weighting by energy and smoothness.
Are there any force profiles for individual directions that are predicted to change shape substantially across some of these assorted changes in training and test locations (rather than merely being scaled)? If so, this might provide another test of the hypotheses.
I don't believe the decay factor that was used to scale the test functions was specified in the text, although I may have just missed this. It would be a good idea to state what this factor is where relevant in the text.
-
Reviewer #3 (Public Review):
The authors propose the minimum variance principle for the memory representation, in addition to two alternative theories based on minimum energy and maximum smoothness. The strength of this paper is the match between the predictions computed from the explicit equation and the behavioral data taken in different conditions. The idea of weighting multiple coordinate systems is novel and is also able to reconcile a debate in the previous literature.
The weakness is that although each model is based on an optimization principle, the derivation process is not written in the methods section. The authors do not describe how these weighting factors can be derived from these computational principles. Thus, it is not clear whether these weighting factors follow from these theories or are merely ad hoc. If the authors argue that this is the result of the minimum variance principle, they should show how these weighting factors are derived as a result of the optimization process that minimizes these cost functions.
In addition, I am concerned that the proposed model can cancel the properties of the coordinate system through the predicted variance, so that it works for any coordinate system, even one that is not used in the human brain. When the applied force is given in Cartesian coordinates, the directionality of the generalization of the force-field memory is characterized by the kinematic relationship (Jacobian) between the Cartesian coordinate and the coordinate of interest (Cartesian, joint, or object), as shown in Equation 3. At the same time, when a displacement (epsilon) is considered in one space and the corresponding displacements are linked by kinematic equations (e.g., joint displacement and hand displacement of the two-joint arm in this paper), the variances generated in the different coordinate systems are also linked to each other by the Jacobian. Thus, how a small noise in a certain coordinate system generates hand force noise (sigma_c, sigma_j, sigma_o) is also characterized by the kinematics (Jacobian). Consequently, when the predicted force field (F_c, F_j, F_o) is divided by the variance (F_c/sigma_c^2, F_j/sigma_j^2, F_o/sigma_o^2), the directionality of the generalization force, which is characterized by the Jacobian, is canceled by the directionality of the sigmas, which is also characterized by the Jacobian. As can be read out from Fig*D and E top, the weight in E-top for each coordinate system is always the inverse of the shift of force from the test force, by which the directionality of the generalization is always canceled. Once this directionality is canceled, no matter how the weighted sum is computed, it can replicate the memorized force. Thus, this model always works to replicate the test force no matter which coordinate system is assumed, and I am therefore suspicious of the falsifiability of this computational model: it is always true no matter which coordinate system is assumed.
Even if they used, for instance, the robot's coordinate system, which is directly linked to the participant's hand by a kinematic equation (Jacobian), they could replicate this result. But in that case, the model would be meaningless. The falsifiability of this model is not explicitly addressed.
-
-
www.medrxiv.org
-
eLife Assessment
This important work advances our understanding of factors influencing early childhood development. The large sample size and methodology applied make the findings of this study convincing; however, support for some of the claims made by the authors is incomplete. The work will be of interest to researchers in developmental science and early childhood pediatrics.
-
Reviewer #1 (Public Review):
Padilha et al. aimed to find prospective metabolite biomarkers in serum of children aged 6-59 months that were indicative of neurodevelopmental outcomes. The authors leveraged data and samples from the cross-sectional Brazilian National Survey on Child Nutrition (ENANI-2019), and an untargeted multisegment injection-capillary electrophoresis-mass spectrometry (MSI-CE-MS) approach was used to measure metabolites in serum samples (n=5004), which were identified via a large library of standards. After correlating the metabolite levels against the developmental quotient (DQ), or the degree to which age-appropriate developmental milestones were achieved as evaluated by the Survey of Well-being of Young Children, serum concentrations of phenylacetylglutamine (PAG), cresol sulfate (CS), hippuric acid (HA) and trimethylamine-N-oxide (TMAO) were significantly negatively associated with DQ. Examination of the covariates revealed that the negative associations of PAG, HA, TMAO and valine (Val) with DQ were specific to younger children (-1 SD or 19 months old), whereas creatinine (Crtn) and methylhistidine (MeHis) had significant associations with DQ that changed direction with age (negative at -1 SD or 19 months old, and positive at +1 SD or 49 months old). Further, mediation analysis demonstrated that PAG was a significant mediator for the relationship of delivery mode, child's diet quality and child fiber intake with DQ. HA and TMAO were additional significant mediators of the relationship of child fiber intake with DQ.
Strengths of this study include the large cohort size and a study design allowing for sampling at multiple time points along with neurodevelopmental assessment and a relatively detailed collection of potential confounding factors including diet. The untargeted metabolomics approach was also robust and comprehensive, allowing for level 1 identification of a wide breadth of potential biomarkers. Given their methodology, the authors should be able to achieve their aim of identifying candidate serum biomarkers of neurodevelopment for early childhood. The results of this work would be of broad interest to researchers interested in understanding the biological underpinnings of development and also for tracking development in pediatric populations, as it provides insight into putative mechanisms and targets from a relevant human cohort that can be probed in future studies. Such putative mechanisms and targets are currently lacking in the field due to the challenges of conducting these kinds of studies, so this work is important.
However, in the manuscript's current state, the presentation and analysis of data impede the reader from fully understanding and interpreting the study's findings. Particularly, the handling of confounding variables is incomplete. There is a different set of confounders listed in Table 1 versus Supplementary Table 1 versus Methods section Covariates versus Figure 4. For example, Region is listed in Supplementary Table 1 but not in Table 1, and Mode of Delivery is listed in Table 1 but not in Supplementary Table 1. Many factors are listed in Figure 4 that aren't mentioned anywhere else in the paper, such as gestational age at birth or maternal pre-pregnancy obesity.
The authors utilize the directed acyclic graph (DAG) in Figure 4 to justify the further investigation of certain covariates over others. However, the lack of inclusion of the microbiome in the DAG, especially considering that most of the study findings were microbial-derived metabolite biomarkers, appears to be a fundamental flaw. Sanitation and micronutrients are proposed by the authors to have no effect on the host metabolome, yet sanitation and micronutrients have both been demonstrated in the literature to affect microbiome composition, which can in turn affect the host metabolome.
Additionally, the authors emphasized the following as part of the study selection criteria:
"Due to the costs involved in the metabolome analysis, it was necessary to further reduce the sample size. Then, samples were stratified by age groups (6 to 11, 12 to 23, and 24 to 59 months) and health conditions related to iron metabolism, such as anemia and nutrient deficiencies. The selection process aimed to represent diverse health statuses, including those with no conditions, with specific deficiencies, or with combinations of conditions. Ultimately, through a randomized process that ensured a balanced representation across these groups, a total of 5,004 children were selected for the final sample (Figure 1)."
Therefore, anemia and nutrient deficiencies are assumed by the reader to be important covariates, yet, the data on the final distribution of these covariates in the study cohort is not presented, nor are these covariates examined further.
The inclusion of specific covariates in Table 1, Supplementary Table 1, the statistical models, and the mediation analysis is thus currently biased as it is not well justified.
Finally, it is unclear what the partial least squares regression adds to the paper, other than to discard potentially interesting metabolites found by the initial correlation analysis.
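For readers unfamiliar with the mediation analysis discussed above, the standard product-of-coefficients approach with a percentile bootstrap can be sketched as follows. This is a generic illustration of the method, not the authors' code; the simulated data and all effect sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data under an assumed simple mediation model (all effect
# sizes hypothetical): exposure (e.g., fiber intake) -> metabolite
# (mediator) -> developmental quotient (outcome).
n = 500
exposure = rng.normal(size=n)
mediator = 0.5 * exposure + rng.normal(size=n)                    # path a
outcome = -0.4 * mediator + 0.1 * exposure + rng.normal(size=n)   # paths b, c'

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate a*b of the indirect effect."""
    a = np.polyfit(x, m, 1)[0]                    # slope of m ~ x
    X = np.column_stack([m, x, np.ones_like(x)])  # y ~ m + x + intercept
    b = np.linalg.lstsq(X, y, rcond=None)[0][0]   # coefficient on m
    return a * b

est = indirect_effect(exposure, mediator, outcome)

# Percentile bootstrap confidence interval (the paper reports 5,000
# iterations; fewer here to keep the sketch fast).
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(exposure[idx], mediator[idx], outcome[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
```

With these simulated effect sizes the true indirect effect is 0.5 × (-0.4) = -0.2, and the bootstrap interval indicates whether it is distinguishable from zero.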
-
Reviewer #2 (Public Review):
A strength of the work lies in the number of children Padilha et al. were able to assess (5,004 children aged 6-59 months) and in the extensive screening that the Authors performed for each participant. This type of large-scale study is uncommon in low-to-middle-income countries such as Brazil.
The Authors employ several approaches to narrow down the number of potentially causally associated metabolites.
Could the Authors justify on what basis the minimum dietary diversity score was dichotomized? Were sensitivity analyses undertaken to assess the effect of this dichotomization on the associations reported by the article? Consumption of each food group may have a differential effect that is obscured by this dichotomization.
Could the Authors specify the statistical power associated with each analysis?
Could the Authors describe in detail which metric they used to measure how predictive the PLSR models are, and how they determined what the "optimal" number of components was?
The Authors use directed acyclic graphs (DAG) to identify confounding variables of the association between metabolites and DQ. Could the dataset generated by the Authors have been used instead? Not all confounding variables identified in the literature may be relevant to the dataset generated by the Authors.
Were the systematic reviews or meta-analyses used in the DAG performed by the Authors, or were they based on previous studies? If the latter, more information about the methodology employed and the studies included should be provided by the Authors.
Approximately 72% of children included in the analyses lived in households with a monthly income superior to the Brazilian minimum wage. The cohort is also biased towards households with a higher level of education. Both of these measures correlate with developmental quotient.
Could the Authors discuss how this may have affected their results and how generalizable they are?
Further to this, could the Authors describe how inequalities in access to care in the Brazilian population may have affected their results? Could they have included a measure of this possible discrepancy in their analyses?
The Authors state that the results of their study may be used to track children at risk for developmental delays. Could they discuss the potential for influencing policies and guidelines to address delayed development due to malnutrition and/or limited access to certain essential foods?
-
Reviewer #3 (Public Review):
The ENANI-2019 study provides valuable insights into child nutrition, development, and metabolomics in Brazil, highlighting both challenges and opportunities for improving child health outcomes through targeted interventions and further research.
Strengths of the methods and results:
(1) The study utilizes data from the ENANI-2019 cohort, which already existed. This cohort choice allows for longitudinal assessments and exploration of associations between metabolites and developmental outcomes. In addition, there was conservation of resources, which are scanty in all settings in the current scenario.
(2) The study aims to investigate the relationship between circulating metabolites (exposure) and early childhood development (outcome), specifically developmental quotient (DQ). The objectives are clearly stated, which facilitates focused research questions and hypotheses. The population studied is clearly described.
(3) The study accessed a large number of children under five years, with blood collected from a final sample size of 5,004 children. The exclusion of infants under six months due to venipuncture challenges and lack of reference values highlights practical considerations in research design.
The study sample reflects a diverse range of children in terms of age, sex distribution, weight status, maternal education, and monthly family income. This diversity enhances the generalizability of findings across different sociodemographic groups within Brazil.
(4) The study uses standardized measures (e.g., DQ assessments) and chronological age. Confounding variables, such as the child's age, diet quality, and nutritional status, are carefully considered and incorporated into analyses through a Directed Acyclic Graph (DAG). The mean DQ of 0.98 indicates overall developmental norms among the studied children, with variations noted across demographic factors such as age, region, and maternal education. The prevalence of Minimum Dietary Diversity (MDD) being met by 59.3% of children underscores dietary patterns and their potential impact on health outcomes.
The association between nutritional status (weight-for-height z-scores) and developmental outcomes (DQ) provides insights into the interplay between nutrition and child development.
The study identified key metabolites associated with developmental quotient (DQ):
Component 1: Branched-chain amino acids (Leucine, Isoleucine, Valine).
Component 2: Uremic toxins (Cresol sulfate, Phenylacetylglutamine).
Component 3: Betaine and amino acids (Glutamine, Asparagine).
The study focused on several serum metabolites such as PAG (phenylacetylglutamine), CS (p-cresyl sulfate), HA (hippuric acid), TMAO (trimethylamine-N-oxide), MeHis (methylhistidine), and Crtn (creatinine). These metabolites are implicated in various metabolic pathways linked to gut microbiota activity, amino acid metabolism, and dietary factors.
These metabolites explained a significant portion of both metabolite variance (39.8%) and DQ variance (4.3%). The study suggests that these metabolites can be used as proxy measures of the gut microbiome in children.
(5) The use of partial least squares regression (PLSR) with cross-validation (80% training, 20% testing) is a robust approach to identify metabolites predictive of DQ, which minimizes overfitting. This model allows outliers to remain outliers, for transparency.
The Directed Acyclic Graph (DAG) identifies and adjusts for confounding variables (e.g., the child's diet quality and nutritional status) and strengthens the validity of findings by controlling for potential biases.
Developmental and sex differences were studied by testing interactions with the child's age and sex.
Mediation analysis exploring metabolites as potential mediators provides insights into underlying pathways linking exposures (e.g., diet, microbiome) with DQ.
The use of Benjamini-Hochberg correction for multiple comparisons and bootstrap tests (5,000 iterations) enhances the reliability of the results by controlling false discovery rates and assessing significance robustly.
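The Benjamini-Hochberg step-up procedure mentioned here can be sketched as follows. This is a generic illustration of the correction, not the authors' code; the example p-values are hypothetical.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR procedure: returns a boolean mask of
    rejected (significant) hypotheses at false-discovery rate alpha."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest rank k with p_(k) <= (k/m) * alpha ...
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # index of largest passing rank
        reject[order[: k + 1]] = True     # ... and reject all p-values up to it
    return reject

# Example: four hypothetical metabolite-association p-values
sig = benjamini_hochberg([0.001, 0.01, 0.03, 0.2], alpha=0.05)
```

On this example, the three smallest p-values are declared significant and 0.2 is not, because 0.03 ≤ (3/4) × 0.05 while 0.2 > 0.05.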
Significant correlations between serum metabolites and DQ, particularly negative associations with certain metabolites like PAG and CS, suggest potential biomarkers or pathways influencing developmental outcomes. Notably, these associations varied with age, suggesting different metabolic impacts during early childhood development.
Weaknesses:
(1) The data collected were incomplete, especially those related to breastfeeding history and birth weight. These are mentioned in the limitations of the study, but they might have been potential confounders or even factors leading to the particular metabolite state identified in the population.
(2) Tests other than mediation analysis might have been used to ensure the reliability and robustness of the data. Describing how the data were processed, the data-cleaning methods, how outliers were handled, and sensitivity analyses would ensure the robustness of the findings.
(3) The generalizability of the data is not sound, especially considering that the children mostly belonged to a higher socioeconomic group in Brazil, with mother or caregiver education above a certain level. Comparative studies with children from other socioeconomic groups and other cohorts might have been useful. Consideration of sample size adequacy and power analysis might have helped in generalizing the findings.
(4) Caution is needed in interpreting causality from these data because of the nature of the study design. Discussing alternative explanations and potential confounding factors in more depth could strengthen the conclusions.
Appraisal
(1) The aim of the study was to identify associations between children's serum metabolome and early childhood development. This aim was met, and the results support the conclusions.
Impact of the work on the field
(1) Unless the gut microbiome of children in this age group is examined directly, through gut bacteria or gastrointestinal examination, the causal effect of the gut metabolome on early childhood development cannot be established with certainty. Because this may not be possible in every situation, proxy methods such as the one elucidated here might be useful, considering the risk-benefit ratio.
(2) More research is needed on this theme through longitudinal studies to validate these findings and explore underlying pathways involving gut-brain interactions and metabolic dysregulation.
Other readings: Readers are advised to read research from other countries and in other languages to understand the connection between the gut microbiome, metabolite spectra, and child development, and to study the effect of these factors on children's mental development.
Readers might consider the following questions:
(1) Should investigators study families through direct observation of diet and other factors to look for a connection between food intake, the gut microbiome, and child development?
(2) Does the mother's gut microbiome influence the child's microbiome? Can the mother's or caregiver's microbiome influence early childhood development?
(3) Is the developmental quotient enough to study early childhood development? Is it comprehensive enough?
-
-
www.biorxiv.org
-
eLife Assessment
This important work addresses the role of Marcks/Markcksl during spinal cord development and regeneration. The study is exceptional in combining molecular approaches to understand the mechanisms of tissue regeneration with behavioural assays, which is not commonly employed in the field. The data presented is convincing and comprehensive, using many complementary methodologies.
-
Reviewer #1 (Public Review):
In this manuscript, El Amri et al. are exploring the role of Marcks and Marcksl1 proteins during spinal cord development and regeneration in Xenopus. Using two different techniques to knock down their expression, they argue that these proteins are important for neural progenitor proliferation and neurite outgrowth in both contexts. Finally, using a pharmacological approach, they suggest that Marcks and Marcksl1 work by modulating the activity of PLD and the levels of PIP2, whilst PKC could modulate Marcks activity.<br /> The strength of this manuscript resides in the ability of the authors to knock down the expression of 4 different genes using 2 different methods to assess the role of this protein family during early development and regeneration at the late tadpole stage. This has always been a limiting factor in the field, as the tools to perform conditional knockouts in Xenopus are very limited. However, this will not really be applicable to essential genes, as it relies on the general knockdown of protein expression. The generation of antibodies able to detect endogenous Marcks/Marcksl1 is also a powerful tool to assess the extent to which the expression of these proteins is down-regulated.<br /> Whilst there is a great amount of data provided in this manuscript and there is strong evidence to show that Marcks proteins are important for spinal cord development and regeneration, their roles in both contexts are not explored fully. The description of the effect of knocking down Marcks/Marcksl1 on neurons and progenitors is rather superficial, and the evidence for the underlying mechanism underpinning their roles is not very convincing.
-
Reviewer #2 (Public Review):
M. El Amri et al. investigated the functions of Marcks and Marcks-like 1 during spinal cord (SC) development and regeneration in Xenopus laevis. The authors rigorously performed loss-of-function experiments with morpholino knock-down and CRISPR knock-out, combined with rescue experiments, in the developing spinal cord in embryos and in regenerating SC at the tadpole stage.
For the assays in the developing spinal cord, a unilateral approach (knock-down/out on only one side of the embryo) allowed the authors to assess the gene functions by directly comparing one side (e.g. mutated SC) to the other (e.g. wild-type SC on the other side). For the assays in regenerating SC, the authors microinjected CRISPR reagents into 1-cell stage embryos. When the embryos (F0 crispants) grew up to tadpoles (stage 50), the SC was transected. They then assessed neurite outgrowth and progenitor cell proliferation. The validation of the phenotypes was mostly based on the quantification of immunostaining images (neurite outgrowth: acetylated tubulin; neural progenitors: sox2, sox3; proliferation: EdU, PH3), which are simple but robust enough to support their conclusions. In both SC development and regeneration, the authors found that Marcks and Marcksl1 were necessary for neurite outgrowth and neural progenitor cell proliferation.<br /> The authors performed rescue experiments on morpholino knock-down and CRISPR knock-out conditions, by Marcks and Marcksl1 mRNA injection for SC development and pharmacological treatments for SC development and regeneration. The unilateral mRNA injection rescued the loss-of-function phenotype in the developing SC. To explore the signalling role of these molecules, they rescued the loss-of-function animals with pharmacological reagents. They used S1P (PLD activator), FIPI (PLD inhibitor), NMI (PIP2 synthesis activator) and ISA-2011B (PIP2 synthesis inhibitor). The authors found that the activator treatments rescued neurite outgrowth and progenitor cell proliferation in loss-of-function conditions. From these results, the authors proposed that PIP2 and PLD are the mediators of Marcks and Marcksl1 for neurite outgrowth and progenitor cell proliferation during SC development and regeneration. The results of the rescue experiments are particularly important to assess gene functions in loss-of-function assays; therefore, the conclusions are solid.
In addition, they performed gain-of-function assays by unilateral Marcks or Marcksl1 mRNA injection, showing that the injected side of the SC had more neurite outgrowth and proliferative progenitors. These conclusions are consistent with the loss-of-function phenotypes and the rescue results. Importantly, the authors showed the linkage of the phenotype and functional recovery by behavioural testing, which clearly showed that the crispants with SC injury swam less distance than wild types with SC injury at 10 days post-surgery.<br /> Prior to the functional assays, the authors analysed the expression pattern of the genes by in situ hybridization and immunostaining in the developing embryo and regenerating SC. They confirmed that the amount of protein expression was significantly reduced in the loss-of-function samples by immunostaining with the specific antibodies that they made for Marcks and Marcksl1. Although the expression patterns during embryogenesis are mostly known from previous works, the data provided appropriate information to readers about the expression and showed the efficiency of the knock-out as well.
MARCKS family genes have been known to be expressed in the nervous system. However, few studies focus on their function in nerves. This research introduced these genes as new players during SC development and regeneration. These findings could attract broader interest from people in the nervous-system disease modelling and medical fields. Although it is a typical requirement for loss-of-function assays in Xenopus laevis, I believe that the efficient knock-out of four genes by CRISPR/Cas9 derived from their dedication to designing, testing and validating the gRNAs, and is exemplary.
Weaknesses:<br /> 1) Why did the authors choose Marcks and Marcksl1?<br /> The authors mentioned that these genes were identified in a recent proteomic analysis comparing SC-regenerative tadpoles and non-regenerative froglets (Line (L) 54-57). However, although it seems the proteomic analysis was their own dataset, the authors did not mention any details of how promising genes were selected for the functional assays (this article). In the proteomic analysis there must be other candidate genes that might be more likely factors related to SC development and regeneration based on previous studies, but it was unclear what the criteria for selecting Marcks and Marcksl1 were.
2) Gene knock-out experiments with F0 crispants<br /> The authors described that they designed and tested 18 sgRNAs to find the most efficient and consistent gRNAs (L191-195). However, this cannot guarantee the same phenotypes in practice, due to, for example, different injection timing, different strains of Xenopus laevis, etc. Although the authors mentioned the concern of mosaicism themselves (L180-181, L289-292) and the immunostaining results nicely showed uniformly reduced Marcks and Marcksl1 expression in the crispants, they did not address this issue explicitly.
3) Limitations of pharmacological compound rescue<br /> In the methods part, the authors describe that they performed titration experiments for the drugs (L702-704), which is a minimal requirement for this type of assay. However, it is known that even a well-characterized drug, if used at different concentrations, can target different molecules (Gujral TS et al., 2014 PNAS). Therefore, it is difficult to eliminate the possibility of side effects and off-target effects by testing only a few compounds.
-
Reviewer #3 (Public Review):
El Amri et al conducted an analysis of the function of marcks and marcksl in Xenopus spinal cord development and regeneration. Their study revealed that these proteins are crucial for neurite outgrowth and cell proliferation, including of Sox2+ progenitors. Furthermore, they suggested these genes may act through the PLD pathway. The study is well executed, with appropriate controls and validation experiments, and distinguishes itself from typical regeneration research by including behavioural assays. The manuscript is commendable for its quantifications, literature referencing, careful conclusions, and detailed methods. The conclusions are well supported by the experiments performed in this study. Overall, this manuscript contributes to the field of spinal cord regeneration and sets a good example for future research in this area.
-
-
learn.cantrill.io
-
Welcome back and in this demo lesson you're going to learn how to install the Docker engine inside an EC2 instance and then use that to create a Docker image.
Now this Docker image is going to be running a simple application and we'll be using this Docker image later in this section of the course to demonstrate the Elastic Container service.
So this is going to be a really useful demo where you're going to gain the experience of how to create a Docker image.
Now there are a few things that you need to do before we get started.
First, as always, make sure that you're logged in to the IAM admin user of the general AWS account, and you'll also need the Northern Virginia region selected.
Now attached to this lesson is a one-click deployment link so go ahead and click that now.
This is going to deploy an EC2 instance with some files pre downloaded that you'll use during the demo lesson.
Now everything's pre-configured you just need to check this box at the bottom and click on create stack.
Now that's going to take a few minutes to create and we need this to be in a create complete state.
So go ahead and pause the video wait for your stack to move into create complete and then we're good to continue.
So now this stack is in a create complete state and we're good to continue.
Now if you're following along with this demo within your own environment there's another link attached to this lesson called the lesson commands document and that will include all of the commands that you'll need to type as you move through the demo.
Now I'm a fan of typing all commands in manually because I personally think that it helps you learn, but if you're the type of person who has a habit of making mistakes when typing long commands out, then you can copy and paste from this document to avoid any typos.
Now one final thing before we finish at the end of this demo lesson you'll have the opportunity to upload the Docker image that you create to Docker Hub.
If you're going to do that then you should pre sign up for a Docker Hub account if you don't already have one and the link for this is included attached to this lesson.
If you already have a Docker Hub account then you're good to continue.
Now at this point what we need to do is to click on the resources tab of this stack and locate the public EC2 resource.
Now this is a normal EC2 instance that's been provisioned on your behalf and it has some files which have been pre downloaded to it.
So just go ahead and click on the physical ID next to public EC2 and that will move you to the EC2 console.
Now this machine is set up and ready to connect to and I've configured it so that we can connect to it using Session Manager and this avoids the need to use SSH keys.
So to do that just right-click and then select connect.
You need to pick Session Manager from the tabs across the top here and then just click on connect.
Now that will take a few minutes but once connected you should see this prompt.
So it should say sh-, then a version number, and then a dollar sign.
Now the first thing that we need to do as part of this demo lesson is to install the Docker engine.
The Docker engine is the thing that allows Docker containers to run on this EC2 instance.
So we need to install the Docker engine package and we'll do that using this command.
So we're using sudo to get admin permissions, then the package manager dnf, then install, then docker.
So go ahead and run that and that will begin the installation of Docker.
It might take a few moments to complete it might have to download some prerequisites and you might have to answer that you're okay with the install.
So press Y for yes and then press enter.
Now we need to wait a few moments for this install process to complete and once it has completed then we need to start the Docker service and we do that using this command.
So sudo again to get admin permissions, then service, then the docker service, and then start.
So type that and press enter and that starts the Docker service.
Now I'm going to type clear and then press enter to make this easier to see and now we need to test that we can interact with the Docker engine.
So the most simple way to do that is to type Docker space and then PS and press enter.
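For reference, the three commands just described look like this in the terminal (a sketch; the package name matches Amazon Linux 2023, where dnf is the package manager):

```shell
# Install the Docker engine package
sudo dnf install docker

# Start the Docker service
sudo service docker start

# Try to talk to the Docker engine; at this stage this errors,
# because the current user doesn't yet have permission to use it
docker ps
```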
Now you're going to get an error.
This error is because not every user of this EC2 instance has the permissions to interact with the Docker engine.
We need to grant permissions for this user or any other users of this EC2 instance to be able to interact with the Docker engine and we're going to do that by adding these users to a group and we do that using this command.
So sudo for admin permissions, then usermod, then -a to append and -G for group, then the docker group, and then ec2-user.
Now that will allow a local user of this system, specifically ec2-user, to be able to interact with the Docker engine.
Okay, so I've cleared the screen to make it slightly easier to see, now that we've given ec2-user the ability to interact with Docker.
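The group change just described can be sketched as follows (note the group flag is a capital -G):

```shell
# Append (-a) ec2-user to the supplementary docker group (-G),
# granting it permission to interact with the Docker engine
sudo usermod -a -G docker ec2-user
```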
So the next thing is we need to log out and log back in of this instance.
So I'm going to go ahead and type exit just to disconnect from session manager and then click on close and then I'm going to reconnect to this instance and you need to do the same.
So connect back in to this EC2 instance.
Now once you're connected back into this EC2 instance, we need to run another command which switches us to ec2-user, so it essentially logs us in as ec2-user.
So that's this command, and the result of this will be the same as if you had directly logged in as ec2-user.
Now the reason we're doing it this way is because we're using Session Manager, so that we don't need a local SSH client or have to worry about SSH keys.
We can log in directly via the console UI; we just then need to switch to ec2-user.
So run this command and press enter, and we're now logged into the instance as ec2-user. To test everything's okay we need to use a command with the Docker engine, and that command is docker ps; if everything's okay you shouldn't see any output beyond this list of headers.
What we've essentially done is told the Docker engine to give us a list of any running containers, and even though we don't have any, it hasn't errored; it's simply displayed this empty list, and that means everything's okay.
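After reconnecting, the switch to ec2-user and the verification step look like this (the exact switch-user command shown here is my assumption of what the lesson commands document contains):

```shell
# Switch to ec2-user so the new docker group membership takes effect
sudo su - ec2-user

# List running containers; with permissions fixed, this now
# succeeds, printing just a header row when nothing is running
docker ps
```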
So good job.
Now, to speed things up, if you just run ls and press enter, you'll see the instance has been configured to download the sample application that we're going to be using, and that's the file container.zip within this folder.
I've configured the instance to automatically extract that zip file which has created the folder container.
So at this point I want you to go ahead and type cd space container and press enter and that's going to move you inside this container folder.
Then I want you to clear the screen by typing clear and press enter and then type ls space -l and press enter.
Now this is the web application which I've configured to be automatically downloaded to the EC2 instance.
It's a simple web page we've got index.html which is the index we have a number of images which this index.html contains and then we have a docker file.
Now this docker file is the thing that the docker engine will use to create our docker image.
I want to spend a couple of moments just stepping you through exactly what's within this docker file.
So I'm going to move across to my text editor and this is the docker file that's been automatically downloaded to your EC2 instance.
Each of these lines is a directive to the docker engine to perform a specific task and remember we're using this to create a docker image.
This first line tells the docker engine that we want to use version 8 of the Red Hat Universal base image as the base component for our docker image.
This next line sets the maintainer label; it's essentially a brief description of what the image is and who's maintaining it. In this case it's just a placeholder of Animals for Life.
This next line runs a command specifically the yum command to install some software specifically the Apache web server.
This next command, copy, copies files from the local directory when you use the docker command to create an image. So it's copying that index.html file from this local folder that I've just been talking about, and it's going to put it inside the docker image at this path: it's going to copy index.html to /var/www/html, which is where an Apache web server expects index.html to be located.
This next command is going to do the same process for all of the jpegs in this folder so we've got a total of six jpegs and they're going to be copied into this folder inside the docker image.
This line sets the entry point and this essentially determines what is first run when this docker image is used to create a docker container.
In this example it's going to run the Apache web server and finally this expose command can be used for a docker image to tell the docker engine which services should be exposed.
Now this doesn't actually perform any configuration it simply tells the docker engine what port is exposed in this case port 80 which is HTTP.
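Putting those directives together, the docker file looks something like this (a reconstruction from the description above; the exact base-image tag, label text, and jpeg file names are assumptions):

```dockerfile
# Use version 8 of the Red Hat Universal Base Image as the base
FROM redhat/ubi8

# Maintainer label: a brief description and placeholder maintainer
LABEL maintainer="Animals for Life"

# Install the Apache web server using yum
RUN yum -y install httpd

# Copy the web page into the path Apache serves from
COPY index.html /var/www/html/

# Copy the six jpegs into the same folder inside the image
COPY *.jpg /var/www/html/

# Run Apache in the foreground when a container starts
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]

# Document that the image exposes HTTP on port 80
EXPOSE 80
```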
Now this docker file is going to be used when we run the next command which is to create a docker image.
So essentially this file is the same docker file that's been downloaded to your EC2 instance and that's what we're going to run next.
So this is the next command within the lesson commands document and this command builds a container image.
What we're essentially doing is giving it the location of the docker file.
This dot at the end denotes the working directory, so it's here where it's going to find the docker file and any associated files that the docker file uses.
So we're going to run this command and this is going to create our docker image.
So let's go ahead and run this command.
It's going to download version 8 of UBI which it will use as a starting point and then it's going to run through every line in the docker file performing each of the directives and each of those directives is going to create another layer within the docker image.
Remember from the theory lesson each line within the docker file generally creates a new file system layer so a new layer of a docker image and that's how docker images are efficient because you can reuse those layers.
Now in this case this has been successful.
We've successfully built a docker image with this ID, so it's given it a unique ID, and it's tagged this docker image with the :latest tag.
So this means that we have a docker image that's now stored on this EC2 instance.
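The build command has this general shape (the image name containerofcats is an assumption, based on how the image is referred to later in the lesson):

```shell
# Build an image from the Dockerfile in the current working
# directory (the trailing dot) and tag it containerofcats:latest
docker build -t containerofcats .
```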
Now I'll go ahead and clear the screen to make it easier to see and let's go ahead and run the next command which is within the lesson commands document and this is going to show us a list of images that are on this EC2 instance but we're going to filter based on the name container of cats and this will show us the docker image which we've just created.
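The filtered image listing just mentioned can be written as (image name again assumed):

```shell
# List local images, filtered to the one we just built
docker images --filter reference=containerofcats
```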
So the next thing that we need to do is to use the docker run command which is going to take the image that we've just created and use it to create a running container and it's that container that we're going to be able to interact with.
So this is the command that we're going to use it's the next one within the lesson commands document.
It's docker run, and then it's telling it to map port 80 on the container to port 80 on the EC2 instance, and it's telling it to use the container of cats image. If we run that command, docker is going to take the docker image that we've got on this EC2 instance and run it to create a running container, and we should be able to interact with that container.
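That run command, as described, looks like this (image name assumed):

```shell
# Start a container from the image, mapping port 80 on the
# EC2 instance to port 80 inside the container
docker run -p 80:80 containerofcats
```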
So if you go back to the AWS console if we click on instances so look for a4l-public EC2 that's in the running state.
I'm just going to go ahead and select this instance so that we can see the information and we need the public IP address of this instance.
Go ahead and click on this icon to copy the public IP address into your clipboard and then open that in a new tab.
Now be sure not to use this link to the right because that's got a tendency to open the HTTPS version.
We just need to use the IP address directly.
So copy that into your clipboard, open a new tab, and then open that IP address, and now we can see the amazing application: "if it fits, i sits, in a container, in a container". This amazing-looking enterprise application is what's contained in the docker image that you just created, and it's now running inside a container based off that image.
So that's great everything's working as expected and that's running locally on the EC2 instance.
Now in the demo lesson for the elastic container service that's coming up later in this section of the course you have two options.
You can either use my docker image which is this image that I've just created or you can use your own docker image.
If you're going to use my docker image then you can skip this next step.
You don't need a docker hub account and you don't need to upload your image.
If you want to use your own image then you do need to follow these next few steps and I need to follow them anyway because I need to upload this image to docker hub so that you can potentially use it rather than your own image.
So I'm going to move back to the session manager tab and I'm going to control C to exit out of this running container and I'm going to type clear to clear the screen and make it easier to see.
Now to upload this to docker hub first you need to log in to docker hub using your credentials and you can do that using this command.
So it's docker space login space double hyphen username equals and then your username.
So if you're doing this in your own environment you need to delete this placeholder and type your username.
I'm going to type my username because I'll be uploading this image to my docker hub.
So this is my docker hub username and then press enter and it's going to ask for the corresponding password to this username.
So I'm going to paste in my password if you're logging into your docker hub you should use your password.
Once you've pasted in the password go ahead and press enter and that will log you in to docker hub.
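The login command looks like this (YOUR_USERNAME is a placeholder, not a real account):

```shell
# Log in to Docker Hub; replace YOUR_USERNAME with your own
# Docker Hub username, then enter your password when prompted
docker login --username=YOUR_USERNAME
```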
Now you don't have to worry about the security message, because whilst your docker hub password is going to be stored on the EC2 instance, shortly we're going to terminate this instance, which will remove all traces of this password from this machine.
Okay, so again we're going to upload our docker image to docker hub. So let's run this command again, and because we're just using the docker images command, you'll see the base image as well as our image.
So we can see red hat UBI 8.
We want the container of cats latest though so what you need to do is copy down the image ID of the container of cats image.
So this is the top line in my case container of cats latest and then the image ID.
So then we need to run this command so docker space tag and then the image ID that you've just copied into your clipboard and then a space and then your docker hub username.
In my case it's my own Docker Hub username; if you're following along you need to use your own username, then a forward slash, and then the name that you want this image to be stored as on docker hub, so I'm going to use container of cats.
So that's the command you need to use: docker tag, then your image ID for container of cats, then your username, forward slash, container of cats. Press enter, and that's everything we need to do to prepare to upload this image to docker hub.
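The tag command therefore has this shape (IMAGE_ID and YOUR_USERNAME are placeholders for the ID you copied and your own account):

```shell
# Tag the local image for Docker Hub under your own repository name
docker tag IMAGE_ID YOUR_USERNAME/containerofcats
```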
So the last command that we need to run is the command to actually upload the image to docker hub, and that command is docker push. So we're going to push the image to docker hub. We need to specify the docker hub username (again, this is my username, but if you're doing this in your environment it needs to be your username), then a forward slash, then the image name, in my case container of cats, and then colon latest. Once you've got all that, go ahead and press enter, and that's going to push the docker image that you've just created up to your docker hub account. Once it's up there, it means that we can deploy from that docker image to other EC2 instances and even ECS, and we're going to do that in a later demo in this section of the course.
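The push step can be sketched as (YOUR_USERNAME again a placeholder):

```shell
# Push the tagged image up to your Docker Hub account
docker push YOUR_USERNAME/containerofcats:latest
```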
Now that's everything that you need to do in this demo lesson. You've essentially installed and configured the docker engine, used a docker file to create a docker image from some local assets, tested that docker image by running a container using that image, and then uploaded that image to docker hub. As I mentioned before, we're going to use that in a future demo lesson in this section of the course.
Now the only thing that remains is to clear up the infrastructure that we've used in this demo lesson. So go ahead and close down all of these extra tabs and go back to the CloudFormation console. This is the stack that's been created by the one-click deployment link, so all you need to do is select this stack, which should be called EC2 docker, then click on delete and confirm that deletion, and that will return the account to the same state as it was in at the start of this demo lesson.
Now that is everything you need to do in this demo lesson. I hope it's been useful and I hope you've enjoyed it. So go ahead and complete the video, and when you're ready, I look forward to you joining me in the next.
-
-
social-media-ethics-automation.github.io
-
Additionally, spam and output from Large Language Models like ChatGPT can flood information spaces (e.g., email, Wikipedia) with nonsense, useless, or false content, making them hard to use or useless.
That is a very valid concern. AI-generated content, such as from ChatGPT, can flood online platforms like email and Wikipedia with misinformation, eroding people's trust in those platforms. Because Wikipedia, for example, enables users to edit entries, it is highly susceptible to the addition of false information. There are systems in place for moderation, but it's tough to keep up with how quickly AI can generate content. Maintaining the reliability of such platforms requires stronger editorial controls and awareness on the part of users.
-
Then Sean Black, a programmer on TikTok saw this and decided to contribute by creating a bot that would automatically log in and fill out applications with random user info, increasing the rate at which he (and others who used his code) could spam the Kellogg’s job applications:
This is a great example of using social media for the right cause, and it shows how context matters. It shows that ethical trolling can be done to get social justice for those who have been wronged, forcing such a big company to act right. It's interesting to see how the company's decision backfired through trolling.
-
-
pressbooks.lib.jmu.edu Work3
-
Does anyone know the original Italian word for "work"?
-
What does the Italian word "work" convey in Montessori's time?
-
[MAPS 2024 conversation] Italian translations of the term "work": * "meaningful activity" * "play" (lavora), i.e., "meaningful play". Context: English translations of Montessori's original writing. The Italian has different meanings than the English translations. Historical context matters as it relates to the meaning of terms.
-
-
www.nytimes.com
Tags
- regeneration failure
- Brendan Byrne
- Marc-André Parisien
- Natural Resources Canada
- Carbon emissions from the 2023 Canadian wildfires
- Canada
- increasing risk of wildfires
- by: Manuela Andreoni
- flash droughts
- Drivers and Impacts of the Record-Breaking 2023 Wildfire Season in Canada
- Ellen Whitman
-
-
social-media-ethics-automation.github.io
-
Social Media platforms use the data they collect on users and infer about users to increase their power and increase their profits.
I completely agree with this. As TikTok gained popularity with its short videos, many other platforms quickly adopted this feature for creating and sharing short-form content. Instagram introduced Reels, and YouTube launched Shorts, both experiencing significant growth as a result. Even Spotify has now incorporated a similar short video format.
-
One of the main goals of social media sites is to increase the time users are spending on their social media sites. The more time users spend, the more money the site can get from ads, and also the more power and influence those social media sites have over those users. So social media sites use the data they collect to try and figure out what keeps people using their site, and what can they do to convince those users they need to open it again later.
I like the algorithm social media platforms use because it shows me content that I like to see. I have always wondered how social media sites make money from the ads, since anytime I get an ad on any platform I always skip it if I can.
-
-
pierce.instructure.com
-
The Tuskegee Experiment based on information presented in different genres
-
-
www.theatlantic.com
-
In psychology, the belief that only conservatives can be authoritarians, and that therefore only conservative authoritarians warrant serious study, has proved self-reinforcing over the course of decades.
!
-
Intriguingly, the researchers found some common traits between left-wing and right-wing authoritarians, including a “preference for social uniformity, prejudice towards different others, willingness to wield group authority to coerce behavior, cognitive rigidity, aggression and punitiveness towards perceived enemies, outsized concern for hierarchy, and moral absolutism.”
!
-
But one reason left-wing authoritarianism barely shows up in social-psychology research is that most academic experts in the field are based at institutions where prevailing attitudes are far to the left of society as a whole. Scholars who personally support the left’s social vision—such as redistributing income, countering racism, and more—may simply be slow to identify authoritarianism among people with similar goals.
!
-
-
www.biorxiv.org
-
Overall Assessment (4/5)
Summary: The authors provide a software tool, NeuroVar, that helps visualize genetic variations and gene expression profiles of biomarkers in different neurological diseases.
Technical Release criteria
Is the language of sufficient quality? * The language of the document is of sufficient quality. I did not notice any major issues.
Is there a clear statement of need explaining what problems the software is designed to solve and who the target audience is? * Yes, the authors provide a statement of need. They mention the need for a specialized software tool to identify genes from transcriptomic data and genetic variations such as SNPs, specifically for neurological diseases. Perhaps the authors could expand a bit in the introduction on how they chose the diseases; e.g. stroke is not listed among the neurological diseases.
Is the source code available, and has an appropriate Open Source Initiative license been assigned to the code? * Yes, the source code is available on GitHub under the following link: https://github.com/omicscodeathon/neurovar. Additionally, the authors deposited the source code and additional supplementary data in a permanent repository on Zenodo under the following DOI: https://zenodo.org/records/13375493. They also provided test data: https://zenodo.org/records/13375591. I was able to download and access the complete set of data.
As Open Source Software, are there guidelines on how to contribute, report issues or seek support on the code? * I did not find any way to contribute, report issues or seek support. I would recommend that the authors add this information to the GitHub README file.
Is the code executable? * Yes, I could execute the code using RStudio 4.3.3.
Is the documentation provided clear and user friendly? * The documentation is provided and is user friendly. I was able to install, test and run the tool using RStudio. The authors may also consider offering a simple website link for the R Shiny tool if possible; this would enable access for scientists who are not familiar with R. It is especially great that the authors provided a demonstration video, and I was able to reproduce the steps. However, I would recommend adding more information to the YouTube video; e.g., a reference to the preprint/paper and the GitHub link would be helpful to connect the data. Perhaps the authors could also expand on the options for exporting data from their software and provide different formats, e.g., PDF/PNG/JPEG. I think this is important for many researchers who want to export their outputs, e.g., the heatmaps.
Is installation/deployment sufficiently outlined in the paper and documentation, and does it proceed as outlined? * I could follow the installation process, but perhaps the authors could describe how to download from GitHub in more detail, as some scientists may have trouble with it. An installation video (in addition to the video demonstration of the NeuroVar Shiny app) might also be helpful.
Is there a clearly-stated list of dependencies, and is the core functionality of the software documented to a satisfactory level? * Yes, dependencies are listed and are installed automatically. It worked for me with RStudio 4.3.3. In the manuscript and in the
Have any claims of performance been sufficiently tested and compared to other commonly-used packages? * not applicable
Are there (ideally real world) examples demonstrating use of the software? * Yes, the authors use the example of epilepsy (focal epilepsy) and the gene of interest DEPDC5. I replicated their search and got the same results. However, I find that the labeling of the gene's transcript in Figure 1 could be a bit clearer; e.g., it is not clear to me what transcript start and end refer to. It might also be helpful if the authors provided an example dataset for the expression data that is loaded in the software by default. Furthermore, the authors present a case study using RNA-seq data from ALS patients with mutations in the FUS, TARDBP, SOD1 and VCP genes.
Is test data available, either included with the submission or openly available via cited third party sources (e.g. accession numbers, data DOIs, etc.)? * Yes, the authors provide test data with a DOI: https://zenodo.org/records/13375591.
Is automated testing used or are there manual steps described so that the functionality of the software can be verified? * Automated testing is not used as far as I can access it.
Overall Recommendation: * Accept with revisions
Reviewer Information: Ruslan Rust is an assistant professor in neuroscience and physiology at University of Southern California working on stem cell therapies on stroke. His lab is particularly interested in working with genomic data and the development of new biomarkers for stroke, AD and other neurological diseases.
Dr. Ruslan Rust's profile on ResearchHub: https://www.researchhub.com/author/4945925
ResearchHub Peer Reviewer Statement: This peer review has been uploaded from ResearchHub as part of a paid peer review initiative. ResearchHub aims to accelerate the pace of scientific research using novel incentive structures.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this very brief demo lesson, I just want to demonstrate a very specific feature of EC2 known as termination protection.
Now you don't have to follow along with this in your own environment, but if you are, you should still have the infrastructure created from the previous demo lesson.
And also if you are following along, you need to be logged in as the IAM admin user of the general AWS account.
So the management account of the organization and have the Northern Virginia region selected.
Now again, this is going to be very brief.
So it's probably not worth doing in your own environment unless you really want to.
Now what I want to demonstrate is termination protection.
So I'm going to go ahead and move to the EC2 console where I still have an EC2 instance running created in the previous demo lesson.
Now normally if I right click on this instance, I'm given the ability to stop the instance, to reboot the instance or to terminate the instance.
And this is assuming that the instance is currently in a running state.
Now if I go to terminate instance, straight away I'm presented with a dialogue where I need to confirm that I want to terminate this instance.
But it's easy to imagine that somebody who's less experienced with AWS can go ahead and terminate that and then click on terminate to confirm the process without giving it much thought.
And that can result in data loss, which isn't ideal.
What you can do to add another layer of protection is to right click on the instance, go to instance settings, and then change termination protection.
If you click that option, you get this dialogue where you can enable termination protection.
So I'm going to do that, I'm going to enable termination protection because this is an essential website for animals for life.
So I'm going to enable it and click on save.
And now that instance is protected against termination.
If I right click on this instance now and go to terminate instance and then click on terminate, I get a dialogue that I'm unable to terminate the instance.
The message reads: the instance (and then the instance ID) may not be terminated; modify its disableApiTermination instance attribute and then try again.
So this instance is now protected against accidental termination.
Now this presents a number of advantages.
One, it protects against accidental termination, but it also adds a specific permission that is required in order to terminate an instance.
So you need the permission to disable this termination protection in addition to the permissions to be able to terminate an instance.
So you have the option of role separation.
You can either require people to have both the permissions to disable termination protection and permissions to terminate, or you can give those permissions to separate groups of people.
So you might have senior administrators who are the only ones allowed to remove this protection, and junior or normal administrators who have the ability to terminate instances, and that essentially establishes a process where a senior administrator is required to disable the protection before instances can be terminated.
It adds another approval step to this process, and it can be really useful in environments which contain business critical EC2 instances.
So you might not have this for development and test environments, but for anything in production, this might be a standard feature.
If you're provisioning instances automatically using cloud formation or other forms of automation, this is something that you can enable in an automated way as instances are launching.
So this is a really useful feature to be aware of.
And for the SysOps exam, it's essential that you understand when and where you'd use this feature.
And for both the SysOps and the Developer exams, you should pay attention to this disableApiTermination attribute.
You might be required to know which attribute needs to be modified in order to allow terminations.
So really for both of the exams, just make sure that you're aware of exactly how this process works end to end, specifically the error message that you might get if this attribute is enabled and you attempt to terminate an instance.
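The console steps above have AWS CLI equivalents. Here's a minimal sketch, not part of the lesson itself; the instance ID is a hypothetical placeholder, and the script only prints the CLI calls (running them needs real credentials and a real instance):

```shell
#!/bin/sh
# Hypothetical instance ID for illustration only.
INSTANCE_ID="i-0123456789abcdef0"

# Print (rather than execute) the modify-instance-attribute calls that
# correspond to the console's "Change termination protection" toggle.
enable_protection() {
  printf 'aws ec2 modify-instance-attribute --instance-id %s --disable-api-termination\n' "$1"
}
disable_protection() {
  printf 'aws ec2 modify-instance-attribute --instance-id %s --no-disable-api-termination\n' "$1"
}

enable_protection "$INSTANCE_ID"   # protect against accidental termination
disable_protection "$INSTANCE_ID"  # the step a senior admin performs before a termination
```

While the attribute is set, a terminate call fails with the disableApiTermination error described above until the protection is removed.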
At this point though, that is everything that I wanted to cover about this feature.
So right click on the instance, go to instance settings, change the termination protection and disable it, and then click on save.
One other feature which I want to introduce quickly, if we right click on the instance, go to instance settings, and then change shutdown behavior, you're able to specify whether an instance should move into a stop state when shut down, or whether you want it to move into a terminate state.
Now logically, the default is stop, but if you're running an environment where you don't consider the state of an instance to be valuable, then you might want it to terminate when it shuts down.
You might not want to have an account with lots of stopped instances.
You might want the default behavior to be terminate, but this is a relatively niche feature, and in most cases, you do want the shutdown behavior to be stop rather than terminate, but it's here where you can change that default behavior.
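The shutdown-behavior setting is also an instance attribute. A sketch of the CLI equivalent, with a hypothetical instance ID and the call printed rather than executed:

```shell
#!/bin/sh
# Hypothetical instance ID for illustration only.
INSTANCE_ID="i-0123456789abcdef0"

# The console's "Change shutdown behavior" maps to modify-instance-attribute;
# in the attribute/value form the value is either "stop" or "terminate".
shutdown_behavior_cmd() {
  printf 'aws ec2 modify-instance-attribute --instance-id %s --attribute instanceInitiatedShutdownBehavior --value %s\n' "$1" "$2"
}

shutdown_behavior_cmd "$INSTANCE_ID" terminate
```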
Now at this point, that is everything I wanted to cover.
If you were following along with this in your own environment, you do need to clear up the infrastructure.
So click on the services dropdown, move to cloud formation, select the status checks and protect stack, and then click on delete and confirm that by clicking delete stack.
And once this stack finishes deleting all of the infrastructure that's been used during this demo and the previous one will be cleared from the AWS account.
If you've just been watching, you don't need to worry about any of this process, but at this point, we're done with this demo lesson.
So go ahead, complete the video, and once you're ready, I'll look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this demo lesson you're going to get some experience, either hands-on or by watching me, interacting with an Amazon Machine Image.
So we created an Amazon machine image or AMI in a previous demo lesson and if you recall it was customized for animals for life.
It had an install of WordPress, the cowsay application installed, and a custom login banner.
Now this is a really simple example of an AMI but I want to step you through some of the options that you have when dealing with AMIs.
So if we go to the EC2 console and if you are following along with this in your own environment do make sure that you're logged in as the IAM admin user of the general AWS account, so the management account of the organization and you have the Northern Virginia region selected.
The reason for being so specific about the region is that AMIs are regional entities so you create an AMI in a particular region.
So if I go and select AMIs under images within the EC2 console I'll see the animals for life AMI that I created in a previous demo lesson.
Now if I change the region from Northern Virginia, which is us-east-1, to Ohio, which is us-east-2, we'll go back to the same area of the console, only now we won't see any AMIs. That's because an AMI is tied to the region in which it's created.
Every AMI belongs in one region and it has a unique AMI ID.
So let's move back to Northern Virginia.
Now we are able to copy AMIs between regions, and this allows us to make one AMI and use it for a global infrastructure platform. So we can right-click and select Copy AMI, then select the destination region. For this example, let's say that I did want to copy it to Ohio; I would select that in the drop-down. It would allow me to change the name if I wanted, or I could keep it the same. For the description, it would show that it's been copied from this AMI ID in this region, followed by the existing description.
So at this point I'm going to go ahead and click Copy AMI, and that process has now started. If I close down this dialogue and change the region from us-east-1 to us-east-2, we now have a pending AMI, and this is the AMI that's being copied from the us-east-1 region into this region. If we go ahead and click on Snapshots under Elastic Block Store, we're going to see the snapshot or snapshots which belong to this AMI.
Now depending on how busy AWS is it can take a few minutes for the snapshots to appear on this screen just go ahead and keep refreshing until they appear.
In our case we only have the one which is the boot volume that's used for our custom AMI.
Now the time taken to copy a snapshot between regions depends on many factors: what the source and destination regions are and the distance between the two, the size of the snapshot, and the amount of data it contains. It can take anywhere from a few minutes to much, much longer, so this is not an immediate process.
Once the snapshot copy completes then the AMI copy process will complete and that AMI is then available in the destination region but an important thing that I want to keep stressing throughout this course is that this copied AMI is a completely different AMI.
AMIs are regional don't fall for any exam questions which attempt to have you use one AMI for several regions.
If we're copying this animals for life AMI from one region to another region in effect we're creating two different AMIs.
So take note of this AMI ID in this region, and if we switch back to the original source region, us-east-1, note how this AMI has a different ID. They are completely different AMIs; you're creating a new one as part of the copy process.
So while the data is going to be the same conceptually they are completely separate objects and that's critical for you to understand both for production usage and when answering any exam questions.
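The cross-region copy can be sketched with the CLI too. The source AMI ID below is a hypothetical placeholder and the script only prints the call; the real `copy-image` call returns a brand-new ImageId in the destination region, which is exactly the point made above:

```shell
#!/bin/sh
# Hypothetical source AMI in us-east-1.
SRC_AMI="ami-0123456789abcdef0"

# Print the copy-image call corresponding to the console's "Copy AMI" action.
# The --region flag is the destination; --source-region is where the AMI lives.
copy_image_cmd() {
  printf 'aws ec2 copy-image --source-region us-east-1 --source-image-id %s --region us-east-2 --name A4L-wordpress\n' "$1"
}

copy_image_cmd "$SRC_AMI"
```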
Now while that's copying I want to demonstrate the other important thing which I wanted to show you in this demo lesson and that's permissions of AMIs.
So if I right-click on this AMI and select Edit AMI Permissions, we can see that by default an AMI is private.
Being private means that it's only accessible within the AWS account which has created the AMI and so only identities within that account that you grant permissions are able to access it and use it.
Now you can change the permissions of the AMI. You could set it to be public, and if you set it to public it means that any AWS account can access this AMI. So you need to be really careful if you select this option, because you don't want any sensitive information contained in that snapshot to be leaked to external AWS accounts.
A much safer way, if you do want to share the AMI with anyone else, is to keep it private but explicitly add other AWS accounts that are able to interact with this AMI.
So I could click in this box, and, for example, if I clicked on Services and moved to the AWS Organizations service (I'll open that in a new tab), let's say that I chose to share this AMI with my production account. I would select my production account ID and then add it into this box, which would grant my production AWS account the ability to access this AMI.
Now note that there's also this checkbox, and this adds create volume permissions to the snapshots associated with this AMI, so this is something that you need to keep in mind.
Generally, if you are sharing an AMI with another account inside your organization, then you can afford to be relatively liberal with permissions. So if you're sharing internally, I would definitely check this box; that gives full permissions on the AMI as well as the snapshots, so that anyone can create volumes from those snapshots as well as accessing the AMI.
So these are all things that you need to consider.
Generally it's much preferred to explicitly grant an AWS account permissions on an AMI rather than making that AMI public.
If you do make it public you need to be really sure that you haven't leaked any sensitive information, specifically access keys.
While you do need to be careful of that as well if you're explicitly sharing it with accounts, generally if you're sharing it with accounts then you're going to be sharing it with trusted entities.
You need to be very very careful if ever you're using this public option and I'll make sure I include a link attached to this lesson which steps through all of the best practice steps that you need to follow if you're sharing an AMI publicly.
There are a number of really common steps that you can use to minimize lots of common security issues and that's something you should definitely do if you're sharing an AMI.
Now, if you want to, you can also share an AMI with an organization or an organizational unit, and you can do that using this option.
This makes it easier if you want to share an AMI with all AWS accounts within your organization.
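Sharing with a specific account can be sketched as two CLI calls: one for the AMI's launch permission and one for the create volume permission on the backing snapshot (the checkbox mentioned above). All three IDs below are hypothetical placeholders, and the script only prints the calls:

```shell
#!/bin/sh
# Hypothetical IDs for illustration only.
AMI_ID="ami-0123456789abcdef0"
SNAP_ID="snap-0123456789abcdef0"
ACCOUNT_ID="111122223333"

# Grant a specific account permission to launch from the AMI.
share_ami_cmd() {
  printf 'aws ec2 modify-image-attribute --image-id %s --launch-permission Add=[{UserId=%s}]\n' "$1" "$2"
}

# The "create volume permissions" checkbox maps to the snapshot attribute.
share_snapshot_cmd() {
  printf 'aws ec2 modify-snapshot-attribute --snapshot-id %s --attribute createVolumePermission --operation-type add --user-ids %s\n' "$1" "$2"
}

share_ami_cmd "$AMI_ID" "$ACCOUNT_ID"
share_snapshot_cmd "$SNAP_ID" "$ACCOUNT_ID"
```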
At this point though I'm not going to do that we don't need to do that in this demo.
What we're going to do now though is move back to US-East-2.
That's everything I wanted to cover in this demo lesson.
Now that this AMI is available, we can right-click and select Deregister, and then move back to us-east-1. And now that we've done this demo lesson, we can do the same process with the original AMI.
So we can right-click, select Deregister, and that will remove that AMI.
Click on Snapshots; this is the snapshot created by this AMI, so we need to delete this as well. Right-click, delete that snapshot, and confirm it. We'll also need to do the same process in the region that we copied the AMI and the snapshots to.
So select us-east-2; it should be the only snapshot in the region, but make sure it is the correct one. Right-click, delete, and confirm the deletion, and now you've cleared up all of the extra things created within this demo lesson.
Now that's everything that I wanted to cover I just wanted to give you an overview of how to work with AMIs from the console UI from a copying and sharing perspective.
Go ahead and complete this video and when you're ready I look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
So the first step is to shut down this instance.
So we don't want to create an AMI from a running instance because that can cause consistency issues.
So we're going to close down this tab.
We're going to return to instances, right-click, and we're going to stop the instance.
We need to acknowledge this and then we need to wait for the instance to change into the stopped state.
It will start with stopping.
We'll need to refresh it a few times.
There we can see it's now in a stopped state and to create the AMI, we need to right-click on that instance, go down to Image and Templates, and select Create Image.
So this is going to create an AMI.
And first we need to give the AMI a name.
So let's go ahead and use Animals for Life template WordPress.
And we'll use the same for Description.
Now what this process is going to do is it's going to create a snapshot of any of the EBS volumes, which this instance is using.
It's going to create a block device mapping, which maps those snapshots onto a particular device ID.
And it's going to use the same device ID as this instance is using.
So it's going to set up the storage in the same way.
It's going to record that storage inside the AMI so that it's identical to the instance we're creating the AMI from.
So you'll see here that it's using EBS.
It's got the original device ID.
The volume type is set to the same as the volume that our instance is using, and the size is set to 8.
Now you can adjust the size during this process as well as being able to add volumes.
But generally when you're creating an AMI, you're creating the AMI in the same configuration as this original instance.
Now I don't recommend creating an AMI from a running instance because it can cause consistency issues.
If you create an AMI from a running instance, it's possible that it will need to perform an instance reboot.
You can force that not to occur, so create an AMI without rebooting.
But again, that's even less ideal.
The most optimal way for creating an AMI is to stop the instance and then create the AMI from that stopped instance, which will have fully consistent storage.
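That stop-then-image workflow can be sketched with the CLI as well. The instance ID and AMI name are hypothetical placeholders, and the script prints the calls rather than running them:

```shell
#!/bin/sh
# Hypothetical instance ID for illustration only.
INSTANCE_ID="i-0123456789abcdef0"

# Stop first so the filesystem is consistent, wait, then create the image.
stop_cmd()   { printf 'aws ec2 stop-instances --instance-ids %s\n' "$1"; }
wait_cmd()   { printf 'aws ec2 wait instance-stopped --instance-ids %s\n' "$1"; }
create_cmd() { printf 'aws ec2 create-image --instance-id %s --name A4L-template-wordpress\n' "$1"; }

stop_cmd "$INSTANCE_ID"
wait_cmd "$INSTANCE_ID"
create_cmd "$INSTANCE_ID"
```

For a running instance, `create-image` also accepts a `--no-reboot` flag, which matches the "without rebooting" option mentioned above, but as noted, imaging a stopped instance is the most consistent approach.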
So now that that's set, just scroll down to the bottom and go ahead and click on Create Image.
Now that process will take some time.
If we just scroll down, look under Elastic Block Store and click on Snapshots.
You'll see that initially it's creating a snapshot of the boot volume of our original EC2 instance.
So that's the first step.
So in creating the AMI, what needs to happen is a snapshot of any of the EBS volumes attached to that EC2 instance.
So that needs to complete first.
Initially it's going to be in a pending state.
We'll need to give that a few moments to complete.
If we move to AMIs, we'll see that the AMI is being created too.
It is in a pending state and it's waiting for that snapshot to complete.
Now creating a snapshot is storing a full copy of any of the data on the original EBS volume.
And the time taken to create a snapshot can vary.
The initial snapshot always takes much longer because it has to take that full copy of data.
And obviously, the size of the original volume and how much data is being used will influence how long a snapshot takes to create.
So the more data, the larger the volume, the longer the snapshot will take.
After a few more refreshes, the snapshot moves into a completed status, and if we move across to AMIs under Images, after a few moments this too will change away from a pending status.
So let's just refresh it.
After a few moments, the AMI is now also in an available state and we're good to be able to use this to launch additional EC2 instances.
So just to summarize, we've launched the original EC2 instance, we've downloaded, installed and configured WordPress, configured that custom banner.
We've shut down the EC2 instance and generated an AMI from that instance.
And now we have this AMI in a state where we can use it to create additional instances.
So we're going to do that.
We're going to launch an additional instance using this AMI.
While we're doing this, I want you to consider exactly how much quicker this process now is.
So what I'm going to do is to launch an EC2 instance from this AMI and note that this instance will have all of the configuration that we had to do manually, automatically included.
So right click on this AMI and select launch.
Now this will step you through the launch process for an EC2 instance.
You won't have to select an AMI because obviously you are now explicitly using the one that you've just created.
You'll be asked to select all of the normal configuration options.
So first let's put a name for this instance.
So we'll use the name "Instance from AMI".
Then we'll scroll down.
As I mentioned moments ago, we don't have to specify an AMI because we're explicitly launching this instance from an AMI.
Scroll down.
You'll need to specify an instance type just as normal.
We'll use a free tier eligible instance.
This is likely to be t2.micro or t3.micro.
Below that, go ahead and click and select "Proceed without a key pair (not recommended)".
Scroll down.
We'll need to enter some networking settings.
So click on Edit next to Network Settings.
Click in VPC and select A4L-VPC1.
Click in Subnet and make sure that SN-Web-A is selected.
Make sure the boxes below are both set to Enable for the auto-assign IP settings.
Under Firewall, click on Select Existing Security Group.
Click in the Security Groups drop down and select AMI-Demo-Instance Security Group.
And that will have some random characters at the end.
That's absolutely fine.
Select that.
Scroll down.
And notice that the storage is configured exactly the same as the instance which you generated this AMI from.
Everything else looks good.
So we can go ahead and click on Launch Instance.
So this is launching an instance using our custom created AMI.
So let's close down this dialog and we'll see the instance initially in a pending state.
Remember, this is launching from our custom AMI.
So it won't just have the base Amazon Linux 2 operating system.
Now it's going to have that base operating system plus all of the custom configuration that we did before creating the AMI.
So rather than having to perform that same WordPress download installation configuration and the banner configuration each and every time, now we've baked that in to the AMI.
So now when we launch one instance, 10 instances, or 100 instances from this AMI, all of them are going to have this configuration baked in.
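Launching many identically configured instances from the baked AMI is a single call. A sketch with a hypothetical AMI ID, printed rather than executed (a real call would also specify a subnet and security group):

```shell
#!/bin/sh
# Hypothetical custom AMI ID for illustration only.
AMI_ID="ami-0123456789abcdef0"

# One run-instances call launches N copies, all with the baked-in config.
run_cmd() {
  printf 'aws ec2 run-instances --image-id %s --count %s --instance-type t3.micro\n' "$1" "$2"
}

run_cmd "$AMI_ID" 10
```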
So let's give this a few minutes to launch.
Once it's launched, we'll select it, right click, select Connect, and then connect into it using EC2, Instance Connect.
Now one thing you will need to change because we're using a custom AMI, AWS can't necessarily detect the correct username to use.
And so you might see sometimes it says root.
Just go ahead and change this to ec2-user and then go ahead and click Connect.
And if everything goes well, you'll be connected into the instance and you'll see our custom Cowsay banner.
So all that configuration is now baked in and it's automatically included whenever we use that AMI to launch an instance.
If we go back to the AWS console and select Instances, make sure we still have the instance from AMI selected and then locate its public IPv4 address.
Don't use this link, because that will use HTTPS; instead, copy the IP address into your clipboard and open it in a new tab.
Again, all being well, you should see the WordPress installation dialogue and that's because we've baked in the installation and the configuration into this AMI.
So we've massively reduced the ongoing efforts required to launch an animals for life standard build configuration.
If we use this AMI to launch hundreds or thousands of instances each and every time we're saving all the time and the effort required to perform this configuration and using an AMI is just one way that we can automate the build process of EC2 instances within AWS.
And over the remainder of the course, I'm going to be demonstrating the other ways that you can use as well as comparing and contrasting the advantages and disadvantages of each of those methods.
Now that's everything that I wanted to cover in this demo lesson.
You've learned how to create an AMI and how to use it to save significant effort on an ongoing basis.
So let's clear up all of the infrastructure that we've used in this lesson.
So move back to the AWS console, close down this tab, go back to instances, and we need to manually terminate the instance that we created from our custom AMI.
So right click and then go to terminate instance.
You'll need to confirm that.
That will start the process of termination.
Now we're not going to delete the AMI or snapshots because there's a demo coming up later in this section of the course where you're going to get the experience of copying and sharing an AMI between AWS regions.
So we're going to need to leave this in place.
So we're not going to delete the AMI or the snapshots created within this lesson.
Verify that that instance has been terminated and once it has, click on services, go to cloud formation, select the AMI demo stack, select delete and then confirm that deletion.
And that will remove all of the infrastructure that we've created within this demo lesson.
And at this point, that's everything that I wanted you to do in this demo.
So go ahead, complete this video.
And when you're ready, I'll look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this demo lesson you'll be creating an AMI from a pre-configured EC2 instance.
So you'll be provisioning an EC2 instance, configuring it with a popular web application stack and then creating an AMI of that pre-configured web application.
Now you know in the previous demo where I said that you would be implementing the WordPress manual install once?
Well I might have misled you slightly but this will be the last manual install of WordPress in the course, I promise.
What we're going to do together in this demo lesson is create an Amazon Linux AMI for the animals for life business but one which includes some custom configuration and an install of WordPress ready and waiting to be initially configured.
So this is a fairly common use case so let's jump in and get started.
Now in order to perform this demo you're going to need some infrastructure, make sure you're logged into the general AWS account, so the management account of the organization and as always make sure that you have the Northern Virginia region selected.
Now attached to this lesson is a one-click deployment link, go ahead and click that link.
This will open the quick create stack screen, it should automatically be populated with the AMI demo as the stack name, just scroll down to the bottom, check this capabilities acknowledgement box and then click on create stack.
We're going to need this stack to be in a create complete state so go ahead and pause the video and we can resume once the stack moves into create complete.
Okay so that stacks now moved into a create complete state, we're good to continue with the demo.
Now you're going to be using some command line commands within an EC2 instance as part of creating an Amazon machine image so also attached to this lesson is the lessons command document which contains all of those commands so go ahead and open that document.
Now you might recognize these as the same commands that you used when you were performing a manual WordPress installation and that's the case we're running the same manual installation process as part of setting up our animals for life AMI so you're going to need all of these commands but as you've already experienced them in the previous demo lesson I'm going to run through them a lot quicker in this demo lesson so go back to the AWS console and we need to move to the EC2 area of the console so click on the services drop down, type EC2 into this search box and then open that in a new tab.
Once you're there, go ahead and click on running instances, and close down any dialogues about console changes so we maximize the amount of screen space that we have. We're going to connect to this A4L public EC2 instance; this is the instance that we're going to use to create our AMI, so we're going to set the instance up manually how we want it to be and then use it to generate an AMI. We need to connect to this instance, so right click, select connect. We're going to use EC2 Instance Connect to do the work within our browser, so make sure the username is ec2-user and then connect to this instance.

Once connected, we're going to run through the commands to install WordPress really quickly. We'll start again by setting the variables that we'll use throughout the installation, so you can just go ahead and copy and paste those straight in and press enter. Now we're going to run through the next set of commands really quickly because you used them in the previous demo lesson. First we're going to install the MariaDB server, Apache and the wget utility. While that's installing, copy all of the commands from step 3; these are commands which enable and start Apache and MariaDB. Go ahead and paste all four of those in and press enter. Now Apache and MariaDB are both set to start when the instance boots, as well as being currently started. I'll just clear the screen to make this easier to see.

Next we're going to set the DB root password, again using the contents of the variable that you set at the start. Next we download WordPress. Once it's downloaded, we move into the web root folder, we extract the download, and we copy the files from within the WordPress folder that we've just extracted into the current folder, which is the web root. Once we've done that, we remove the WordPress folder itself and then tidy up by deleting the download. I'm going to clear the screen.

We copy the template configuration file into its final file name, so wp-config.php. Then we're going to replace the placeholders in that file: we start with the database name, using the variable that you set at the start; next the database user, which you also set at the start; and finally the database password. Then we set the ownership on all of these files to be the Apache user and the Apache group. Clear the screen.

Next we need to create the DB setup script that I demonstrated in the previous demo, so we need to run a collection of commands: the first to enter the create database command, the next to enter the create user command and set that password, the next to grant permissions on the database to that user, and then one to flush the permissions. Then we run that script using the MySQL command line interface, which runs all of those commands and performs all of those operations, and then we tidy up by deleting that file.

Now at this point we've done the exact same process that we did in the previous demo: we've installed and set up WordPress. If everything's working okay, we can go back to the AWS console, click on instances, select the running a4l-public EC2 instance and copy down its IP address. Again, make sure you copy that down; don't click the open address link. Then open that in a new tab. If everything's working as expected, you should see the WordPress installation dialogue. Now this time, because we're creating an AMI, we don't want to perform the installation; we want to make sure that anyone who uses this AMI is also greeted with this installation dialogue. So we're going to leave it at this point, we're not going to perform the installation, and instead we're going to go back to the EC2 instance.

Now because this EC2 instance is for the Animals for Life business, we want to customize it and make sure that everybody knows that this is an Animals for Life EC2 instance. To do that, we're going to install an animal-themed utility called cowsay. I'm going to clear the screen to make it easier to see, and then, just to demonstrate exactly what cowsay does, I'm going to run cowsay "oh hi". If all goes well, we see a cow using ASCII art saying the "oh hi" message that we just typed. We're going to use this to create a message-of-the-day welcome shown when anyone connects to this EC2 instance. To do that, we're going to create a file inside the configuration folder of this EC2 instance, so we're going to use sudo nano to create the file /etc/update-motd.d/40-cow. This is the file that's going to be used to generate the output when anyone logs in to this EC2 instance. We're going to copy in these two lines and then press enter, which means that when anyone logs into the EC2 instance, they're going to get an animal-themed welcome. Use control+O to save that file and control+X to exit.

Clear the screen to make it easier to see. We're going to make sure the file that we've just edited has the correct permissions, then we're going to force an update of the message of the day, so this is what's displayed when anyone logs into this instance. And then finally, now that we've completed this configuration, we're going to reboot this EC2 instance using this command. Just to illustrate how this works, I'm going to close down that tab and return to the EC2 console, and give this a few moments to restart.

That should have rebooted by now, so we're going to select it, right click, go to connect, and again use EC2 Instance Connect. Assuming everything's working, now when we connect to the instance we'll see an animal-themed login banner. This is just a nice way that we can ensure that anyone logging into this instance understands that (a) it uses the Amazon Linux 2 AMI and (b) it belongs to Animals for Life. So we've created this instance using the Amazon Linux 2 AMI, we've performed the WordPress installation and initial configuration, we've customized the banner, and now we're going to use this as our template instance to create our AMI, which can then be used to launch other instances.

Okay, so this is the end of part one of this lesson. It was getting a little bit on the long side and so I wanted to add a break; it's an opportunity just to take a rest or grab a coffee. Part 2 will be continuing immediately from the end of part one. So go ahead, complete the video, and when you're ready, join me in part two.
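The message-of-the-day step above can be sketched as a two-line script. The exact banner text below is an assumption for illustration; on the instance the file lives at /etc/update-motd.d/40-cow and is written as root, so here it's built in /tmp where it can be inspected without privileges:

```shell
# Sketch of the motd script the lesson creates (banner text is illustrative).
# On the instance this would be /etc/update-motd.d/40-cow, written via sudo.
cat > /tmp/40-cow <<'EOF'
#!/bin/sh
cowsay "Amazon Linux 2 AMI - Animals for Life"
EOF

# Make it executable, as the lesson's permissions step does.
chmod 755 /tmp/40-cow
cat /tmp/40-cow
```

On the instance you'd then run `sudo update-motd` to regenerate the banner, and `sudo reboot` as the final step.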
-
-
learn.cantrill.io
-
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
So this is the folder containing the WordPress installation files.
Now there's one particular file that's really important, and that's the configuration file.
So there's a file called wp-config-sample.php, and this is actually the file that contains a template of the configuration items for WordPress.
So what we need to do is to take this template and change the file name to be the proper file name, so wp-config.php.
So we're going to create a copy of this file with the correct name.
And to do that, we run this command.
So we're copying the template or the sample file to its real file name, so wp-config.php.
And this is the name that WordPress expects when it initially loads its configuration information.
So run that command, and that now means that we have a live config file.
Now this command isn't in the instructions, but if I just take a moment to open up this file, you don't need to do this.
I'm just demonstrating what's in this file for your benefit.
But if I run a sudo nano and then wp-config.php, this is how the file looks.
So this has got all the configuration information in.
So it stores the database name, the database user, the database host, and lots of other information.
Now notice how it has some placeholders.
So this is where we would need to replace the placeholders with the actual configuration information.
So the database name itself, the host name, the database username, the database password, all that information would need to be replaced.
Now we're not going to type this in manually, so I'm going to control X to exit out of this, and then clear the screen again to make it easy to see.
We're going to use the Linux utility sed, or S-E-D.
And this is a utility which can perform a search and replace within a text file.
It's actually much more complex and capable than that.
It can perform many different manipulation operations.
But for this demonstration, we're going to use it as a simple search and replace.
Now we're going to do this a number of times.
First, we're going to run this command, which is going to replace this placeholder.
Remember, this is one of the placeholders inside the configuration file that I've just demonstrated, wp-config.
We're going to replace the placeholder here with the contents of the variable name, dbname, that we set at the start of this demo.
So this is going to replace the placeholder with our actual database name.
So I'm going to enter that so you can do the same.
We're going to run the sed command again, but this time it's going to replace the username placeholder with the dbuser variable that we set at the start of this demo.
So use that command as well.
And then lastly, it will do the same for the database password.
So type or copy and paste this command and press enter.
And that now means that this wp-config has the actual configuration information inside.
And just to demonstrate that, you don't need to do this part.
I'll just do it to demonstrate.
If I edit this file again, you'll see that all of these placeholders have actually been replaced with actual values.
So I'm going to control X out of that and then clear the screen.
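The sed substitutions just described can be sketched stand-alone on a throwaway file. The placeholder line below matches the WordPress sample config's format, but the file path and variable value here are illustrative, not the real web root or the lesson's actual credentials:

```shell
# Stand-alone sketch of the placeholder replacement, using a scratch file
# instead of the real /var/www/html/wp-config.php.
DBName='a4lwordpress'   # illustrative value; the real one comes from the lesson's variables

# A one-line stand-in for the wp-config template ('EOF' quoting stops expansion).
cat > /tmp/wp-config-demo.php <<'EOF'
define( 'DB_NAME', 'database_name_here' );
EOF

# Same pattern as the lesson: search for the placeholder, replace in place
# with the contents of the variable set earlier.
sed -i "s/'database_name_here'/'$DBName'/" /tmp/wp-config-demo.php
cat /tmp/wp-config-demo.php
```

The lesson repeats this with the username and password placeholders, each sed call swapping one placeholder for one variable's contents.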
And that concludes the configuration for the WordPress application.
So now it's ready.
Now it knows how to communicate with the database.
What we need to do to finish off the configuration though is just to make sure that the web server has access to all of the files within this folder.
And to do that, we use this command.
So we're making sure that we use the chown command and set the ownership of all of the files in this folder and any subfolders to be the Apache user and the Apache group.
And the Apache user and Apache group belong to the web server.
So this just makes sure that the web server is able to access and control all of the files in the web root folder.
So run that command and press enter.
And that concludes the installation part of the WordPress application.
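The recursive ownership change can be illustrated without root by chowning a scratch tree to the current user; on the instance the real command is the sudo chown -R to apache:apache described above:

```shell
# Illustration of chown -R: on the instance the lesson runs
#   sudo chown -R apache:apache /var/www/html
# Here we chown a scratch tree to the current user (a no-op ownership-wise,
# but it shows the recursive behaviour and how to verify it).
mkdir -p /tmp/chown-demo/sub
touch /tmp/chown-demo/sub/file.txt
chown -R "$(id -un)" /tmp/chown-demo

# Report the owner of a nested file (GNU stat).
stat -c '%U' /tmp/chown-demo/sub/file.txt
```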
There's one final thing that we need to do and that's to create the database that WordPress will use.
So I'm going to clear the screen to make it easy to see.
Now what we're going to do in order to configure the database is we're going to make a database setup script.
We're going to put this script inside the forward slash TMP folder and we're going to call it DB.setup.
So what we need to do is enter the commands into this file that will create the database.
After the database is created, it needs to create a database user and then it needs to grant that user permissions on that database.
Now again, instead of manually entering this, we're going to use those variable names that were created at the start of the demo.
So we're going to run a number of commands.
These are all in the lessons commands document.
The first one is this.
So this echoes this text and because it has a variable name in, this variable name will be replaced by the actual contents of the variable.
Then it's going to take this text with the replacement of the contents of this variable and it's going to enter that into this file.
So forward slash TMP, forward slash DB setup.
So run that and that command is going to create the WordPress database.
Then we're going to use this command and this is the same so it echoes this text but it replaces these variable names with the contents of the variables.
This is going to create our WordPress database user.
It's going to set its password and then it's going to append this text to the DB setup file that we're creating.
Now all of these are actually database commands that we're going to execute within the MariaDB database.
So enter that to add that line to DB.setup.
Then we have another line which uses the same architecture as the ones above it.
It echoes the text.
It replaces these variable names with the contents and then outputs that to this DB.setup file and this command grants our database user permissions to our WordPress database.
And then the last command is this one which just flushes the privileges and again we're going to add this to our DB.setup script.
So now I'm just going to cat the contents of this file so you can just see exactly what it looks like.
So cat and then space forward slash TMP, forward slash DB.setup.
So as you'll see it's replaced all of these variable names with the actual contents.
So this is what the contents of this script actually looks like.
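The script-building steps above can be sketched as follows. The variable values and the exact SQL wording are illustrative (the lesson's real statements come from its commands document), but the echo-and-append pattern is the same:

```shell
# Sketch of building /tmp/db.setup from variables (values are placeholders).
DBName='a4lwordpress'
DBUser='a4lwordpress'
DBPassword='REPLACEME'

# First echo creates the file (>), the rest append (>>); the shell expands
# each $variable before the text is written.
echo "CREATE DATABASE $DBName;" > /tmp/db.setup
echo "CREATE USER '$DBUser'@'localhost' IDENTIFIED BY '$DBPassword';" >> /tmp/db.setup
echo "GRANT ALL ON $DBName.* TO '$DBUser'@'localhost';" >> /tmp/db.setup
echo "FLUSH PRIVILEGES;" >> /tmp/db.setup

cat /tmp/db.setup

# On the instance the script is then fed to MariaDB and tidied up, roughly:
#   mysql -u root --password=$DBRootPassword < /tmp/db.setup
#   sudo rm /tmp/db.setup
```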
So these are commands which will be run by the MariaDB database platform.
To run those commands we use this.
So this is the MySQL command line interface.
So we're using MySQL to connect to the MariaDB database server.
We're using the username of root.
We're passing in the password and then using the contents of the DB root password variable.
And then once we've authenticated to the database, we're passing in the contents of our DB.setup script.
And so this means that all of the lines of our DB.setup script will be run by the MariaDB database and this will create the WordPress database, the WordPress user and configure all of the required permissions.
So go ahead and press enter.
That command is run by the MariaDB platform and that means that our WordPress database has been successfully configured.
And then lastly just to keep things secure because we don't want to leave files laying around on the file system with authentication information inside.
We're just going to run this command to delete this DB.setup file.
Okay, so that concludes the setup process for WordPress.
It's been a fairly long intensive process but that now means that we have an installation of WordPress on this EC2 instance, a database which has been installed and configured.
So now what we can do is to go back to the AWS console, click on instances.
We need to select the A4L-PublicEC2 and then we need to locate its IP address.
Now make sure that you don't use this open address link because this will attempt to open the IP address using HTTPS and we don't have that configured on this WordPress instance.
Instead, just copy the IP address into your clipboard and then open that in a new tab.
If everything's successful, you should see the WordPress installation dialog and just to verify this is working successfully, let's follow this process through.
So pick English, United States for the language.
For the blog title, just put all the cats and then admin as the username.
You can accept the default strong password.
Just copy that into your clipboard so we can use it to log in in a second and then just go ahead and enter your email.
It doesn't have to be a correct one.
So I normally use test@test.com and then go ahead and click on install WordPress.
You should see a success dialog.
Go ahead and click on login.
Username will be admin, the password that you just copied into your clipboard and then click on login.
And there you go.
We've got a working WordPress installation.
We're not going to configure it in any detail but if you want to just check out that it works properly, go ahead and click on this all the cats at the top and then visit site and you'll be able to see a generic WordPress blog.
And that means you've completed the installation of the WordPress application and the database using a monolithic architecture on a single EC2 instance.
So this has been a slow process.
It's been manual and it's a process which is wide open for mistakes to be made at every point throughout that process.
Can you imagine doing this twice?
What about 10 times?
What about a hundred times?
It gets pretty annoying pretty quickly.
In reality, this is never done manually.
We use automation or infrastructure as code systems such as cloud formation.
And as we move through the course, you're going to get experience of using all of these different methods.
Now that we're close to finishing up the basics of VPC and EC2 within the course, things will start to get much more efficient quickly because I'm going to start showing you how to use many of the automation and infrastructure as code services within AWS.
And these are really awesome to use.
And you'll see just how much power is granted to an architect, a developer, or an engineer by using these services.
For now though, that is the end of this demo lesson.
Now what we're going to do is to clear up our account.
So we need to go ahead and clear all of this infrastructure that we've used throughout this demo lesson.
To do that, just move back to the AWS console.
If you still have the cloud formation tab open and move back to that tab, otherwise click on services and then click on cloud formation.
If you don't see it anywhere, you can use this box to search for it. Select the WordPress stack, select delete, and then confirm that deletion.
And that will delete the stack, clear up all of the infrastructure that we've used throughout this demo lesson and the account will now be in the same state as it was at the start of this lesson.
So from this point onward in the course, we're going to start using automation.
Now there is a lesson coming up in a little while in this section of the course, where you're going to create an Amazon machine image which is going to contain a pre-baked copy of the WordPress application.
So as part of that lesson, you are going to be required to perform one more manual installation of WordPress, but that's going to be part of automating the installation.
So you'll start to get some experience of how to actually perform automated installations and how to design architectures which have WordPress as a component.
At this point though, that's everything I wanted to cover.
So go ahead, complete this video, and when you're ready, I look forward to you joining me in the next.
-
-
learn.cantrill.io
-
Welcome back and in this lesson we're going to be doing something which I really hate doing and that's using WordPress in a course as an example.
Joking aside though WordPress is used in a lot of courses as a very simple example of an application stack.
The problem is that most courses don't take this any further.
But in this course I want to use it as one example of how an application stack can be evolved to take advantage of AWS products and services.
What we're going to be using WordPress for in this demo is to give you experience of how a manual installation of a typical application stack works in EC2.
We're going to be doing this so you can get the experience of how not to do things.
My personal belief is that to fully understand the advantages that automation features within AWS provide, you need to understand what a manual installation is like and what problems you can experience doing that manual installation.
As we move through the course we can compare this to various different automated ways of installing software within AWS.
So you're going to get the experience of bad practices, good practices and the experience to be able to compare and contrast between the two.
By the end of this demonstration you're going to have a working WordPress site but it won't have any high availability because it's running on a single EC2 instance.
It's going to be architecturally monolithic with everything running on the one single instance.
In this case that means both the application and the database.
The design is fairly straightforward.
It's just the Animals for Life VPC.
We're going to be deploying the WordPress application into a single subnet, the WebA public subnet.
So this subnet is going to have a single EC2 instance deployed into it and then you're going to be doing a manual install onto this instance and the end result is a working WordPress installation.
At this point it's time to get started and implement this architecture.
So let's go ahead and switch over to our AWS console.
To get started with this demo lesson you're going to need to do a few preparation steps.
First just make sure that you're logged in to the general AWS account, so the management account of the organization and as always make sure you have the Northern Virginia region selected.
Now attached to this lesson is a one-click deployment for the base infrastructure that we're going to use.
So go ahead and open the one-click deployment link that's attached to this lesson.
That link is going to take you to the Quick Create Stack screen.
Everything should be pre-populated.
The stack name should be WordPress.
All you need to do is scroll down towards the bottom, check this capabilities box and then click on Create Stack.
And this stack is going to need to be in a Create Complete state before we move on with the demo lesson.
So go ahead and pause this video, wait for the stack to change to Create Complete and then we're good to continue.
Also attached to this lesson is a Lessons Command document which lists all of the commands that you'll be using within the EC2 instance throughout this demo lesson.
So go ahead and open that as well.
So that should look something like this and these are all of the commands that we're going to be using.
So these are the commands that perform a manual WordPress installation.
Now that that stack's completed and we've got the Lesson Commands document open, the next step is to move across to the EC2 console because we're going to actually install WordPress manually.
So click on the Services drop-down and then locate EC2 in this All Services part of the screen.
If you've recently visited it, it should be in the Recently Visited section under Favorites or you can go ahead and type EC2 in the search box and then open that in a new tab.
And then click on Instances running and you should see one single instance which is called A4L-PublicEC2.
Go ahead and right-click on this instance.
This is the instance we'll be installing WordPress within.
So right-click, select Connect.
We're going to be using our browser to connect to this instance so we'll be using Instance Connect just verify that the username is EC2-user and then go ahead and connect to this instance.
Now again, I fully understand that a manual installation of WordPress might seem like a waste of time but I genuinely believe that you need to understand all the problems that come from manually installing software in order to understand the benefits which automation provides.
It's not just about saving time and effort.
It's also about error reduction and the ability to keep things consistent.
Now I always like to start my installations or my scripts by setting variables which will store the configuration values that everything from that point forward will use.
So we're going to create four variables.
One for the database name, one for the database user, one for the database password and then one for the root or admin password of the database server.
So let's start off by using the pre-populated values from the Lesson Commands document.
So that's all of those variables set and we can confirm that those are working by typing echo and then a space and then a dollar and then the name of one of those variables.
So for example, dbname and press Enter and that will show us the value stored within that variable.
So now we can use these at later points of the installation.
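As a concrete sketch, here is what that variable setup looks like in the shell. The variable names follow the lesson's description, but the values below are placeholders, not the real ones from the Lesson Commands document:

```shell
# Placeholder values -- the real ones come from the Lesson Commands document.
DBName='a4lwordpress'
DBUser='a4lwordpress'
DBPassword='REPLACEME'
DBRootPassword='REPLACEME'

# Confirm a variable is set by echoing it, as the lesson does.
echo $DBName
```

Later commands can then reference `$DBName`, `$DBUser` and so on instead of hard-coding values, which keeps the whole installation consistent and easy to re-run with different credentials.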
So at this point I'm going to clear the screen to keep it easy to see, and stage two of this installation process is to install some system software.
So there are a few things that we need to install in order to allow a WordPress installation.
We'll install those using the DNF package manager.
We need to give it admin privileges, which is why we use sudo, and then the packages that we're going to install are the database server, which is mariadb-server, the Apache web server, which is httpd, and then a utility called wget, which we're going to use to download further components of the installation.
So go ahead and type or copy and paste that command and press Enter and that installation process will take a few moments and it will go through installing that software and any of the prerequisites.
They're done so I'll clear the screen to keep this easy to read.
Now that all those packages are installed we need to start both the web server and the database server and ensure that both of them are started if ever the machine is restarted.
So to do that we need to enable and start those services.
So enabling and starting means that both of the services are both started right now and they'll start if the machine reboots.
So first we'll use this command.
So we're using admin privileges again, systemctl which allows us to start and stop system processes and then we use enable and then HTTPD which is the web server.
So type and press enter and that ensures that the web server is enabled.
We need to run the same command again but this time specifying MariaDB to ensure that the database server is enabled.
So type or copy and paste and press enter.
So that means both of those processes will start if ever the instance is rebooted and now we need to manually start both of those so they're running and we can interact with them.
So we need to use the same structure of command but instead of enable we need to start both of these processes.
So first the web server and then the database server.
So that means the EC2 instance now has a running web and database server, both of which are required for WordPress.
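The enable-then-start sequence above can be sketched as a loop. This is a dry run that only prints the commands, since actually running them needs root plus the httpd and mariadb packages installed on the instance:

```shell
# Dry-run sketch: print the enable/start commands for both services
# rather than executing them (the real thing needs root on the instance).
for svc in httpd mariadb; do
  echo "sudo systemctl enable $svc"   # start on every boot
  echo "sudo systemctl start $svc"    # start right now
done
```

On the instance you would run the four printed commands directly; `systemctl enable --now` is a common shorthand that combines both steps.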
So I'll clear the screen to keep this easy to read.
Next we're going to move to stage 4 and stage 4 is that we need to set the root password of the database server.
So this is the username and password that will be used to perform all of the initial configuration of the database server.
Now we're going to use this command and you'll note that for password we're actually specifying one of the variables that we configured at the start of this demo.
So we're using the DB root password variable that we configured right at the start.
So go ahead and copy and paste or type that in and press enter and that sets the password for the root user of this database platform.
The next step which is step 5 is to install the WordPress application files.
Now to do that we need to install these files inside what's known as the web root.
So whenever you browse to a web server either using an IP address or a DNS name if you don't specify a path so if you just use the server name for example netflix.com then it loads those initial files from a folder known as the web root.
Now on this particular server the web root is stored in /var/www/html, so we need to download WordPress into that folder.
Now we're going to use this command Wget and that's one of the packages that we installed at the start of this lesson.
So we're giving it admin privileges and we're using Wget to download latest.tar.gz from wordpress.org and then we're putting it inside this web root.
So /var/www/html.
So go ahead and copy and paste or type that in and press enter.
That'll take a few moments depending on the speed of the WordPress servers and that will store latest.tar.gz in that web root folder.
Next we need to move into that folder, so cd /var/www/html, and press enter.
We need to use a Linux utility called tar to extract that file.
So sudo, then tar, then the command line options -zxvf, and then the name of the file, latest.tar.gz. Copy and paste or type that in and press enter, and that will extract the WordPress download into this folder.
So now if we do an ls -la you'll see that we have a WordPress folder and inside that folder are all of the application files.
Now we actually don't want them inside a WordPress folder.
We want them directly inside the web root.
So the next thing we're going to do is this command and this is going to copy all of the files from inside this WordPress folder to . and . represents the current folder.
So it's going to copy everything inside WordPress into the current working directory which is the web root directory.
So enter that and that copies all of those files.
And now if we do another listing you'll see that we have all of the WordPress application files inside the web root.
And then lastly for the installation part we need to tidy up the mess that we've made.
So we need to delete this WordPress folder and the download file that we just created.
So to do that we'll run an rm -r and then WordPress to delete that folder.
And then we'll delete the download with sudo rm and then a space and then the name of the file.
So latest.tar.gz.
And that means that we have a nice clean folder.
So I'll clear the screen to make it easy to see.
And then I'll just do another listing.
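The download/extract/copy/tidy sequence just described can be simulated end to end in a scratch directory. A locally built archive stands in for latest.tar.gz from wordpress.org, and /tmp/webroot-demo stands in for /var/www/html:

```shell
# Build a stand-in for the WordPress download (a tarball containing a
# wordpress/ folder), so the whole sequence runs without network access.
mkdir -p /tmp/webroot-demo && cd /tmp/webroot-demo
mkdir -p wordpress-src/wordpress
echo '<?php // placeholder' > wordpress-src/wordpress/index.php
tar -czf latest.tar.gz -C wordpress-src wordpress
rm -r wordpress-src

# The lesson's sequence (minus sudo, since this is a scratch directory):
tar -zxvf latest.tar.gz        # extract -> ./wordpress/
cp -rpf wordpress/* .          # copy its contents into the "web root" (.)
rm -r wordpress                # remove the now-empty source folder
rm latest.tar.gz               # delete the download
ls -la                         # clean web root with the files in place
```

The end state mirrors the instance: application files sit directly in the web root, with no leftover wordpress/ folder or archive.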
Okay so this is the end of part one of this lesson.
It was getting a little bit on the long side and so I wanted to add a break.
It's an opportunity just to take a rest or grab a coffee.
Part two will be continuing immediately from the end of part one.
So go ahead complete the video and when you're ready join me in part two.
-
-
www.biorxiv.org
-
Editors Assessment:
PhysiCell is an open source multicellular systems simulator for studying many interacting cells in dynamic tissue microenvironments. As part of the PhysiCell ecosystem of tools and modules, this paper presents a PhysiCell addon, PhysiMeSS (MicroEnvironment Structures Simulation), which allows the user to accurately represent the extracellular matrix (ECM) as a network of fibres. This can specify rod-shaped microenvironment elements such as the matrix fibres (e.g. collagen) of the ECM, allowing the PhysiCell user to investigate physical interactions with cells and other fibres. Reviewers asked for additional clarification on a number of features, and the paper now makes clear that future releases will provide full 3D compatibility and will include work on fibrogenesis, i.e. the creation of new ECM fibres by cells.
This evaluation refers to version 1 of the preprint
-
Abstract: The extracellular matrix is a complex assembly of macro-molecules, such as collagen fibres, which provides structural support for surrounding cells. In the context of cancer metastasis, it represents a barrier that migrating cells need to degrade in order to leave the primary tumor and invade further tissues. Agent-based frameworks, such as PhysiCell, are often used to represent the spatial dynamics of tumor evolution. However, they typically only implement cells as agents, which are represented by either a circle (2D) or a sphere (3D). In order to accurately represent the extracellular matrix as a network of fibres, we require a new type of agent represented by a segment (2D) or a cylinder (3D). In this article, we present PhysiMeSS, an addon of PhysiCell, which introduces a new type of agent to describe fibres and their physical interactions with cells and other fibres. The PhysiMeSS implementation is publicly available at https://github.com/PhysiMeSS/PhysiMeSS, as well as in the official PhysiCell repository. We also provide simple examples to describe the extended possibilities of this new framework. We hope that this tool will serve to tackle important biological questions such as diseases linked to dysregulation of the extracellular matrix, or the processes leading to cancer metastasis.
This work has been published in GigaByte Journal under a CC-BY 4.0 license (https://doi.org/10.46471/gigabyte.136), and has published the reviews under the same license. It is also part of GigaByte’s PhysiCell Ecosystem series for tools that utilise or build upon the PhysiCell platform: https://doi.org/10.46471/GIGABYTE_SERIES_0003 These reviews are as follows.
Reviewer 1. Erika Tsingos
One important aspect that the authors need to be aware of and mention explicitly is that their algorithm for fiber set-up leads to differences in fiber concentration and orientation at the boundary, because fibers that are not wholly contained in the simulation box are discarded. The effect of this choice can be seen upon close inspection of Figure 2: In the left panel, fibers align tangentially to the boundary, so locally the orientation is not isotropic. Similarly, in Figure 2 middle and right panels, the left and right boundaries have lower local fiber concentration. This issue could potentially affect the outcome of a simulation, so it's important that readers are made aware so that if necessary they can address this with a modified algorithm. ----- Minor comments: In the abstract, the phrasing implies agent-based frameworks are only used for tumour evolution. I would rephrase such that it is clear that tumour evolution is one example among many possible applications. I suggest adding a dash to improve readability in the following sentence in the introduction: "However, we note that the applications of PhysiMeSS stretch beyond those wanting to model the ECM -- as the new cylindrical/rod-shaped agents could be used to model blood vessel segments or indeed create obstacles within the domain." In the implementation section, add a short sentence to clarify if PhysiMeSS is "backwards compatible" with older PhysiCell models that do not use the fiber agent. Notation in equations: A single vertical line is absolute value, and two vertical lines is Euclidean norm? The explanation of Equation 1 implies that the threshold v_{max} should limit the parallel force, but the text does not explicitly say if ||v|| is restricted to be less or equal to v_{max}. Is that the case? In Equation 2, I don't see the need to square the terms in parenthesis. If |v*l_f| is an absolute value it is always positive. 
Since l_f is normalized the value of the dot product is only between 0 and the magnitude of v. Am I missing something? Are p_x and p_y in the moment arm magnitude coordinates with respect to the fiber center? Table 2: It would be helpful to have a separate column with the corresponding symbols used throughout the text and equations. Figure 5/6: Missing crosslinker color legend. ----- Typos/grammar: "As an aside, an not surprisingly," --> As an aside, and not surprisingly, "This may either be because as a cell tries to migrate through the domain fibres which act as obstacles in the cell’s path," --> remove the word "which"
Reviewer 2. Jinseok Park
Noel et al. introduce PhysiMeSS - a new PhysiCell addon for ECM remodelling. This new addon is a powerful tool to simulate ECM remodelling and has the potential to be applied to mechanobiology research, which makes my enthusiasm high. I would like to give a few suggestions.
1) Basically, it is an addon of PhysiCell. So, I suggest describing PhysiCell and how to add the addon for readers who are not familiar with these tools. Also, screen captures of tool manipulation would be very helpful.
2) Figures 2 and 3 exhibit the outcome of the addon showing ECM remodelling. I would suggest showing actual ECM images modelled by the addon.
3) The equations reflect four interactions, and in my understanding the authors describe cell-fibre, fibre-cell, and fibre-fibre interactions. I suggest generating an example corresponding to the modulation of each interaction and explaining how the addon results explain the physiological phenomena. For instance, focal adhesion may be a key modulator of cell-fibre or fibre-cell interaction, presumably alpha or beta fibre. I would demonstrate how different parameters generate different results and explain the physiological situation modelled by the results.
4) Similarly, Figures 5 and 6 only show one example and no comparison with other conditions. For example, it would be better to exhibit no-pressure/pressure conditions. It may help readers estimate how pressure impacts cell proliferation.
Reviewer 3. Simon Syga
The presented paper "PhysiMeSS - A New PhysiCell Addon for Extracellular Matrix Modelling" is a useful extension to the popular simulation framework PhysiCell. It enables the simulation of cell populations interacting with the extracellular matrix, which is represented by a set of line segments (2D) or cylinders (3D). These represent a new kind of agent in the simulation framework. The paper outlines the basic implementation, properties and interactions of these agents. I recommend publication after a small set of minor issues has been addressed. Please refer to the attached marked-up PDF file for these minor issues and suggestions. https://gigabyte-review.rivervalleytechnologies.comdownload-api-file?ZmlsZV9wYXRoPXVwbG9hZHMvZ3gvVFIvNTUwL2d4LVRSLTE3MTk5NDYwNjlfU1kucGRm
-
-
learn.cantrill.io
-
Welcome back and in this video we're going to interact with instance store volumes.
Now this part of the demo does come at a cost.
This isn't inside the free tier because we're going to be launching some instances which are fairly large and are not included in the free tier.
The demo has a cost of approximately 13 cents per hour and so you should only do this part of the demo if you're willing to accept that cost.
If you don't want to accept those costs then you can go ahead and watch me perform these within my test environment.
So to do this we're going to go ahead and click on instances and we're going to launch an instance manually.
So I'm going to click on launch instances.
We're going to name the instance, Instance Store Test so put that in the name box.
Then scroll down, pick Amazon Linux, make sure Amazon Linux 2023 is selected and the architecture needs to be 64 bit x86.
Scroll down and then in the instance type box click and we need to find a different type of instance.
This is going to be one that supports instance store volumes.
So scroll down and we're looking for m5dn.large.
This is a type of instance which includes one instance store volume.
So select that then scroll down a little bit more and under key pair click in the box and select proceed without a key pair not recommended.
Scroll down again and under network settings click on edit.
Click in the VPC drop down and select a4l-vpc1.
Under subnet make sure sn-web-a is selected.
Make sure enabled is selected for both of the auto assign public IP drop downs.
Then we're going to select an existing security group click the drop down select the EBS demo instance security group.
It will have some random after it but that's okay.
Then scroll down and under storage we're going to leave all of the defaults.
What you are able to do though is to click on show details next to instance store volumes.
This will show you the instance store volumes which are included with this instance.
You can see that we have one instance store volume; it's 75 GB in size and it has a slightly different device name.
Note that once we're inside the operating system, the root EBS volume typically claims /dev/nvme0n1 and this instance store volume shows up as /dev/nvme1n1, which is the name we'll use shortly.
Now all of that looks good so we're just going to go ahead and click on launch instance.
Then click on view all instances, and initially it will be in a pending state; eventually it will move into a running state.
Then we should probably wait for the status check column to change from initializing to 2 out of 2.
Go ahead and pause the video and wait for this status check to change to be fully green.
It should show 2 out of 2 status checks.
That's now in a running state with 2 out of 2 checks so we can go ahead and connect to this instance.
Before we do though just go ahead and select the instance and just note the instances public IP version 4 address.
Now this address is really useful because it will change if the EC2 instance moves between EC2 hosts.
So it's a really easy way that we can verify whether this instance has moved between EC2 hosts.
So just go ahead and note down the IP address of the instance that you have if you're performing this in your own environment.
We're going to go ahead and connect to this instance though so right click, select connect, we'll be choosing instance connect, go ahead and connect to the instance.
Now many of these commands that we'll be using should by now be familiar.
Just refer back to the lessons command document if you're unsure because we'll be using all of the same commands.
First we need to list all of the block devices which are attached to this instance, and we can do that with lsblk.
This time it looks a little bit different because we're using instance store rather than additional EBS volumes.
So in this particular case, I want you to look for the 8G volume; this is the root volume.
This represents the boot or root volume of the instance.
Remember that this particular instance type came with a 75GB instance store volume so we can easily identify it's this one.
Now to check that we can verify whether there's a file system on this instance store volume.
If we run this command, so the same command we've used previously, sudo file -s and then the ID of this volume, /dev/nvme1n1, you'll see it reports data.
And if you recall from the previous parts of this demo series this indicates that there isn't a file system on this volume.
We're going to create one and to do that we use this command again it's the same command that we've used previously just with the new volume id.
So press enter to create a file system on this raw block device this instance store volume and then we can run this command again to verify that it now has a file system.
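If you want to experiment with this raw-device-versus-filesystem behaviour without paying for an m5dn.large, the `file -s` behaviour can be reproduced locally against a scratch disk image. This is only a sketch, not the demo itself: it formats a regular file with ext4 instead of /dev/nvme1n1 with xfs, precisely so it needs no root access and no AWS instance.

```shell
# Create a 64 MiB sparse file to stand in for the raw instance store device.
truncate -s 64M scratch.img

# Before a filesystem exists, file -s reports just "data",
# exactly like the raw device in the demo.
file -s scratch.img

# Make a filesystem on it (the demo uses xfs; ext4 is used here because
# mkfs.ext4 will format a regular file without root when forced with -F).
mkfs.ext4 -F -q scratch.img

# Now file -s identifies a filesystem instead of raw data.
file -s scratch.img
```

The before/after contrast is the whole point: `file -s` reporting "data" means no recognisable filesystem, which is exactly the check used in the demo.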
To mount it we can follow the same process that we did in the earlier stages of this demo series.
We'll need to create a directory for this volume to be mounted into; this time we'll call it /instancestore.
So create that folder, and then we're going to mount this device into that folder: sudo mount, then the device ID, and then the mount point, the folder that we've just created.
So press enter, and that means that this block device, this instance store volume, is now mounted into this folder.
And if we run a df -k and press enter, you can see that it's now mounted.
Now we're going to move into that folder by typing cd /instancestore, and to keep things efficient we're going to create a file called instancestore.txt.
Rather than using an editor, we'll just use sudo touch and then the name of the file, and this will create an empty file.
If we do an ls -la and press enter, you can see that that file exists.
So now that we have this file stored on a file system which is running on this instance store volume let's go ahead and reboot this instance.
Now we need to be careful we're not going to stop and start the instance we're going to restart the instance.
Restarting is different from stopping and starting.
So to do that we're going to close this tab move back to the ec2 console so click on instances right click on instance store test and select reboot instance and then confirm that.
Note what this IP address is before you initiate the reboot operation and then just give this a few minutes to reboot.
Then right click and select connect.
Using instance connect go ahead and connect back to the instance.
And again if it appears to hang at this point then you can just wait for a few moments and then connect again.
But in this case I've left it long enough and I'm connected back into the instance.
Now once I'm back in the instance, if I run a df -k and press enter, note how that file system is not mounted after the reboot.
Now that's fine because we didn't configure the Linux operating system to mount this file system when the instance is restarted.
But what we can do is do an LS BLK again to list the block device.
We can see that it's still there and we can manually mount it back in the same folder as it was before the reboot.
To do that we run this command.
So it's mounting the same volume ID the same device ID into the same folder.
So go ahead and run that command and press enter.
Then if we use cd /instancestore and press enter, and then do an ls -la, we can see that this file is still there.
Now the file is still there because instance store volumes do persist through the restart of an EC2 instance.
Restarting an EC2 instance does not move the instance from one EC2 host to another.
And because instance store volumes are directly attached to an EC2 host this means that the volume is still there after the machine has restarted.
Now we're going to do something different though.
Close this tab down.
Move back to instances.
Again pay special attention to this IP address.
Now we're going to right click and stop the instance.
So go ahead and do that and confirm it if you're doing this in your own environment.
Watch this public IP v4 address really carefully.
We'll need to wait for the instance to move into a stopped state, which it now has, and if we select the instance, note how the public IP version 4 address has been unallocated.
So this instance is now not running on an EC2 host.
Let's right click.
Go to start instance and start it up again.
We need to give that a few moments again.
It'll move into a running state, but notice how the public IP version 4 address has changed.
This is a good indication that the instance has moved from one EC2 host to another.
So let's give this instance a few moments to start up.
And once it has right click, select connect and then go ahead and connect to the instance using instance connect.
Once connected, go ahead and run an lsblk and press enter, and you'll see it appears to have the same instance store volume attached to this instance.
It's using the same ID and it's the same size.
But let's go ahead and verify the contents of this device using this command.
So sudo file -s and then the device ID of the instance store volume.
Press enter, and note how it shows data.
So even though we created a file system in the previous step after we've stopped and started the instance, it appears this instance store volume has no data.
Now the reason for that is when you restart an EC2 instance, it restarts on the same EC2 host.
But when you stop and start an EC2 instance, which is a distinctly different operation, the EC2 instance moves from one EC2 host to another.
And that means that it has access to completely different instance store volumes than it did on that previous host.
It means that all of the data, so the file system and the test file that we created on the instance store volume, before we stopped and started this instance, all of that is lost.
When you stop and start an EC2 instance, or anything else happens which moves the instance from one host to another, all of that data is lost.
So instance store volumes are ephemeral.
They're not persistent and you can't rely on them to keep your data safe.
And it's really important that you understand that distinction.
If you're doing the developer or sysops streams, it's also important that you understand the difference between an instance restart, which keeps the same EC2 host, and a stop and start, which moves an instance from one host to another.
The former means you're likely to keep your data, but the latter means you're guaranteed to lose your data when using instance store volumes.
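The same distinction shows up if you drive this from the AWS CLI rather than the console. A sketch, assuming configured credentials; the instance ID is a placeholder:

```shell
# Reboot: the instance stays on the same EC2 host,
# so instance store data survives.
aws ec2 reboot-instances --instance-ids i-0123456789abcdef0

# Stop then start: two distinct operations, and the instance will
# typically come back on a different host, with different (blank)
# instance store volumes.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 start-instances --instance-ids i-0123456789abcdef0
```

Note there is no single "restart via stop/start" call: reboot-instances and the stop/start pair are deliberately separate APIs, mirroring the two behaviours described above.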
EBS on the other hand, as we've seen, is persistent and any data persists through the lifecycle of an EC2 instance.
Now with that being said, though, that's everything that I wanted to demonstrate within this series of demo lessons.
So let's go ahead and tidy up the infrastructure.
Close down this tab, click on instances.
If you followed this last part of the demo in your own environment, go ahead and right click on the Instance Store Test instance and terminate that instance.
That will delete it along with any associated resources.
We'll need to wait for this instance to move into the terminated state.
So give that a few moments.
Once that's terminated, go ahead and click on services and then move back to the cloud formation console.
You'll see the stack that you created using the one click deploy at the start of this lesson.
Go ahead and select that stack, click on delete and then delete stack.
And that's going to put the account back in the same state as it was at the start of this lesson.
So it will remove all of the resources that have been created.
And at that point, that's the end of this demo series.
So what did you learn?
You learned that EBS volumes are created within one specific availability zone.
EBS volumes can be mounted to instances in that availability zone only and can be moved between instances while retaining their data.
You can create a snapshot from an EBS volume which is stored in S3 and that data is replicated within the region.
And then you can use snapshots to create volumes in different availability zones.
I told you how snapshots can be copied to other AWS regions either as part of data migration or disaster recovery and you learned that EBS is persistent.
You've also seen in this part of the demo series how instance store volumes can be used.
They are included with many instance types, but if the instance moves between EC2 hosts, so if an instance is stopped and then started, or if an EC2 host has hardware problems, then that EC2 instance will be moved between hosts and any data on any instance store volumes will be lost.
So that's everything that you needed to know in this demo lesson and you're going to learn much more about EC2 and EBS in other lessons throughout the course.
At this point though, thanks for watching and doing this demo.
I hope it was useful but go ahead complete this video and when you're ready I look forward to you joining me in the next.
-
-
otcabrina.weebly.com
-
nd the front paws and backside of our dog
Great!
-
It is relatively easy to move from this position, especially for a 4 year old
As soon as he lets go of the dog, he will become much less stable.
-
internal rotation in the right leg
Looks like slight external rotation of the left and possible internal rotation of the right. Hard to tell from this angle.
-
-
learn.cantrill.io
-
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
We just need to give this a brief moment to perform that reboot.
So just wait a couple of moments and once you have right click again, select Connect.
We're going to use EC2 instance connect again.
Make sure the user's correct and then click on Connect.
Now, if it doesn't immediately connect you to the instance, if it appears to have frozen for a couple of seconds, that's fine.
It just means that the instance hasn't completed its restart.
Wait for a brief while longer and then attempt another connect.
This time you should be connected back to the instance and now we need to verify whether we can still see our volume attached to this instance.
So do a df -k and press Enter, and you'll note that you can't see the file system.
That's because before we rebooted this instance, we used the mount command to manually mount the file system on our EBS volume into the EBS test folder.
Now that's a manual process.
It means that while we could interact with that before the reboot, it doesn't automatically mount that file system when the instance restarts.
To do that, we need to configure it to auto-mount when the instance starts up.
So to do that, we need to get the unique ID of the EBS volume, which is attached to this instance.
And to get that, we run sudo blkid.
Now press Enter, and that's going to list the unique identifier of all of the volumes attached to this instance.
You'll see the boot volume listed as /dev/xvda1 and the EBS volume that we've just attached listed as /dev/xvdf.
So we need the unique ID of the volume that we just added.
So that's the one next to xvdf.
So go ahead and select this unique identifier.
You'll need to make sure that you select everything between the speech marks and then copy that into your clipboard.
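If you'd rather script this step than copy the ID by hand, the UUID can be pulled straight out of blkid-style output. A minimal sketch; the sample line and UUID below are hypothetical, standing in for what `sudo blkid` prints on the instance:

```shell
# Sample blkid output line for the attached EBS volume (hypothetical UUID).
line='/dev/xvdf: UUID="a1b2c3d4-e5f6-7890-abcd-ef1234567890" TYPE="xfs"'

# Keep only the value between the quotes after UUID=.
uuid=$(printf '%s\n' "$line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')

echo "$uuid"   # a1b2c3d4-e5f6-7890-abcd-ef1234567890
```

On a real instance, `sudo blkid -s UUID -o value /dev/xvdf` prints just the UUID directly, with no sed needed.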
Next, we need to edit the FSTAB file, which controls which file systems are mounted by default.
So we're going to run sudo nano /etc/fstab and press Enter; nano is our editor, and /etc is the configuration directory on Linux, where this file lives.
And this is the configuration file for which file systems are mounted by our instance.
And we're going to add a similar line.
So first we need to use uuid, which is the unique identifier, and then the equal symbol.
And then we need to paste in that unique ID that we just copied to our clipboard.
Once that's pasted in, press Space.
This is the ID of the EBS volume, so the unique ID.
Next, we need to provide the place where we want that volume to be mounted.
And that's the folder we previously created, which is forward slash EBS test.
Then a space, we need to tell the OS which file system is used, which is xfs, and then a space.
And then we need to give it some options.
You don't need to understand what these do in detail.
We're going to use defaults, then a comma, then nofail, all with no spaces, so defaults,nofail.
So once you've entered all of that, press Ctrl+O to save that file, and Enter, and then Ctrl+X to exit.
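Put together, the line added to /etc/fstab looks something like the following. The UUID shown is a placeholder for the one blkid reported on your instance, and /ebstest stands for the mount folder created earlier; the two trailing zeroes are the standard dump and fsck-pass fields, which the lesson leaves at their defaults:

```
UUID=a1b2c3d4-e5f6-7890-abcd-ef1234567890  /ebstest  xfs  defaults,nofail  0  0
```

The nofail option is worth noting: without it, an entry that can't be mounted (for example because the volume has been detached) can stop the instance from booting cleanly.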
Now this will be mounted automatically when the instance starts up, but we can force that process by typing sudo mount -a.
And this will perform a mount of all of the volumes listed in the fstab file.
So go ahead and press Enter.
Now if we do a df -k and press Enter, you'll see that our EBS volume once again is mounted within the EBS test folder.
So I'm going to clear the screen, then I'm going to move into that folder, press Enter, and then do an ls -la, and you'll see that our amazing test file still exists within this folder.
And that shows that the data on this file system is persistent, and it's available even after we reboot this EC2 instance, and that's different than instance store volumes, which I'll be demonstrating later on.
At this point, we're going to shut down this instance because we won't be needing it anymore.
So close down this tab, click on Instances, right-click on instance one-AZA, and then select Stop Instance.
You'll need to confirm it, refresh that and wait for it to move into a stopped state.
Once it has stopped, go down and click on Volumes, select the EBS test volume, right-click and detach it.
We're going to detach this volume from the instance that we've just stopped.
You'll need to confirm that, and that will begin the process and it will detach that volume from the instance, and this demonstrates how EBS volumes are completely separate from EC2 instances.
You can detach them and then attach them to other instances, keeping the data that's on that volume.
Just keep refreshing.
We need to wait for that to move into an available state, and once it has, we're going to right-click, select Attach Volume, click inside the instance box, and this time, we're going to select instance two-AZA.
It should be the only one listed now in a running state.
So select that and click on Attach.
Just refresh that and wait for that to move into an in-use state, which it is, then move back to instances, and we're going to connect to the instance that we just attached that volume to.
So select instance two-AZA, right-click, select Connect, and then connect to that instance.
Once we connected to that instance, remember this is an instance that we haven't interacted with this EBS volume with.
So this instance has no initial configuration of this EBS volume, and if we do a df -k, you'll see that this volume is not mounted on this instance.
What we need to do is run lsblk, and this will list all of the block devices on this instance.
You'll see that it's still using xvdf, because this is the device ID that we configured when attaching the volume.
Now, if we run this command, so sudo file -s and then the device ID of this EBS volume, notice how it now shows a file system on this EBS volume, because we created it on the previous instance.
We don't need to go through all of the process of creating the file system because EBS volumes persist past the lifecycle of an EC2 instance.
You can interact with an EBS volume on one instance and then move it to another and the configuration is maintained.
We're going to follow the same process.
We're going to create a folder called EBSTEST.
Then we're going to mount the EBS volume using the device ID into this folder.
We're going to move into this folder, and then if we do an ls -la and press Enter, you'll see the test file that you created in the previous step.
It still exists and all of the contents of that file are maintained because the EBS volume is persistent storage.
So that's all I wanted to verify with this instance that you can mount this EBS volume on another instance inside the same availability zone.
At this point, close down this tab and then click on Instances and we're going to shut down this second EC2 instance.
So right-click and then select Stop Instance and you'll need to confirm that process.
Wait for that instance to change into a stop state and then we're going to detach the EBS volume.
So that's moved into the stopped state.
We can select Volumes, right-click on this EBSTEST volume, detach the volume and confirm that.
Now next, we want to mount this volume onto the instance that's in Availability Zone B and we can't do that because EBS volumes are located in one specific availability zone.
Now to allow that process, we need to create a snapshot.
Snapshots are stored on S3 and replicated between multiple availability zones in that region and snapshots allow us to take a volume in one availability zone and move it into another.
So right-click on this EBS volume and create a snapshot.
Under Description, just use EBSTESTSNAP and then go ahead and click on Create Snapshot.
Just close down any dialogues, click on Snapshots and you'll see that a snapshot is being created.
Now depending on how much data is stored on the EBS volume, snapshots can either take a few seconds or anywhere up to several hours to complete.
This snapshot is a full copy of all of the data that's stored on our original EBS volume.
But because the snapshot is stored in S3, it means that we can take this snapshot, right-click, create volume and then create a volume in a different availability zone.
Now you can change the volume type, the size and the encryption settings at this point, but we're going to leave everything the same and just change the availability zone from US-EAST-1A to US-EAST-1B.
So select 1B in availability zone, click on Add Tag.
We're going to give this a name to make it easier to identify.
For the value, we're going to use EBS Test Volume-AZB.
So enter that and then create the volume.
Close down any dialogues and at this point, what we're doing is using this snapshot which is stored inside S3 to create a brand new volume inside availability zone US-EAST-1B.
At this point, once the volume is in an available state, make sure you select the right one, then we can right-click, we can attach this volume and this time when we click in the instance box, you'll see the instance that's in availability zone 1B.
So go ahead and select that and click on Attach.
Once that volume is in use, go back to Instances, select the third instance, right-click, select Connect, choose Instance Connect, verify the username and then connect to the instance.
Now we're going to follow the same process with this instance.
So first, we need to list all of the attached block devices using lsblk.
You'll see the volume we've just created from that snapshot, it's using device ID XVDF.
We can verify that it's got a file system using the command that we've used previously and it's showing an XFS file system.
Next, we create our folder which will be our mount point.
Then we mount the device into this mount point using the same command as we've used previously, move into that folder, and then do a listing using ls -la; you should see the same test file you created earlier, and if you cat this file, it should have the same contents.
This volume has the same contents because it's created from a snapshot that we created of the original volume and so its contents will be identical.
Go ahead and close down this tab to this instance, select instances, right click, stop this instance and then confirm that process.
Just wait for that instance to move into the stopped state.
We're going to move back to volumes, select the EBS test volume in availability zone 1B, detach that volume and confirm it and then just move to snapshots and I want to demonstrate how you have the option of right clicking on a snapshot.
You can copy the snapshot and choose a different region.
So as well as snapshots giving you the option of moving EBS volume data between availability zones, you can also use snapshots to copy data between regions.
Now I'm not going to do this process but I could select a different region, for example, Asia Pacific Sydney and copy that snapshot to the Sydney region.
But there's no point doing that here, because we'd just have to remember to clean it up afterwards.
That process is fairly simple and will allow us to copy snapshots between regions.
It might take some time again depending on the amount of data within that snapshot but it is a process that you can perform either as part of data migration or disaster recovery processes.
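For reference, the same cross-region copy can be scripted from the AWS CLI. A sketch, assuming configured credentials; the snapshot ID is a placeholder:

```shell
# Copy a snapshot from us-east-1 to the Sydney region.
# --region is the DESTINATION region; --source-region is where
# the snapshot currently lives.
aws ec2 copy-snapshot \
    --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --region ap-southeast-2 \
    --description "EBSTESTSNAP copy for DR"
```

The call returns immediately with the new snapshot's ID, but the copy itself completes in the background, which matches the variable timing described above.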
So go ahead and click on cancel and at this point we're just going to clear things up because this is the end of this first phase of this demo lesson.
So right click on this snapshot and just delete the snapshot and confirm that.
Then go to volumes, select the volume in US East 1A, right click, delete that volume and confirm.
Select the volume in US East 1B, right click, delete volume and confirm.
And that just means we've tidied up both of those EBS volumes within this account.
Now that's the end of this first stage of this set of demo lessons.
All the steps until this point have been part of the free tier within AWS.
What follows won't be part of the free tier.
We're going to be creating a larger instance size, and this will have a cost attached, but I want to use it to demonstrate instance store volumes, how you can interact with them, and some of their unique characteristics.
So I'm going to move into a new video and this new video will have an associated charge.
So you can either watch me perform the steps or you can do it within your own environment.
Now go ahead and complete this video, and when you're ready, you can move on to the next video where we're going to investigate instance store volumes.
-
-
learn.cantrill.io
-
Welcome back and we're going to use this demo lesson to get some experience of working with EBS and instance store volumes.
Now before we get started, this series of demo videos will be split into two main components.
The first component will be based around EBS and EBS snapshots and all of this will come under the free tier.
The second component will be based on instance store volumes and will be using larger instances which are not included within the free tier.
So I'm going to make you aware of when we move on to a part which could incur some costs and you can either do that within your own environment or watch me do it in the video.
If you do decide to do it in your own environment, just be aware that there will be a 13 cents per hour cost for the second component of this demo series and I'll make it very clear when we move into that component.
The second component is entirely optional but I just wanted to warn you of the potential cost in advance.
Now to get started with this demo, you're going to need to deploy some infrastructure.
To do that, make sure that you're logged in to the general account, so the management account of the organization and you've got the Northern Virginia region selected.
Now attached to this demo is a one click deployment link to deploy the infrastructure.
So go ahead and click on that link.
That's going to open this quick create stack screen and all you need to do is scroll down to the bottom, check this capabilities box and click on create stack.
Now you're going to need this to be in a create complete state before you continue with this demo.
So go ahead and pause the video, wait for that stack to move into the create complete status and then you can continue.
Okay, now that's finished and the stack is in a create complete state.
You're also going to be running some commands within EC2 instances as part of this demo.
Also attached to this lesson is a lesson commands document which contains all of those commands and you can use this to copy and paste which will avoid errors.
So go ahead and open that link in a separate browser window or separate browser tab.
It should look something like this and we're going to be using this throughout the lesson.
Now this cloud formation template has created a number of resources, but the three that we're concerned about are the three EC2 instances.
So instance one, instance two and instance three.
So the next thing to do is to move across to the EC2 console.
So click on the services drop down and then either locate EC2 under all services, find it in recently visited services or you can use the search box at the top type EC2 and then open that in a new tab.
Now the EC2 console is going through a number of changes so don't be alarmed if it looks slightly different or if you see any banners welcoming you to this new version.
Now if you click on instances running, you'll see a list of the three instances that we're going to be using within this demo lesson.
We have instance one - az a.
We have instance two - az a and then instance one - az b.
So these are three instances, two of which are in availability zone A and one which is in availability zone B.
Next I want you to scroll down and locate volumes under elastic block store and just click on volumes.
And what you'll see is three EBS volumes, each of which is 8 GiB in size.
Now these are all currently in use.
You can see that in the state column and that's because all of these volumes are in use as the boot volumes for those three EC2 instances.
So on each of these volumes is the operating system running on those EC2 instances.
Now to give you some experience of working with EBS volumes, we're going to go ahead and create a volume.
So click on the create volume button.
The first thing you'll need to do when creating a volume is pick the type and there are a number of different types available.
We've got GP2 and GP3 which are the general purpose storage types.
We're going to use GP3 for this demo lesson.
You could also select one of the provisioned IOPS volumes.
So this is currently IO1 or IO2.
And with both of these volume types, you're able to define IOPS separately from the size of the volume.
So these are volume types that you can use for demanding storage scenarios where you need high-end performance or when you need especially high performance for smaller volume sizes.
Now IO1 was the first type of provisioned IOPS SSD introduced by AWS, and more recently they've introduced IO2, an enhanced version which provides even higher levels of performance.
In addition to that we do have the non-SSD volume types.
So SC1 which is cold HDD, ST1 which is throughput optimized HDD and then of course the original magnetic type which is now legacy and AWS don't recommend this for any production usage.
For this demo lesson we're going to go ahead and select GP3.
So select that.
Next you're able to pick a size in GiB for the volume.
We're going to select a volume size of 10 GiB.
Now EBS volumes are created within a specific availability zone so you have to select the availability zone when you're creating the volume.
At this point I want you to go ahead and select US-EAST-1A.
When creating a volume you're also able to specify a snapshot as the basis for that volume.
So if you want to restore a snapshot into this volume you can select that here.
At this stage in the demo we're going to be creating a blank EBS volume so we're not going to select anything in this box.
We're going to be talking about encryption later in this section of the course.
You are able to specify encryption settings for the volume when you create it but at this point we're not going to encrypt this volume.
We do want to add a tag so that we can easily identify the volume from all of the others so click on add tag.
For the key we're going to use name.
For the value we're going to put EBS test volume.
So once you've entered both of those go ahead and click on create volume and that will begin the process of creating the volume.
Close down any dialogues and then pay attention to the different states that this volume goes through.
It begins in a creating state.
This is where the storage is being provisioned and then made available by the EBS product.
If we click on refresh you'll see that it changes from creating to available and once it's in an available state this means that we can attach it to EC2 instances.
And that's what we're going to do so we're going to right click and select attach volume.
Now you're able to attach this volume to EC2 instances but crucially only those in the same availability zone.
EBS is an availability zone scoped service and so you can only attach EBS volumes to EC2 instances within that same availability zone.
So if we select the instance box you'll only see instances in that same availability zone.
Now at this point go ahead and select instance 1 in availability zone A.
Once you've selected it you'll see that the device field is populated and this is the device ID that the instance will see for this volume.
So this is how the volume is going to be exposed to the EC2 instance.
So if we want to interact with this instance inside the operating system this is the device that we'll use.
Now different operating systems might see this in slightly different ways.
So as this warning suggests certain Linux kernels might rename SDF to XVDF.
So we've got to be aware that when you do attach a volume to an EC2 instance you need to get used to how that's seen inside the operating system.
How we can identify it and how we can configure it within the operating system for use.
And I'm going to demonstrate that in the next part of this demo lesson.
So at this point just go ahead and click on attach and this will attach this volume to the EC2 instance.
Once that's attached to the instance and you see the state change to in use then just scroll up on the left hand side and select instances.
We're going to go ahead and connect to instance 1 in availability zone A.
This is the instance that we just attached that EBS volume to so we want to interact with this instance and see how we can see the EBS volume.
So right click on this instance and select connect.
You could either connect with an SSH client or use Instance Connect, and to keep things simple we're going to connect from our browser, so select the EC2 Instance Connect option, make sure the username is set to ec2-user, and then click on connect.
So now we connected to this EC2 instance and it's at this point that we'll start needing the commands that are listed inside the lesson commands document and again this is attached to this lesson.
So first we need to list all the block devices which are connected to this instance and we're going to use the LSBLK command.
Now if you're not comfortable with Linux don't worry just take this nice and slowly and understand at a high level all the commands that we're going to run.
So the first one is LSBLK and this is list block devices.
So if we run this we'll be able to see a list of all of the block devices connected to this EC2 instance.
You'll see the root device; this is the device that's used to boot the instance, it contains the instance operating system, and you'll see that it's 8 GiB in size.
You'll see that device ID so XVDF and you'll see that it's 10 gig in size.
Now what we need to do next is check whether there is a file system on this block device.
So this block device we've created it with EBS and then we've attached it to this instance.
Now we know that it's blank but it's always safe to check if there's any file system on a block device.
So to do that we run this command.
So we're going to check are there any file systems on this block device.
So press enter and if you see just data that indicates that there isn't any file system on this device and so we need to create one.
You can only mount file systems under Linux and so we need to create a file system on this raw block device this EBS volume.
So to do that we run this command.
So sudo again is just giving us admin permissions on this instance.
MKFS is going to make a file system.
We specify the file system type with hyphen t and then XFS which is a type of file system and then we're telling it to create this file system on this raw block device which is the EBS volume that we just attached.
So press enter and that will create the file system on this EBS volume.
We can confirm that by rerunning this previous command and this time instead of data it will tell us that there is now an XFS file system on this block device.
So now we can see the difference.
Initially it just told us that there was data, so raw data on this volume and now it's indicating that there is a file system, the file system that we just created.
Now the way that Linux works is we mount a file system to a mount point which is a directory.
So we're going to create a directory using this command.
MKDIR makes a directory and we're going to call the directory forward slash EBS test.
So this creates it at the top level of the file system.
This signifies root which is the top level of the file system tree and we're going to make a folder inside here called EBS test.
So go ahead and enter that command and press enter and that creates that folder and then what we can do is to mount the file system that we just created on this EBS volume into that folder.
And to do that we use this command, mount.
So mount takes a device ID, so forward slash dev forward slash xvdf.
So this is the raw block device containing the file system we just created and it's going to mount it into this folder.
So type that command and press enter and now we have our EBS volume with our file system mounted into this folder.
And we can verify that by running a df space hyphen k.
And this will show us all of the file systems on this instance and the bottom line here is the one that we've just created and mounted.
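The whole check-format-mount flow from the lesson commands document can be sketched in one place. This is a minimal sketch, not the lesson's own commands document: it uses a sparse local file as a stand-in for /dev/xvdf so it can be run anywhere without an attached EBS volume, and the mount step (which needs root and the real device) is shown commented out.

```shell
# Stand-in for the 10 GiB EBS device /dev/xvdf: a 512 MiB sparse file.
truncate -s 512M disk.img

# Check for a filesystem: a bare "data" result means the device is blank.
file -s disk.img

# Create an XFS filesystem (falling back to ext4 if xfsprogs isn't installed).
mkfs -t xfs -f disk.img 2>/dev/null || mkfs -t ext4 -F -q disk.img

# Re-run the check: it now reports the filesystem instead of just "data".
file -s disk.img

# On the real instance you would then mount it (needs root):
#   sudo mkdir /ebstest
#   sudo mount /dev/xvdf /ebstest
#   df -k
```

On the instance itself the device path would be /dev/xvdf, exactly as shown by lsblk earlier.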
At this point I'm just going to clear the screen to make it easier to see and what we're going to do is to move into this folder.
So cd which is change directory space forward slash EBS test and then press enter and that will move you into that folder.
Once we're in that folder we're going to create a test file.
So we're going to use this command, sudo nano, which is a text editor, and we're going to call the file amazing test file dot txt.
So type that command in and press enter and then go ahead and type a message.
It can be anything you just need to recognize it as your own message.
So I'm going to use cats are amazing and then some exclamation marks.
Then I'm going to press control o and enter to save that file and then control x to exit again clear the screen to make it easier to see.
And then I'm going to do an LS space hyphen LA and press enter just to list the contents of this folder.
So as you can see we've now got this amazing test file dot txt.
And if we cat the contents of this so cat amazing test file dot txt you'll see the unique message that you just typed in.
So at this point we've created this file within the folder and remember the folder is now the mount point for the file system that we created on this EBS volume.
So the next step that I want you to do is to reboot this EC2 instance.
To do that type sudo space and then reboot and press enter.
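One thing to keep in mind here: a mount made interactively with the mount command does not, by itself, survive a reboot. To re-mount the volume automatically at boot you'd add an /etc/fstab entry. A sketch of what such an entry could look like, using the volume's UUID as reported by sudo blkid (the UUID shown is a placeholder, not a real value):

```
# /etc/fstab -- re-mount the EBS data volume at boot
UUID=<uuid-from-blkid>  /ebstest  xfs  defaults,nofail  0  2
```

The nofail option means the instance still boots even if the volume is missing, which is a sensible default for non-root data volumes.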
Now this will disconnect you from this session.
So you can go ahead and close down this tab and go back to the EC2 console.
Just go ahead and click on instances.
Okay, so this is the end of part one of this lesson.
It was getting a little bit on the long side and so I wanted to add a break.
It's an opportunity just to take a rest or grab a coffee.
Part two will be continuing immediately from the end of part one.
So go ahead complete the video and when you're ready join me in part two.
-
-
medium.com
-
Effective collaboration is essential for mutual learning.
for - Deep Humanity - intertwingled individual / collective learning - evolutionary learning journey - symmathesy - mutual learning - Nora Bateson
-
preliminary ground-setting
for - co-creative collaboration - preliminary groundwork
comment - How many times have I seen people come together with good intention to collaborate on some meaningful project, only for the project to fall apart some time later due to differences that emerge later on? - Without laying the proper framework for engagement and conflict resolution, we cannot prevent future conflicts from emerging - What is that proper framework? - What variables bring people closer together? - What variables drive people further apart? - We must identify those variables. They are complex because each one of us sees reality from our own unique perspective
-
for - Medium article - co-creative collaboration - Donna Nelham
summary - Donna takes us on a deep dive into the word collaboration, what is needed to forge deep and meaningful collaboration, and why it often fails - She introduces the term "collaboration washing" (like greenwashing) into our lexicon - This article is a provocation for a deep dive into what it means to collaborate - The questions we ask ourselves will lead us back to the most fundamental philosophical questions of self and other and how we formed these
-
-
learn.cantrill.io
-
Welcome back and in this demo lesson you're going to evolve the infrastructure which you've been using throughout this section of the course.
In this demo lesson you're going to add private internet access capability using NAT gateways.
So you're going to be applying a cloud formation template which creates this base infrastructure.
It's going to be the animals for life VPC with infrastructure in each of three availability zones.
So there's a database subnet, an application subnet and a web subnet in availability zone A, B and C.
Now to this point what you've done is configured public subnet internet access and you've done that using an internet gateway together with routes on these public subnets.
In this demo lesson you're going to add NAT gateways into each availability zone so A, B and C and this will allow this private EC2 instance to have access to the internet.
Now you're going to be deploying NAT gateways into each availability zone so that each availability zone has its own isolated private subnet access to the internet.
It means that if any of the availability zones fail, the others will continue operating, because the route tables attached to the private subnets point at the NAT gateway within that same availability zone.
So each availability zone A, B and C has its own corresponding NAT gateway which provides private internet access to all of the private subnets within that availability zone.
Now in order to implement this infrastructure you're going to be applying a one-click deployment and that's going to create everything that you see on screen now apart from these NAT gateways and the route table configurations.
So let's go ahead and move across to our AWS console and get started implementing this architecture.
Okay so now we're at the AWS console, and as always just make sure that you're logged in to the general AWS account as the IAM admin user, and you'll need to have the Northern Virginia region selected.
Now at the end of the previous demo lesson you should have deleted all of the infrastructure that you'd created up until that point, so the animals for life VPC as well as the Bastion host and the associated networking.
So you should have a relatively clean AWS account.
So what we're going to do first is use a one-click deployment to create the infrastructure that we'll need within this demo lesson.
So attached to this demo lesson is a one-click deployment link so go ahead and open that link.
That's going to take you to a quick create stack screen.
Everything should be pre-populated and the stack name should be a4l; just scroll down to the bottom, check the capabilities box, and then click on create stack.
Now this will start the creation process of this a4l stack and we will need this to be in a create complete state before we continue.
So go ahead, pause the video, wait for your stack to change into create complete, and then we're good to continue.
Okay, so now this stack has moved into a create complete state, we're good to continue.
So what we need to do before we start is make sure that all of our infrastructure has finished provisioning.
To do that just go ahead and click on the resources tab of this cloud formation stack and look for a4l internal test.
This is an EC2 instance, a private EC2 instance, so it doesn't have any public internet connectivity, and we're going to use this to test our NAT gateway functionality.
So go ahead and click on this icon under physical ID and this is going to move you to the EC2 console and you'll be able to see this a4l - internal - test instance.
Now currently in my case it's showing as running but the status check is showing as initializing.
Now we'll need this instance to finish provisioning before we can continue with the demo.
What should happen is this status check should change from initializing to two out of two status checks and once you're at that point you should be able to right click and select connect and choose session manager and then have the option of connecting.
Now you'll see that I don't because this instance hasn't finished its provisioning process.
So what I want you to do is to go ahead and pause this video wait for your status checks to change to two out of two checks and then just go ahead and try to connect to this instance using session manager.
Only resume the video once you've been able to click on connect under the session manager tab and don't worry if this takes a few more minutes after the instance finishes provisioning before you can connect to session manager.
So go ahead and pause the video and when you can connect to the instance you're good to continue.
Okay so in my case it took about five minutes for this to change to two out of two checks past and then another five minutes before I could connect to this EC2 instance.
So I can right click on here and pick connect.
I'll have the option now of picking session manager and then I can click on connect and this will connect me in to this private EC2 instance.
Now the reason why you're able to connect to this private instance is because we're using Session Manager, and I'll explain exactly how this product works elsewhere in the course.
Essentially it allows us to connect into an EC2 instance with no public internet connectivity, and it's using VPC interface endpoints to do that, which I'll also be explaining elsewhere in the course.
What you should find when you're connected to this instance is that if you try to ping any internet IP address, so let's go ahead and type ping and then a space and 1.1.1.1 and press enter, you'll note that we don't have any public internet connectivity.
That's because this instance doesn't have a public IP version 4 address and it's not in a subnet with a route table which points at the internet gateway.
This EC2 instance has been deployed into the application A subnet, which is a private subnet, and it also doesn't have a public IP version 4 address.
So at this point what we need to do is go ahead and deploy our NAT gateways, and these NAT gateways are what will provide this private EC2 instance with connectivity to the public IP version 4 internet, so let's go ahead and do that.
Now to do that we need to be back at the main AWS console click in the services search box at the top type VPC and then right click and open that in a new tab.
Once you do that, go ahead and move to that tab; once you're there, click on NAT gateways and then create NAT gateway.
Okay so once you're here you'll need to specify a few things: a name for the NAT gateway, the public subnet for the NAT gateway to go into, and an elastic IP address, which is an IP address which doesn't change.
So first we'll set the name of the NAT gateway, and we'll use a4l (for animals for life) -vpc1 -natgw and then -a, because this is going into availability zone A, so a4l-vpc1-natgw-a.
Next we need to pick the public subnet that the NAT gateway will go into, so click on the subnet drop down and select the web A subnet, which is the public subnet in availability zone A, so sn-web-a.
Now we need to give this NAT gateway an elastic IP; it doesn't currently have one, so click on allocate elastic IP, which gives it an allocation.
Don't worry about the connectivity type; we'll be covering that elsewhere in the course. Just scroll down to the bottom and create the NAT gateway.
Now this process will take some time and so we need to go ahead and create the two other NAT gateways.
So click on NAT gateways at the top and then we're going to create a second NAT gateway.
So go ahead and click on create NAT gateway again. This time we'll call the NAT gateway a4l-vpc1-natgw-b, and this time we'll pick the web B subnet, so sn-web-b. Allocate an elastic IP again and click on create NAT gateway. Then we'll follow the same process a third time: click create NAT gateway, use the same naming scheme but with -c, pick the web C subnet from the list, allocate an elastic IP, and then scroll down and click on create NAT gateway.

At this point we've got three NAT gateways being created, all in a pending state. If we go to elastic IPs we can see the three elastic IPs which have been allocated to the NAT gateways, and we can scroll to the right or left and see details on these IPs. If we wanted, we could release these IPs back to the account once we'd finished with them.

Now at this point you need to go ahead and pause the video, and resume it once all three of those NAT gateways have moved away from the pending state. We need them to be in an available state, ready to go, before we can continue with this demo.

Okay, so all three are now in an available state, which means they're providing service. If you scroll to the right in this list you can see additional information about these NAT gateways: the elastic and private IP address, the VPC, and the subnet that each NAT gateway is located in.

What we need to do now is configure the routing so that the private instances can communicate via the NAT gateways. So right click on route tables and open in a new tab; we need to create a new route table for each of the availability zones.

Go ahead and click on create route table. First we need to pick the VPC for this route table, so click on the VPC drop down and select the animals for life VPC, so a4l-vpc1. Once selected, go ahead and name the route table; we're going to keep the naming scheme consistent, so a4l-vpc1-rt-private-a, rt for route table. Enter that and click on create. Then close that dialogue down and create another route table; this time it will be rt-private-b. Select the animals for life VPC and click on create. Close that down and finally click on create route table again, this time a4l-vpc1-rt-private-c. Again click on the VPC drop down, select the animals for life VPC, and click on create. That leaves us with three route tables, one for each availability zone.

What we need to do now is create a default route within each of these route tables, pointing at the NAT gateway in the same availability zone. So select the route table rt-private-a, click on the routes tab, then click on edit routes and add a new route. It's going to be the IP version 4 default route of 0.0.0.0/0. Then click on target, pick NAT gateway, and select the NAT gateway in availability zone A; because we named them, it's easy to pick the relevant one from the list, so choose a4l-vpc1-natgw-a. Because this is the route table for availability zone A, we need to pick the NAT gateway in the same zone. Save that and close.

Now we'll do the same process for the route table in availability zone B. Make sure the routes tab is selected, click on edit routes, click on add route, again 0.0.0.0/0, and for target pick NAT gateway and choose the NAT gateway in availability zone B, so natgw-b. Once you've done that, save the route table. Next select the route table in availability zone C, rt-private-c, make sure the routes tab is selected and click on edit routes. Again add the IP version 4 default route, 0.0.0.0/0, select a target, go to NAT gateway, and pick the NAT gateway in availability zone C, so natgw-c. Once you've done that, save the route table.

Now our private EC2 instance should be able to ping 1.1.1.1 because we have the routing infrastructure in place. So let's move back to our private instance... and we can see that it's not actually working. The reason is that although we have created these routes, we haven't actually associated these route tables with any of the subnets. Subnets in a VPC which don't have an explicit route table association are associated with the main route table, so we need to explicitly associate each of these route tables with the subnets inside that same AZ.

Let's go through in order and pick rt-private-a. Select it, click on the subnet associations tab, then edit subnet associations, and pick all of the private subnets in AZ A: the reserved subnet (reserved-a), the app subnet (app-a), and the DB subnet (db-a). Notice how all the public subnets are associated with the custom route table you created earlier, but the ones we're setting up now are still associated with the main route table. We're going to resolve that by associating this route table with those subnets, so click on save, and this associates all of the private subnets in AZ A with the AZ A route table.

Now we'll do the same process for AZ B and AZ C. Select the private B route table, click on subnet associations, edit subnet associations, select application B, database B and reserved B, then scroll down and save the associations. Then select the private C route table, click on subnet associations, edit subnet associations, select reserved C, database C and application C, then scroll down and save those associations.

Now that we've associated these route tables with the subnets, and now that we've added those default routes, if we go back to Session Manager, where we still have the connection open to the private EC2 instance, we should see that the ping has started to work. That's because we now have a NAT gateway providing service to each of the private subnets in all three availability zones.

Okay, so that's everything you needed to cover in this demo lesson. Now it's time to clean up the account and return it to the same state as it was at the start of this demo lesson. From this point on within the course you're going to be using automation, so we can remove all the configuration that we've done inside this demo lesson.

The first thing we need to do is reverse the route table changes. Select the rt-private-a route table, go to subnet associations, edit the subnet associations, and uncheck all of these subnets; this returns them to being associated with the main route table. Scroll down and click on save. Do the same for rt-private-b, deselecting all of the associations and clicking save, and then the same for rt-private-c: select it, go to subnet associations, edit them, remove all of the subnets, and click on save.

Next select all three of these private route tables, the ones we created in this lesson, click on the actions drop down, then delete route table, and confirm by clicking delete route tables.

Go to NAT gateways on the left; we need to delete each of the NAT gateways in turn. Select A, click on actions, delete NAT gateway, type delete, and click delete. Then select B and follow the same process: actions, delete NAT gateway, type delete, click delete. Finally the same for C: select the C NAT gateway, click on actions, delete NAT gateway, type delete to confirm, and click on delete.

Now we're going to need all of these to be in a fully deleted state before we can continue, so hit refresh and make sure that all three NAT gateways are deleted. If yours aren't, if they're still listed in a deleting state, go ahead and pause the video and resume once all of them have changed to deleted.

At this point all of the NAT gateways have been deleted, so click on elastic IPs, and we need to release each of these IPs. Select one of them, click on actions, release elastic IP addresses, and click release, and do the same for the other two.

Once that's done, move back to the CloudFormation console, select the stack which was created by the one-click deployment at the start of the lesson, click on delete, and confirm that deletion. That will remove the CloudFormation stack and any resources created as part of this demo, and once it finishes deleting, the account has been returned to the same state as it was at the start of this demo lesson.

So I hope this demo lesson has been useful. Just to reiterate what you've done: you've created three NAT gateways for a region-resilient design, created three route tables, one in each availability zone, added a default IP version 4 route pointing at the corresponding NAT gateway, and associated each of those route tables with the private subnets in that availability zone. So you've implemented a regionally resilient NAT gateway architecture, and that's a great job; it's a pretty complex demo, but it's functionality that will be really useful if you're using AWS in the real world or if you have to answer any exam questions on NAT gateways. With that being said, at this point you have cleaned up the account and deleted all the resources, so go ahead, complete this video, and when you're ready, I'll see you in the next.
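Everything clicked through above for availability zone A can also be expressed as CloudFormation, which is how this course's one-click deployments provision infrastructure. The fragment below is an illustrative sketch only, not the course's template: the logical names (VPC, SubnetWebA, SubnetAppA) are assumptions and would need to match the names in your own template.

```yaml
NatGatewayAEIP:
  Type: AWS::EC2::EIP
  Properties:
    Domain: vpc
NatGatewayA:
  Type: AWS::EC2::NatGateway
  Properties:
    SubnetId: !Ref SubnetWebA              # public web-A subnet
    AllocationId: !GetAtt NatGatewayAEIP.AllocationId
RouteTablePrivateA:
  Type: AWS::EC2::RouteTable
  Properties:
    VpcId: !Ref VPC
DefaultRoutePrivateA:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref RouteTablePrivateA
    DestinationCidrBlock: '0.0.0.0/0'
    NatGatewayId: !Ref NatGatewayA
AppASubnetAssociation:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    SubnetId: !Ref SubnetAppA              # repeat for db-A and reserved-A
    RouteTableId: !Ref RouteTablePrivateA
```

The same three resources (EIP, NAT gateway, default route) plus the subnet associations would then be repeated for availability zones B and C.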
-
-
arxiv.org
-
Data construction prompt. Fig. 6 shows the prompt used for Chinese distillation data construction. We follow Zhou et al. (2024) to design the prompt for Chinese data construction. We adopt the data construction prompt of Pile-NER-type, since it shows the best performance as in (Zhou et al., 2024).
Figure 6: Data construction prompt for Chinese open domain NER.
Data processing. Following (Zhou et al., 2024), we chunk the passages sampled from the Sky corpus to texts of a max length of 256 tokens and randomly sample 50K passages. Due to limited computation resources, we sample the first twenty files in the Sky corpus for data construction, since the size of the entire Sky corpus is beyond the processing capability of our machines. We conduct the same data processing procedures including output filtering and negative sampling as in UniNER. Specifically, the negative sampling strategy for entity types is applied with a probability proportional to the frequency of entity types in the entire con
Construction of the Sky-NER data (Chinese open NER): - Prompt design: based on the strategy from the UniversalNER paper. - Data processing: collect data by chunking passages from the Sky corpus into texts with a max length of 256 tokens, then randomly sampling 50K passages.
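The chunking step can be illustrated with a toy version. This sketch splits a stream of whitespace-separated tokens into chunks of at most 256; the paper chunks by tokenizer tokens, so plain words here are only a stand-in.

```shell
# Build a toy "corpus" of 600 words, then chunk it into <=256-word pieces.
printf 'w%s ' $(seq 1 600) > corpus.txt

awk -v max=256 '{
  for (i = 1; i <= NF; i++) {
    buf = buf $i " "
    if (++n == max) { print buf; buf = ""; n = 0 }
  }
} END { if (n) print buf }' corpus.txt > chunks.txt

wc -l < chunks.txt   # 600 words -> 3 chunks (256 + 256 + 88)
```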
-
Inference with out-domain examples. During inference, since examples from the automatically constructed data is not aligned with the domains and schemas of the human-annotated benchmarks, we refer to them as out-domain examples. Fig. 4 shows the results of inference with out-domain examples using diverse retrieval strategies. We use the model trained with NN strategy here. After applying example filtering such as BM25 scoring, inference with out-domain examples shows improvements compared to the baseline, suggesting the need of example filtering when implementing RAG with out-domain examples
Inference with out-domain examples: during inference, because the examples from the automatically constructed dataset have domains and formats that differ from the human-annotated data, these examples are called out-domain.
In the experiment in Fig. 4, the RA-IT model is trained with the NN retrieval strategy. After applying the BM25 filter, inference with out-domain examples shows improvement over the baseline, which shows the importance of adding a filter when applying RAG with out-domain examples.
-
Training with diverse retrieval strategies. Fig. 3 visualizes the results of training with various retrieval strategies. We conduct inference with and without examples for each strategy, and set the retrieval strategy of inference the same as of training. The most straightforward method, NN, shows the best performances, suggesting the benefits of semantically similar examples.
Figure 4: Impacts of inference with out-domain examples using various retrieval strategies. The average F1 value of the evaluated benchmarks are reported. w/o exmp. means inference without example. Applying example filtering strategy such as BM25 filtering benefits RAG with out-domain examples.
Figure 5: Impacts of inference with in-domain examples. The average F1 value of the evaluated benchmarks are reported. N-exmp. means the example pool of size N. Sufficient in-domain examples are helpful for RAG.
Random strategy, though inferior to NN, also shows improvements, indicating that random examples might introduce some general information of NER tasks to the model. Meanwhile, inference with examples does not guarantee improvements and often hurts performances. This may be due to the differences of the annotation schema between the automatically constructed data and the human-annotated benchmarks
Training with diverse retrieval strategies: Shown in Fig. 3. Inference is conducted both with and without reference examples for each retrieval strategy, and the same retrieval strategy is used in both training and inference.
The results show that NN is the best retrieval strategy, which demonstrates the importance of semantically similar reference examples. Meanwhile, inference with examples does not guarantee improvement and often hurts the results.
-
Diverse retrieval strategies. The following strategies are explored in the subsequent analysis. (1) Nearest neighbor (NN), the strategy used in the main experiments, retrieves k nearest neighbors of the current sample. (2) Nearest neighbor with BM25 filter (NN, BM), where we apply BM25 scoring to filter out NN examples not passing a predefined threshold. Samples with no satisfying examples are used with the vanilla instruction template. (3) Diverse nearest neighbor (DNN), retrieves K nearest neighbors with K >> k and randomly selects k examples from them. (4) Diverse nearest neighbor with BM25 filter (DNN, BM), filters out DNN examples not reaching the BM25 threshold. (5) Random, uniformly selects k random examples. (6) Mixed nearest neighbors (MixedNN), mixes the use of the NN and random retrieval strategies with the ratio of NN set to α.
Main retrieval strategies:
- Nearest neighbor (NN): the strategy used in the main experiments; retrieves the k examples nearest to the query sample.
- NN with BM25 filter (NN, BM): a BM25 filter removes NN examples whose similarity does not pass a predefined threshold.
- Diverse NN (DNN): retrieves K NN examples with K >> k, then randomly selects k examples among those K.
- Random: uniformly selects k random examples.
- Mixed NN (MixedNN): combines NN and random selection, with the NN ratio set to α.
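The retrieval strategies above can be sketched as follows. This is a minimal toy illustration using cosine similarity over hand-made 2-D embeddings; the function names and the `EXAMPLE_POOL` data are hypothetical, and in the paper the embeddings would come from a sentence encoder.

```python
import random

# Toy example pool: (embedding, text) pairs with hand-made 2-D vectors.
EXAMPLE_POOL = [
    ([1.0, 0.0], "example A"),
    ([0.9, 0.1], "example B"),
    ([0.0, 1.0], "example C"),
    ([0.5, 0.5], "example D"),
]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def nn(query_emb, k):
    """(1) Nearest neighbor: the k most similar examples."""
    ranked = sorted(EXAMPLE_POOL,
                    key=lambda e: cosine(query_emb, e[0]), reverse=True)
    return [text for _, text in ranked[:k]]

def dnn(query_emb, k, big_k):
    """(3) Diverse NN: take K >> k neighbors, then sample k of them."""
    candidates = nn(query_emb, big_k)
    return random.sample(candidates, k)

def random_strategy(k):
    """(5) Random: k uniformly sampled examples."""
    return [text for _, text in random.sample(EXAMPLE_POOL, k)]

def mixed_nn(query_emb, k, alpha):
    """(6) MixedNN: use NN with probability alpha, else random."""
    return nn(query_emb, k) if random.random() < alpha else random_strategy(k)

print(nn([1.0, 0.0], 2))  # → ['example A', 'example B']
```

The BM25-filtered variants (NN, BM and DNN, BM) would additionally drop candidates scoring below a lexical-overlap threshold, as described in the strategy list above.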
-
We explore the impacts of diverse retrieval strategies. We conduct the analysis on the 5K data size for cost saving, as the effect of RA-IT is consistent across various data sizes as shown in Section 3.4. We report the average results over the evaluated benchmarks here.
Analysis: This analysis is carried out to explore the impact of the different retrieval strategies. It is conducted on a data sample of size 5K.
-
The main results are summarized in Tables 1 and 2 respectively. We report the results of inference without examples for RA-IT here, since we found this setting exhibits more consistent improvements. The impacts of inference with examples are studied in Section 3.5. As shown in the tables, RA-IT shows consistent improvements on English and Chinese across various data sizes, presumably because the retrieved context enhances the model's understanding of the input.
Main results: Shown in Tables 1 and 2. Note that the experiments in these two tables perform inference without few-shot examples, because this setting yields more consistent improvements.
The results show that RA-IT performs best. This improvement is presumably because the retrieved context enhances the model's understanding of the input, which demonstrates the value of context-augmented instruction samples.
-
We conduct a preliminary study on IT data efficiency in targeted distillation for open NER by exploring the impact of various data sizes: [0.5K, 1K, 5K, 10K, 20K, 30K, 40K, 50K]. We use vanilla IT for the preliminary study. Results are visualized in Fig. 2. The following observations are consistent in English and Chinese: (1) a small data size already surpasses ChatGPT's performance. (2) Performance improves as the data size increases to 10K or 20K, but begins to decline and then remains at a certain level as the data size further increases to 50K. Recent work on IT data selection (Xia et al., 2024; Ge et al., 2024; Du et al., 2023) also finds superior performance with only a limited data size. We leave selecting more beneficial IT data for IE as future work. Accordingly, we conduct the main experiments on 5K, 10K and 50K data sizes.

Figure 2 (caption): Preliminary study of IT data efficiency for open NER in English (left) and Chinese (right) scenarios, where the training data are Pile-NER and Sky-NER respectively. Average zero-shot results over the evaluated benchmarks are illustrated. Performance does not necessarily improve as data increases.
Preliminary study of data efficiency: A preliminary study is conducted to evaluate the efficiency of the IT data in targeted distillation for open NER, by exploring the impact of data at various sizes: [0.5K, 1K, 5K, ...]. Vanilla IT is used for this study.
Conclusions drawn:
- A small amount of data already surpasses ChatGPT's capability.
- Results improve as the data size grows (to 10K or 20K), but begin to decline and then settle at a certain level as the data size further increases to 50K. Recent work on IT data selection likewise finds that small, limited-size datasets perform best.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Platforms also collect information on how users interact with the site. They might collect information like (they don't necessarily collect all this, but they might):
- when users are logged on and logged off
- who users interact with
- what users click on
- what posts users pause over
- where users are located
- what users send in direct messages to each other
I find it scary that these platforms monitor every move we make on their sites, especially checking our direct messages with others. Our direct messages aren't as private as we think if these platforms are sitting there collecting this data.
-
-
www.theguardian.com www.theguardian.com
-
The stress on the world's water systems will mean that by 2030 the demand for water will be 40% higher than the supply. The report of the Global Commission on the Economics of Water finds that, without radical countermeasures, half of the world's food production will be at risk within the coming 25 years. Despite the interconnectedness of global water resources, water is not yet being managed as a global common good. https://www.theguardian.com/environment/2024/oct/16/global-water-crisis-food-production-at-risk
-
-
-
Never before has the CO2 concentration in the atmosphere risen as sharply as in the past year, namely by 3.37 parts per million (ppm). The concentration now stands at 422 ppm. This increase was caused above all by the very low CO2 uptake of ocean and land sinks. https://taz.de/Hiobsbotschaft-fuers-Klima/!6040258/
-
-
socialsci.libretexts.org socialsci.libretexts.org
-
It is likely that you have more in common with that reality TV star than you care to admit. We tend to focus on personality traits in others that we feel are important to our own personality. What we like in ourselves, we like in others, and what we dislike in ourselves, we dislike in others (McCornack, 2007). If you admire a person’s loyalty, then loyalty is probably a trait that you think you possess as well. If you work hard to be positive and motivated and suppress negative and unproductive urges within yourself, you will likely think harshly about those negative traits in someone else. After all, if you can suppress your negativity, why can’t they do the same? This way of thinking isn’t always accurate or logical, but it is common.
This has never even registered in my head before. I am going to focus on this the next time my girlfriend is watching reality TV. I am aware that I tend to root for the underdogs in most scenarios; I want the one who was counted out to win. I wonder how that relates to my personality. I know I always admire extroverts, but I felt like that was because I am not very extroverted and wanted to be like them. Interesting self-observation for me to try in the coming days.
-
This simple us/them split affects subsequent interaction, including impressions and attributions. For example, we tend to view people we perceive to be like us as more trustworthy, friendly, and honest than people we perceive to be not like us (Brewer, 1999).
I am currently working on a construction site here in Boise. I am from Tennessee and all my coworkers are from Kentucky. One day a coworker told me the superintendent didn't like me. Obviously confused, since we had only been working together for 3 days, I asked why. My coworker told me that simply because I am not from Kentucky, he did not trust me or think I was a capable worker, all because of where I grew up. I know it's not fair, but the only thing I can do is prove him wrong and help him recognize that his inherent bias is not always correct.
-
-
pmc.ncbi.nlm.nih.gov pmc.ncbi.nlm.nih.gov
-
In conclusion, it is important that primary care physicians get well versed with the future AI advances and the new unknown territory the world of medicine is heading toward.
The conclusion summarizes how physicians should get used to AI because it will soon be a big part of their work.
-
Some studies have been documented where AI systems were able to outperform dermatologists in correctly classifying suspicious skin lesions.[18] This is because AI systems can learn more from successive cases and can be exposed to multiple cases within minutes, which far outnumber the cases a clinician could evaluate in one mortal lifetime.
This shows that AI can take jobs away as well as make them better.
-
In conclusion, the physicians who used documentation support such as dictation assistance or medical scribe services engaged in more direct face time with patients than those who did not use these services
This shows that physicians using AI save more time and are able to interact with patients more.
-
The Da Vinci robotic surgical system developed by Intuitive surgicals has revolutionized the field of surgery especially urological and gynecological surgeries.
This paragraph shows how AI is being used in surgery. Robots are mimicking surgeons to perform surgery.
-
Radiology is the branch that has been the most upfront and welcoming to the use of new technology.
This paragraph talks about how Radiology is using AI. Radiology uses AI to help identify abnormal and normal scans more quickly, especially in busy hospitals with fewer staff.
-
A lot of AI is already being utilized in the medical field, ranging from online scheduling of appointments, online check-ins in medical centers, digitization of medical records, reminder calls for follow-up appointments and immunization dates for children and pregnant females to drug dosage algorithms and adverse effect warnings while prescribing multidrug combinations.
This shows the different ways AI is being utilized in medicine.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Employees at the company misusing their access, like Facebook employees using their database permissions to stalk women
This can be highly problematic, as the employees would basically be logged into your account and could even view posts set to the "only-me" privacy setting. This reminds me of how someone I know was mistreated by her manager and had a dispute over her wages, so right before handing in her resignation letter she leaked the company's database by posting it on Twitter, including the budgeting and the balance sheet.
-
-
socialsci.libretexts.org socialsci.libretexts.org
-
Listening to people who are different from us is a key component of developing self-knowledge. This may be uncomfortable, because our taken-for-granted or deeply held beliefs and values may become less certain when we see the multiple perspectives that exist.
Listening to the thoughts and opinions of people with differing cultures or political opinions with the intention to understand, instead of to respond, is such a powerful tool. It can help dismantle prejudices, make you a better advocate for your own values, and help you practice giving people room to communicate what they really intend to say rather than giving preloaded responses. I think most people would benefit greatly from engaging in this kind of practice on a regular basis.
-
-
socialsci.libretexts.org socialsci.libretexts.org
-
Self-discrepancy theory states that people have beliefs about and expectations for their actual and potential selves that do not always match up with what they actually experience (Higgins, 1987).
I have experienced this kind of expectation-to-reality relationship in some of my personal relationships. These people had an idea of what they could be if they could just stop being inadequate, an idea that only served to generate shame and guilt. Often there was never any real grounding for the things they expected of themselves, but they felt the weight of those expectations as if they were an undeniable reflection of their potential. I am sure much of this is related to external social expectations that are later internalized. These expectations seem to rarely serve as drivers for someone to be more productive; more often they seem to break people down and make them overall less likely to engage with life.
-
If a man wants to get into better shape and starts an exercise routine, he may be discouraged by his difficulty keeping up with the aerobics instructor or running partner and judge himself as inferior, which could negatively affect his self-concept.
One of our recent lectures identified the importance of an improvement mindset. Tools like these could help avoid developing unrealistic expectations that ultimately dissuade attempts at self improvement. They could provide an interpretive lens to contextualize feedback in ways that are more constructive.
-