- Oct 2024
-
Local file
-
Counterfeiting
The illegal importing of fake products disguised as certain brands
- A subset of this crime is currency counterfeiting, which involves the illegal importing of fake money
-
Money Laundering
- The process wherein the illegal acquisition of money is disguised to conceal its illicit origins and thus becomes "legitimate"
-
-
bookshelf.vitalsource.com
-
When we compare men who do and do not work outside the home, we are typically studying the effect of unemployment on health. This may explain why we often find greater benefits of paid work for men than for women. When we compare women who do and do not work outside the home, we are comparing employed women to two groups of nonemployed women—unemployed women, and women who choose not to work outside the home. The two groups are not the same.
This finding is really interesting to me, as I had never thought about the difference between the groups. While men don't usually have a model of doing non-paid work as a full-time job (like raising a child and tending to the house), women do, and they don't think of themselves as unemployed. I do still want to point out that this is a changing standard, as there is an emerging group of men who work as caregivers for their families rather than in paid work. Still, the generalization the book makes is not an incorrect one, and it is very intriguing to me.
-
-
social-media-ethics-automation.github.io
-
It has been host to white-supremacist, neo-nazi and other hate content
This is one reason I have found myself pushed away from libertarian ideas. Oftentimes I agree with the sentiment of a group, but the members who support that group reflect much more on the group than its actual principles; when people like white supremacists are your colleagues, you know you are on the wrong side.
-
-
www.sjsu.edu
-
We don't know exactly when paper was first used in China but evidence from archaeological records indicate that it was prior to the first century AD. In China, the most common source for paper was the bark of the paper mulberry tree (Gies & Gies 1994 p 182). Paper is made by tearing up rags, or the bark of trees or certain grasses and putting them into a large pot with water. The mixture is boiled and the pulplike mass that lies at the bottom is removed, strained and spread out to dry. Originally, paper was rough but, eventually over time, the making of paper was improved with sizing, dyes, and the use of molds made out of bamboo strips. Paper was first used by the Chinese for wrapping and it was not until the 3rd century that paper replaced bamboo, silk, and wood as a writing medium.
The whole idea of how paper was first used was interesting to me; we don't really hear about that nowadays.
-
Old Silk Road
The Silk Road was a route along which various goods were traded between regions.
-
There is not doubt that the Chinese invented gunpowder.
Gunpowder is an important military advancement that has allowed us to create better military strategies.
-
paper is one of the Chinese technologies that we can trace in its transfer to Western Europe.
Paper is an advancement that we use every day and will continue to use in the future.
-
-
www.sjsu.edu
-
One of the most important developments in the Middle Ages was the experimentation and developments in iron production. As noted by Bert Hall in his essay, "Iron is one of the most useful metals ever discovered, but it is also one of the more difficult metals to understand in history, especially in medieval history. Iron comes in several forms, and the complications involved in producing each of them fosters further confusion."
I didn't know about this experimentation and development.
-
One of the most elaborate clock towers was built by Su Sung and his associates in 1088 A.D.
It's interesting to know when the clock towers were built and by whom.
-
However, the most significant difference between the clock and other machines was in its effect on society.
Societal impacts are very important because society can either like or hate a technological invention.
-
-
x.com
-
Bill Gates talking about how he wants an AI program that bans harmful speech...that caused people to not get a vaccination... They want to take your freedom AND they want to inject you with poison to k!ll you.
This is an example of disinformation. Disinformation involves false content, fabricated content, and manipulated content, and this post on X has all of these. The account is providing false information about vaccines, claiming that they are filled with "poison to kill you." This information manipulates the viewer while spreading falsehoods. As I read the post, I noticed the aggression in the account's text, which flagged the post as likely containing inaccurate information. After reading the full post I realized just how harmful it could be if taken at face value.
-
-
askjan.org
-
Uninterrupted time that is reserved to work on job tasks is beneficial to those who have difficulty reorienting after being interrupted.
It can take me a long time to get in the zone only to be interrupted.
-
-
askjan.org
-
Separating tasks is an accommodation when an individual has difficulty managing more than one task at a time. This strategy can be implemented by the individual or the individual's manager. Tasks should be prioritized and a new project may be assigned when the previous project is complete.
Written prioritization of tasks helps me decide what to work on.
-
-
askjan.org
-
Job restructuring as a reasonable accommodation may involve reallocating or redistributing the marginal functions of a job.
-
-
blog.openstreetmap.org
-
point-in-time recovery
According to Anthropic's Claude LLM:
Point-in-time recovery refers to the ability to restore a database or system to a specific moment in the past. This feature allows administrators to recover data to a precise point before a failure, corruption, or unintended change occurred.
Key aspects of point-in-time recovery include:
- Continuous backups: The system regularly captures changes to data.
- Granular restoration: Allows recovery to any specific time within the backup window.
- Minimal data loss: Helps reduce the amount of data lost between backups.
- Flexibility: Useful for various scenarios like correcting human errors or recovering from cyberattacks.
This capability is especially valuable in database management systems and cloud storage solutions, providing a safety net for data integrity and business continuity.
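A minimal sketch of the idea in Python (the change log and its entries are hypothetical; real systems such as PostgreSQL implement this with base backups plus write-ahead-log replay):

from datetime import datetime

# Hypothetical continuous backup: every change is captured as (timestamp, key, new_value).
change_log = [
    (datetime(2024, 10, 1, 9, 0), "balance", 100),
    (datetime(2024, 10, 1, 9, 5), "balance", 250),
    (datetime(2024, 10, 1, 9, 7), "balance", 0),   # the unintended change we want to undo
]

def restore_to(target_time):
    """Rebuild state by replaying only the changes made at or before target_time."""
    state = {}
    for ts, key, value in change_log:
        if ts <= target_time:
            state[key] = value
    return state

# Granular restoration: recover the state as it was just before the 9:07 corruption.
print(restore_to(datetime(2024, 10, 1, 9, 6)))   # {'balance': 250}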
-
-
pdos.csail.mit.edu
-
In a closed group, only members can send messages to the group
But isn't this the same thing as source addressing?
-
If in this case FIFO (or causal) ordering had been used, it might have happened that the servers applied the updates in different orders, resulting in inconsistent replicas.
I don't quite understand why that is. And how is this achieved with total order? If FIFO were extended with some other comparable identifier besides the send time, you could sort by two keys: the time and (for example) a hash of the message. Then those two messages would have a definite order.
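The two-key idea suggested above does yield a deterministic total order once every replica holds the same set of messages; here is a minimal Python sketch of it (the messages and field names are made up):

import hashlib

# Two concurrent updates with the same send time; FIFO alone cannot order them.
messages = [
    {"sent_at": 1000, "body": "set x = 1"},
    {"sent_at": 1000, "body": "set x = 2"},
]

def total_order_key(msg):
    # Sort by send time first, then break ties with a content hash, so every
    # replica that holds the same messages sorts them identically.
    digest = hashlib.sha256(msg["body"].encode()).hexdigest()
    return (msg["sent_at"], digest)

for msg in sorted(messages, key=total_order_key):
    print(msg["body"])   # each replica applies the updates in this same order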
-
-
social-media-ethics-automation.github.io
-
Metadata is information about some data. So we often think about a dataset as consisting of the main pieces of data (whatever those are in a specific situation), and whatever other information we have about that data (metadata).
I have never heard of the term metadata previously, but I can see how it makes data interpretation much simpler. It is important to know the context behind the data that is being collected.
-
Metadata is information about some data. So we often think about a dataset as consisting of the main pieces of data (whatever those are in a specific situation), and whatever other information we have about that data (metadata).
I find this to be interesting. I never thought of splitting the data types in a post to understand it better, but it makes sense now. The Metadata is less about the tweet and more about the background information of the post, while the Data is the main tweet and the point the person is trying to make.
-
Metadata is information about some data. So we often think about a dataset as consisting of the main pieces of data (whatever those are in a specific situation), and whatever other information we have about that data (metadata)
I think that the importance of metadata and the contextual power it holds is not often recognised. It adds another layer of depth to a post by including background information about the post. In addition, a sense of ownership of the post is included as part of the metadata. Viewed from a different perspective, though, it can also be deemed controversial, as it is quite intrusive: it exposes user location, movements, behavioural insights, and timestamps, which many users may not approve of.
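A toy illustration of the data/metadata split these annotations describe, sketched in Python (the post and all field names are invented):

# The "data": the content of the post itself.
post_data = "Happy birthday to my best friend!"

# The "metadata": information about that data.
post_metadata = {
    "author": "@example_user",                # hypothetical account
    "posted_at": "2024-10-01T09:00:00Z",
    "location": "Seattle, WA",                # the kind of field the last annotation calls intrusive
    "likes": 42,
}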
-
-
musaint-my.sharepoint.com
-
-
The Future of Autonomous Vehicles
How autonomous cars will reshape urban transportation. Ethical implications of self-driving vehicles. The potential for autonomous public transport systems.
-
Electric Vehicles (EVs) and Sustainability
The role of electric vehicles in reducing carbon emissions. The challenges and opportunities in EV infrastructure development. Comparison between EVs and traditional fuel-powered vehicles: environmental and economic perspectives.
-
Hyperloop Technology and Its Impact on Long-Distance Travel
How Hyperloop could revolutionize intercity transportation. Technological and financial challenges in implementing Hyperloop systems. Environmental benefits of ultra-high-speed transport.
-
Urban Mobility Innovations
How e-scooters, bikes, and ride-sharing apps are changing urban commuting. The future of smart cities and the role of connected transportation systems. Policy challenges surrounding micromobility in congested cities.
-
AI and Machine Learning in Transportation
How AI is optimizing traffic flow and reducing congestion. Predictive maintenance in transportation through machine learning. AI-driven logistics: revolutionizing freight and delivery systems.
-
Drones and Aerial Transportation
The use of drones for delivery services: possibilities and challenges. The future of passenger drones and urban air mobility. Regulatory and safety concerns surrounding the widespread use of drones.
-
Sustainable Transport Solutions
Innovations in public transport to make it more sustainable. The role of hydrogen fuel cells in the future of transportation. The importance of integrating renewable energy sources into transportation infrastructure.
-
The Role of Big Data in Transportation
How data analytics can improve traffic management and reduce congestion. The impact of big data on the logistics and freight industry. Privacy concerns and the use of personal data in smart transportation systems.
-
-
Here is what ChatGPT said to my question:
There isn't a universally agreed-upon definition of a "Smart City," but the concept generally refers to an urban area that uses technology and data-driven solutions to improve the quality of life for its citizens, enhance sustainability, and optimize the efficiency of city services.
A common definition of a Smart City typically includes the following elements:
1. Use of Technology and IoT (Internet of Things)
- Smart cities leverage connected devices and sensors to gather data in real-time. These technologies are used to monitor various aspects of urban life, such as traffic flow, energy consumption, and public safety.
2. Data-Driven Decision-Making
- The data collected from IoT devices and other sources is analyzed to make informed decisions and improve the delivery of public services. This data helps city administrators optimize resources, reduce waste, and respond more quickly to emerging issues.
3. Sustainability and Efficiency
- Smart cities aim to reduce environmental impact by improving energy efficiency, reducing pollution, and promoting sustainable transportation. This often includes initiatives like smart grids, electric vehicle infrastructure, and waste management systems.
4. Citizen Engagement and Quality of Life
- A key feature of smart cities is the use of digital platforms and apps to engage citizens directly in decision-making and to improve their experience with city services. This could range from real-time public transportation updates to platforms that allow residents to report issues like streetlight outages.
5. Integrated Infrastructure
- Smart cities focus on creating seamless systems that integrate transportation, energy, healthcare, public safety, and other key services into a unified, technology-driven network. This improves efficiency and reduces redundancies in city operations.
6. Resilience and Flexibility
- A smart city is often designed to be adaptable and resilient, using predictive analytics to anticipate challenges such as extreme weather events, infrastructure failures, or population growth, and to respond proactively.
In summary, a Smart City uses advanced technology and data to create more efficient, sustainable, and citizen-centered urban environments. Some examples include cities like Barcelona, Singapore, and Amsterdam, which have adopted smart city technologies to improve urban living.
-
-
social-media-ethics-automation.github.io
-
Because all data is a simplification of reality, those simplifications work well for some people and some situations but can cause problems for other people and other situations.
I think this is a great way to describe data and the benefits it has on simplifying the findings of our reality. It is also important to note that there are benefits and drawbacks to the collection of data as it may help us collect intel for future development, but also can cause a breach in privacy.
-
Data collection and storage can go wrong in other ways as well, with incorrect or erroneous options. Here are some screenshots from a thread of people collecting strange gender selection forms:
Data collection practices should prioritize inclusivity and respect for diverse gender identities. It's crucial to offer comprehensive and sensitive options that accurately represent the full spectrum of gender expressions. Thoughtful design of gender selection forms not only improves data accuracy but also demonstrates respect for all individuals, fostering a more inclusive environment.
-
-
www.theplayerstribune.com
-
The good and the bad. / We have given each other / All that we have.
This reminds me of anytime I hear someone describe something they have interest in. For instance, my mom really enjoys gardening but there are also times where she doesn't like to do it. That doesn't necessarily take away any of her love for that hobby. I think this helps the average person realize that Kobe isn't just a celebrity player but also a person that shares the same complex emotions as regular people.
-
I played through the sweat and hurt / Not because challenge called me / But because YOU called me.
Shows his dedication to playing basketball because of his love for it
-
clock
Tends to place somewhat similarly sounding or spelt words together at the ends of lines every so often, words like socks and clock, together and other.
-
5 … 4 … 3 … 2 … 1
Places periods and a line specifically for the countdown numbers, for effect
-
I’ll always be that kid
Loops back to memories of his youth when he was just a kid excited to play, making another connection with the addressed audience
-
As a six-year-old boy
Bringing up a memory of being a young child and aspiring to become successful in what they love which can create a connection with the addressed audience
-
-
social-media-ethics-automation.github.io
-
Fig. 4.5 The number of replies, retweets, and likes can be represented as integer numbers (197.8K can be stored as a whole number like 197,800).
The number of likes and comments on social media posts can provide valuable insights into user engagement and content popularity. These metrics often serve as indicators of a post's reach and impact, reflecting how well the content resonates with the audience. However, it's important to note that while these numbers can be easily quantified (for example, 197,800 likes), their true meaning goes beyond mere statistics. They represent real people interacting with and responding to the content, potentially influencing opinions and sparking conversations.
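As a small illustration, the rounded display value can always be derived from the exact stored integer, but not the other way around; a sketch in Python (the function is hypothetical, not how any platform actually formats counts):

def abbreviate(count):
    # Format an exact like count the way platforms display it, e.g. 197800 -> '197.8K'.
    if count >= 1_000_000:
        return f"{count / 1_000_000:.1f}M"
    if count >= 1_000:
        return f"{count / 1_000:.1f}K"
    return str(count)

likes = 197_800            # stored as a whole number
print(abbreviate(likes))   # '197.8K': the display loses precision, the stored integer does not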
-
“H”, “A”, “P”, “P”, “Y”, ” “, “B”, “I”, “R”, “T”, “H”, “D”, “A”, “Y”.
It's interesting that from the human perspective we are just spelling out the phrase "Happy Birthday" as a whole, but we never realize or pay attention to the fact that the string is actually a combination of the different letters that make up the words.
-
Computers typically store text by dividing the text into characters (the individual letters, spaces, numerals, punctuation marks, emojis, and other symbols). These characters are then stored in order and called strings (that is a bunch of characters strung together, like in Fig. 4.6 below).
This reminds me of the Java language I learned in my CSE class. In Java, a series of characters is also called a string. But in Java, if you want to include certain symbols in a string, such as quotation marks, you need to add a backslash ("\") so the machine can recognize them properly.
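Python behaves much like the Java described above; a quick sketch of both points:

text = "HAPPY BIRTHDAY"
print(list(text))   # ['H', 'A', 'P', 'P', 'Y', ' ', 'B', 'I', 'R', 'T', 'H', 'D', 'A', 'Y']

# As in Java, a quotation mark inside a string needs a backslash escape.
quoted = "She said \"hello\""
print(quoted)       # She said "hello"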
-
When computers store numbers, there are limits to how much space can be used to save each number. This limits how big (or small) the numbers can be, and causes rounding with floating-point numbers. Additionally, programming languages might include other ways of storing numbers, such as fractions, complex numbers, or limited number sets (like only positive integers).
This reminds me that computers have a similar problem when dealing with characters. This is because computers use an encoding set that can only contain a limited number of symbols and characters. Overall, computers, as powerful as they are, still have limitations in storing and processing numbers and symbols. These limitations come from the underlying hardware design and coding methods, not because the computer “doesn't want” to recognize everything, but more from technical trade-offs and design decisions.
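Python makes these trade-offs easy to see; a small sketch of the rounding and of the alternative number representations the passage mentions:

from fractions import Fraction

# Floating-point numbers round because they have limited storage space.
print(0.1 + 0.2)                          # 0.30000000000000004, not exactly 0.3

# Fractions store exact ratios instead.
print(Fraction(1, 10) + Fraction(2, 10))  # 3/10

# Complex numbers are another built-in way of storing numbers.
print((1 + 2j) * (3 - 1j))                # (5+5j)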
-
Binary consisting of 0s and 1s make it easy to represent true and false values, where 1 often represents true and 0 represents false. Most programming languages have built-in ways of representing True and False values.
This was a shock to me. I didn't know much about binary, but it turns out it can represent both true and false states. I also just learned about the direct mapping of boolean types to binary, which makes computing over this data much simpler.
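In Python the mapping is literal; booleans are built on integers:

print(int(True), int(False))   # 1 0
print(True == 1)               # True: True and False behave as the integers 1 and 0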
-
-
social-media-ethics-automation.github.io
-
I found it interesting how the different datatypes are represented in Python. This is because I have been learning Java in CSE 121, and it gets tedious to specify the datatype for every variable you create. I also found it interesting to reflect on how social media posts affect people who haven't consented to being in them. An example given was taking a selfie with a minor and posting it on social media without their consent. This is important to think about because a lot of people have been affected negatively by being put in social media posts they didn't consent to be in, so I believe it is important to ask for consent before filming someone and posting the video.
-
Now, there are many reasons one might be suspicious about utilitarianism as a cheat code for acting morally, but let’s assume for a moment that utilitarianism is the best way to go. When you undertake your utility calculus, you are, in essence, gathering and responding to data about the projected outcomes of a situation. This means that how you gather your data will affect what data you come up with. If you have really comprehensive data about potential outcomes, then your utility calculus will be more complicated, but will also be more realistic. On the other hand, if you have only partial data, the results of your utility calculus may become skewed. If you think about the potential impact of a set of actions on all the people you know and like, but fail to consider the impact on people you do not happen to know, then you might think those actions would lead to a huge gain in utility, or happiness.
This reminds me most of measuring value of life in systems such as trolley problems or AI car decision making. Is a doctor more worthy of being saved than a musician? Or a depressed person? Or a felon? Where do you draw the line? If you draw a line, how many "felon lives" equals one doctor life? Utilitarianism to me isn't a morality system itself but a coping mechanism to allow humans to rationalize tough decisions. But when humans put the same logic in computers, it's not a coping strategy for a computer's feelings, but just a flawed series of priorities.
-
Can you think of an example of pernicious ignorance in social media interaction? What’s something that we might often prefer to overlook when deciding what is important?
When charitable organizations raise funds for patients suffering from a specific illness, they may interview these patients and their relatives to evoke empathy. However, they overlook the feelings of the patients and their relatives: making patients recall their pain and speak in front of a camera aggravates their suffering.
-
One classic example is the tendency to overlook the interests of children and/or people abroad when we post about travels, especially when fundraising for ‘charity tourism’. One could go abroad, and take a picture of a cute kid running through a field, or a selfie with kids one had traveled to help out. It was easy, in such situations, to decide the likely utility of posting the photo on social media based on the interest it would generate for us, without thinking about the ethics of using photos of minors without their consent. This was called out by The Onion in a parody article, titled “6-Day Visit To Rural African Village Completely Changes Woman’s Facebook Profile Picture”.
This paragraph makes a point that I really agree with. It is something that I have not considered before, but I have experienced personally. More specifically, many elders around me like to use their children's pictures as their social media profile pictures, and I didn't care about their behavior when I was young because I didn't use social media at all. But now I think it was really inappropriate, because it was really embarrassing to watch my childhood self wearing some funny clothes and being seen by everyone on social media. I think you need to be careful before using a particular image, or get permission from the people in the image.
-
Now, there are many reasons one might be suspicious about utilitarianism as a cheat code for acting morally, but let’s assume for a moment that utilitarianism is the best way to go. When you undertake your utility calculus, you are, in essence, gathering and responding to data about the projected outcomes of a situation. This means that how you gather your data will affect what data you come up with. If you have really comprehensive data about potential outcomes, then your utility calculus will be more complicated, but will also be more realistic. On the other hand, if you have only partial data, the results of your utility calculus may become skewed. If you think about the potential impact of a set of actions on all the people you know and like, but fail to consider the impact on people you do not happen to know, then you might think those actions would lead to a huge gain in utility, or happiness.
This text points out the impact utilitarianism may have on moral decision-making. In other words, utilitarianism has certain limitations. In detail, if the data is not complete in the process of data collection, then its final result will have a certain bias. For example, only considering the impact of people you know and like on yourself, and ignoring the impact of people you don't know on yourself, then the results of the experiment are not comprehensive enough.
-
-
legacy.cs.indiana.edu
-
It is not unusual for software modifications to be made in the field. Programmers are transported by helicopter to Navy ships:
As technology advances, much programming work can be done remotely. Programmers therefore don't need to be transported into the field for software modifications; instead, they can work anywhere they want.
-
-
social-media-ethics-automation.github.io
-
This part makes me wonder, if we place constraints in the Python language, what the syntax would look like. Another thing that I wonder about here is what makes a string with letters like "a" different from emojis in their makeup.
-
-
social-media-ethics-automation.github.io
-
Images are created by defining a grid of dots, called pixels. Each pixel has three numbers that define the color (red, green, and blue), and the grid is created as a list (rows) of lists (columns).
Are these the three primary colors that can make up all colors? This is very clever: only three colors can make up all colors. But I am curious how the computer mixes these colors in the correct proportions to produce an exact color. Also, how is white composed?
-
Images are created by defining a grid of dots, called pixels. Each pixel has three numbers that define the color (red, green, and blue), and the grid is created as a list (rows) of lists (columns).
This reminds me of every time I try to take a picture of my computer screen with my phone camera. The image always shows tiny red, green, and blue lights, which is probably due to a similar reason. A display is made up of many tiny pixels, and each of these pixels is composed of subpixels that emit red, green, or blue light. When you try to capture the screen, the camera picks up these individual subpixels, revealing the RGB pattern that our eyes don’t normally see. This highlights how the underlying structure of digital displays, just like the limitations of computer storage, affects what we see and how information is processed.
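A tiny sketch of that rows-of-columns structure in Python; it also answers the earlier annotation's question, since white is simply all three channels at full intensity:

# A 2x2 image as a list (rows) of lists (columns) of (red, green, blue) pixels.
image = [
    [(255, 0, 0), (0, 255, 0)],       # red, green
    [(0, 0, 255), (255, 255, 255)],   # blue, white (all three channels at maximum)
]

for row in image:
    for r, g, b in row:
        print(f"R={r} G={g} B={b}")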
-
There are multiple ways to save dates and times. Some options include a series of numbers (year, month, day, hour, minute, and second), or a string containing all of this information. Sometimes only the date is saved without time information, and sometimes the time information includes a time zone.
There are different benefits to different ways of representing time. Including the time zone in the time information is very helpful for people in other countries, so we can be sure of when a tweet was actually sent. For example, if I'm studying in the US and my family sends a tweet from China, not including the time zone would make it impossible to determine exactly which "yesterday" the tweet was sent on.
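Python's standard library shows why the zone matters: the same instant reads as different local days (a sketch with made-up times):

from datetime import datetime
from zoneinfo import ZoneInfo

# One instant, viewed from two zones: still the evening of Oct 1 in Seattle,
# but already the afternoon of Oct 2 in Shanghai.
sent = datetime(2024, 10, 1, 23, 30, tzinfo=ZoneInfo("America/Los_Angeles"))
print(sent.isoformat())                            # 2024-10-01T23:30:00-07:00
print(sent.astimezone(ZoneInfo("Asia/Shanghai")))  # 2024-10-02 14:30:00+08:00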
-
-
www.independent.co.uk
-
items from the prison have been put in the resistance museum in Glieres,
Fun fact
-
There he was tortured by the infamous Gestapo officer Klaus Barbie, and his assistants.
Look up Barbie
-
-
social-media-ethics-automation.github.io
-
Data points often give the appearance of being concrete and reliable, especially if they are numerical. So when Twitter initially came out with a claim that less than 5% of users are spam bots, it may have been accepted by most people who heard it. Elon Musk then questioned that figure and attempted to back out of buying Twitter, and Twitter is accusing Musk’s complaint of being an invented excuse to back out of the deal, and the case is now in court.
I think data and numbers are the type of information humans absorb the most. They simplify things and make them easier to understand, but this leaves gaps in the data that people fill in themselves. Since the data doesn't tell the whole story and is more of a final product, it isn't always reliable, and we shouldn't focus only on it.
-
As you can see in the apple example, any time we turn something into data, we are making a simplification.1 If we are counting the number of something, like apples, we are deciding that each one is equivalent. If we are writing down what someone said, we are losing their tone of voice, accent, etc. If we are taking a photograph, it is only from one perspective, etc. Different simplifications are useful for different tasks. Any given simplification will be helpful for some tasks and be unhelpful for others. See also, this saying in statistics: All models are wrong, but some are useful
The article's apple example ignores the variations in each apple, such as size, color, and quality, by simply counting the quantity of apples. Similar to this, when you record a conversation, the emotional details like tone and intonation are lost even though the text material is recorded. Moreover, taking a picture can only depict a portion of the scene; it cannot depict the entire scene. Every simplification technique has its limitations, but the effectiveness of each technique is determined by how well it can deliver relevant information for a given task in a given situation.
-
-
guides.loc.gov
-
Unfortunately, it was entirely avoided by the Germans as their superior Panzer tanks (Panzerkampfwagen) plowed through the "impenetrable" Ardennes forest as part of the Blitzkrieg that characterized early German victories in WWII.
This makes more sense
-
historians point to the fatigue of the French citizenry, who were still recovering from the loss of young men killed during WWI.
I have wondered why Germany was able to so easily march into France, this makes sense.
-
Pétain headed the new government from the southern spa town of Vichy, but his authority was limited, and many regarded the Vichy regime as a mere puppet government
Vichy Water - Casablanca
-
-
social-media-ethics-automation.github.io
-
Design a social media site
(Not designing one myself, but breaking down a funny social media site that I've seen.) Pithee was a site designed as a way to view and rank "shitposts" at a rapid pace. To accomplish this, the site is laid out with a banner and 5 blocks in the middle. The banner is small, has the logo and donation links in the corners, and has leaderboard/profile/post in the center. The 5 blocks have 4 randomized posts by other users, the "most voted on" winner from the last 15 minutes at the top, and a shuffle button at the bottom.
This layout prioritizes reading anonymous user's posts, deprioritizes users' personal scores, and makes branding and donation opportunities only for those who want to support the platform.
-
-
pressbooks.pub
-
Not so, O friends! will the God deign to enter and inhabit you, but by a method precisely the reverse.
...we can inhabit God?
-
The Democrats from New Hampshire!
Was this written before or after the party switch?
-
Columbus found the New World
Side-eyeing this one...
-
For every Stoic was a Stoic; but in Christendom where is the Christian?
What a line!!
-
Where is the master who could have taught Shakspeare? Where is the master who could have instructed Franklin, or Washington, or Bacon, or Newton? Every great man is a unique.
This is ignoring a lot of progress and people who learned from those who came before them...everything we have right now has been steadily built up over time.
-
Swedenborgism
Never heard of this until now - I looked it up, and it's a religion referred to as The New Church and seems to be related to Christianity?
-
As men’s prayers are a disease of the will, so are their creeds a disease of the intellect.
This sounds very discrediting to those who find strength through prayer and their religious doctrine.
-
But prayer as a means to effect a private end is meanness and theft.
What about prayer for personal protection? Where does that fall in the realm of the as mentioned goodness and selfishness?
-
for he does not postpone his life, but lives already
I feel like the idea of postponing living how you want to for the sake of something else, like getting married/getting a degree/etc. is around a lot today as well.
-
We shun the rugged battle of fate, where strength is born.
Weirdly enough, this reminds me of the Illiad & the Odyssey.
-
I like the silent church before the service begins, better than any preaching.
For whatever reason, this was really powerful to me. It really is a certain feeling to be standing alone in a silent church - the architecture itself, stained glass and wood, feels more powerful than any preaching I've heard.
-
Truth and Right
Especially because of the capitalization, I'd like to see what Emerson defines "truth" and "right" as.
-
It is as easy for the strong man to be strong, as it is for the weak to be weak.
It's that same rhetorical device I mentioned before!
-
Man is timid and apologetic; he is no longer upright; he dares not say ‘I think,’ ‘I am,’ but quotes some saint or sage.
Quoting the words of others does enhance the credibility of an argument, though!
-
Is the parent better than the child into whom he has cast his ripened being?
They're their own individuals!
-
Who is the Trustee?
who to trust?
-
in the world which exists for him
in the world that exists for man. ok.
-
all history resolves itself very easily into the biography of a few stout and earnest persons.
few?
-
the soldier should receive his supply of corn, grind it in his hand-mill, and bake his bread himself.
someone still has to supply him the corn, making him rely on someone else for his food, thus breaking self-reliance down completely
-
a true man belongs to no other time or place, but is the centre of things. Where he is, there is nature.
actually, I'm quite intrigued about WHAT man gets to be in the center of everything..... who? and, what would happen if every man chose to be in the spotlight? then what?
-
Shakspeare will never be made by the study of Shakspeare
Then why preserve or analyze Shakespeare 100s of years later if it is not made by its study?
-
a true man belongs to no other time or place, but is the centre of things. Where he is, there is nature.
classsicccccccccc
-
Welcome evermore to gods and men is the self-helping man. For him all doors are flung wide
Is man self helping if he relies on a deity?
-
Consider whether you have satisfied your relations to father, mother, cousin, neighbour, town, cat, and dog; whether any of these can upbraid you.
Contradictory advice, as Emerson, who supported Christianity, is actively telling people not to trust their father and mother, in direct violation of the fifth commandment
-
If we cannot at once rise to the sanctities of obedience and faith, let us at least resist our temptations
Won't human impulse and genetic desires make this next to impossible?
-
But now we are a mob
Mobs and group thinking were not new in the 1800s; there are earlier historical examples in places like Greece.
-
‘I think,’ ‘I am,’
Is that not the result of what self-reliance may perform?
-
It seems to be a rule of wisdom never to rely on your memory alone, scarcely even in acts of pure memory
Would it not help then to rely on others to take the load of remembering so much?
-
Meantime nature is not slow to equip us in the prison-uniform of the party to which we adhere.
How do we get freedom from nature then?
-
but the great man is he who in the midst of the crowd keeps with perfect sweetness the independence of solitude.
How can this be beneficial to reject others?
-
There is the man and his virtues.
Assumption that all men are inherently virtuous
-
feminine rage
I might be wrong, but is Emerson using "feminine rage" to further the idea that women are emotional and angry? Is that why "feminine" is used here without any woman being involved in the context?
-
truth is handsomer than the affectation of love
Should there not be a balance to avoid a life of misery?
-
“Man is his own star; and the soul that can Render an honest and a perfect man, Commands all light, all influence, all fate; Nothing to him falls early or too late.
I'm confused about what this exactly is saying.
-
but the great man is he who in the midst of the crowd keeps with perfect sweetness the independence of solitude.
regardless of outside noise, the ability to stay content and focus is the key to happiness.
-
Thoughtless people contradict as readily the statement of perceptions as of opinions, or rather much more readily; for, they do not distinguish between perception and notion.
It's important to know the difference between the objective and the subjective!
-
It is easy in the world to live after the world’s opinion; it is easy in solitude to live after our own
the voice of the world can really take a toll on what you believe or wish to do. Once you live by your own wishes and ideas, everything tunes out.
-
What I must do is all that concerns me, not what the people think
minding his own business, expectation that others will do the same.
-
The world has been instructed by its kings, who have so magnetized the eyes of nations. It has been taught by this colossal symbol the mutual reverence that is due from man to man. The joyful loyalty with which men have everywhere suffered the king, the noble, or the great proprietor to walk among them by a law of his own, make his own scale of men and things, and reverse theirs, pay for benefits not with money but with honor, and represent the law in his person, was the hieroglyphic by which they obscurely signified their consciousness of their own right and comeliness, the right of every man.
I wonder what he'd think of people like Elon Musk...I feel like the author would own a Cyber Truck. /j
-
all history resolves itself very easily into the biography of a few stout and earnest persons.
I don't agree with this - I feel like history is a culmination of an incredible amount of people, ideas, thoughts, and movements. I feel like this can also be really exclusive of people who were still important but not put into general historical texts because of not being a cis straight white Christian man?
-
Greatness appeals to the future.
Reminds me of how history is recorded, and who we remember in its pages.
-
Your genuine action will explain itself, and will explain your other genuine actions. Your conformity explains nothing.
Be genuine and hold true to your values, they chart the path.
-
My book should smell of pines and resound with the hum of insects.
I love the vibe of the book, but I'd hate the insects.
-
-
www.edutopia.org
-
An authentic audience breathes life into both tech-rich and low-tech tasks
I really appreciate the author's emphasis on the student environment. Technology isn't the goal in itself; it is a tool for the student to use.
-
-
doc-04-1g-prod-01-apps-viewer.googleusercontent.com
-
The meaning of a social relationship may be agreed upon by mutual consent.
does this discount hostile relationships
-
The meaningful content which remains relatively constant in a social relationship is capable of formulation in terms of maxims which the parties concerned expect to be adhered to by their partners on the average and approximately
wtf
-
"correct" or a metaphysically "true" meaning.
not one "true" concrete meaning, just the theoretically formulated one or the ones imbued by the actors
-
Hence, the definition does not specify whether the relation of the actors is co-operative or the opposite
oriented towards others- doesn't need to be co-operative
-
It would be very unusual to find concrete cases of action, especially of social action, which were oriented only in one or another of these ways. Furthermore, this classification of the modes of orientation of action is in no sense meant to exhaust the possibilities of the field, but only to formulate in conceptually pure form certain sociologically important types to which actual action is more or less closely approximated or, in much the more common case, which constitute its elements.
most cases not concrete - a combination of the above orientations
-
him
"irrational" value driven values that are not in the individuals best interest
-
clearly self-conscious formulation of the ultimate values governing the action and the consistently planned orientation of its detailed course to these values
the pre-plannedness and consciousness of value-rational action distinguishes it from affectual action
-
these expectations are used as "conditions" or "means" for the attainment of the actor's own rationally pursued and calculated ends
first type of social action- expectations of behavior from environment and other people- rational for what someone wants
-
But conceptually it is essential to distinguish them, even though merely reactive imitation may well have a degree of sociological importance at least equal to that of the type which can be called social action in the strict sense.
need to distinguish meaningful orientation from influences, even though it's hard to figure out what the true social action is
-
both the orientation to the behavior of others and the meaning which can be imputed by the actor himself, are by no means always capable of clear determination and are often altogether unconscious and seldom fully self-conscious.
who it's for and how the actor articulates why they do something is not wholly conscious; often it isn't conscious at all.
-
both
if an individual is replicating an action for the purpose of social orientation (fashion trends for status), it is meaningful social action.
-
found to employ some apparently useful procedure which he learned from someone else does not, however, constitute, in the present sense, social action. Action such as this is not oriented to the action of the other person, but the actor has, through observing the other, become acquainted with certain objective facts; and it is these to which his action is oriented
copying others' behavior as a useful means to an end isn't inherently social
-
In such cases as that of the influence of the demagogue, there may be a wide variation in the extent to which his mass clientele is affected by a meaningful reaction to the fact of its large numbers; and whatever this relation may be, it is open to varying interpretations
actions within crowds are not considered at a high level of meaning, but if one does have implications there are many possible interpretations.
-
Others become more difficult under these conditions. Hence it is possible that a particular event or mode of human behavior can give rise to the most diverse kinds of feeling - gaiety, anger, enthusiasm, despair, and passions of all sorts - in a crowd situation which would not occur at all or not nearly so readily if the individual were alone.
sometimes people experience something that can only be experienced in a crowd; alone they cannot achieve similar things
-
action conditioned by crowd
action conditioned by crowds is "crowd psychology"; it differs from the case of many people doing the same thing because they are all being influenced by the same thing
-
The economic activity of an individual is social only if it takes account of the behavior of someone else. Thus very generally it becomes social insofar as the actor assumes that others will respect his actual control over economic goods.
well isn't everything social economically then? DING DING DING DURKHEIM
-
which includes both failure to act and passive acquiescence, may be oriented to the past, present, or expected future behavior of others
cool
-
But the difficulty need not prevent the sociologist from systematizing his concepts by the classification of possible types of subjective meaning. That is, he may reason as if action actually proceeded on the basis of clearly self-conscious meaning. The resulting deviation from the concrete facts must continually be kept in mind whenever it is a question of this level of concreteness, and must be carefully studied with reference both to degree and kind
people's lack of consciousness of their meaning doesn't mean it should be taken less seriously as a motive.
-
The theoretical concepts of sociology are ideal types not only from the objective point of view, but also in their application to subjective processes. In the great majority of cases actual action goes on in a state of inarticulate half-consciousness or actual unconsciousness of its subjective meaning. The actor is more likely to "be aware" of it in a vague sense than he is to "know" what he is doing or be explicitly self-conscious about it. In most cases his action is governed by impulse or habit.
theoretical concepts are also idealized in that the actor rarely "knows" why they do something
-
-
static1.squarespace.com
-
What festivals of atonement, what sacred games shall we need to invent?
atonement: the reconciliation of God and humans through Jesus Christ
-
How were we able to drink up the sea? Who gave us the sponge to wipe away the entire horizon? What did we do when we unchained the earth from its sun?
how do we witness "God's work" despite killing him?
-
many of those who did not believe in God were standing together there, he excited considerable laughter
mocking the "madman" for believing in god in this alternative universe where religion haven't taken a significant place as our world today
-
-
react.dev
-
you specify a CSS class with className. It works the same way as the HTML class att
Test anno
-
s are made out of components. A component is a piece of the UI (user interface) that has its own logic and appearance. A component can be as small as a bu
this is a test
-
-
docdrop.org
-
Fiscal neutrality would thus respect the wishes of local taxpayers, but it would not necessarily meet the needs of local students. Indeed, fiscal neutrality really promised equity for tax-payers, who could be assured that the same tax rates would result in the same amount of funding for schools. It did not guarantee equity for students.
Honestly, I am a bit lost reading about fiscal neutrality and what it means for the amount of funding schools will get, mostly the taxing part of it. I'm a bit confused about how this changes funding in rich and poor districts. It also says here that fiscal neutrality would "respect the wishes of local taxpayers," so is it more about "getting their money's worth," but not necessarily looking at students' needs and basing funding off of that?
-
In the past, most states simply provided a flat grant to school districts based on the number of students in the district. Each student received an equal amount of funding, which obviously did nothing to offset the inequalities in local funding
Giving an equal amount of funding to each student is an example of equality but schools should instead meet needs based on equity. Instead of treating everyone the same and giving the same amount of resources and funding for each student, giving based on the individual needs of students would instead help a lot more. Some students may live in poverty, while others may not. Some students may have disabilities that require more resources.
-
-
www.americanyawp.com
-
Plan Espiritual de Aztlán, a Chicano nationalist manifesto that reflected Gonzales’s vision of Chicanos as a unified, historically grounded, all-encompassing group fighting against discrimination in the United States.
Fighting discrimination against Mexican Americans
-
March on Washington. The march called for, among other things, civil rights legislation, school integration, an end to discrimination by public and private employers, job training for the unemployed, and a raise in the minimum wage.
March for national action instead of the slow-moving state governments which prolonged segregation.
-
President Lyndon Johnson
signed the Civil Rights Act
-
Medgar Evers was assassinated at his home in Jackson, Mississippi.
Murdered civil rights leader
-
the Albany Movement,
Civil rights movement in Albany, Georgia
-
The Albany Movement included elements of a Christian commitment to social justice in its platform, with activists stating that all people were “of equal worth” in God’s family and that “no man may discriminate against or exploit another.”
A brave movement in such a racist city.
-
-
furnaceandfugue.org
-
A Wolfe coming from the East, and a Dogge from the West werry'd one another.
This artist has never seen a wolf before lmao
-
, because fires meeting together doe one destroy the other.
guess you CAN fight fire with fire
-
But the wolfe recovering strength afterwards overthrowes the dogge, and being cast downe never leaves him till hee be utterly killd and dead; In the meane time receiving from the dogge noe lesse wounds nor lesse mortall, than hee gave him, till they werry one another to death:
Dog is domesticated/merciful but has breeding that makes it superior to the wolf in some way; both strengths and weaknesses lead to the deaths of both. Likewise, acidic concoctions are usually destroyed or transformed by alkaline mixtures, but each transforms the other
-
Avicenne sayth they lye in dung neglected and rejected by the vulgar, which, if they be joynd together, are able to complete the Magistery
Arabic writings finally reincorporated into European knowledge and discourse by 1618 after humanist rejection
-
-
www.americanyawp.com
-
offered low-interest home loans, a stipend to attend college, loans to start a business, and unemployment benefits.
Helped military "servicemen"
-
Federal Housing Administration (FHA),
Mortgage insurance and protection
-
Home Owners’ Loan Corporation (HOLC)
refinanced mortgages so that people could have more time to pay their loans
-
with all deliberate speed” was so vague and ineffectual that it left the actual business of desegregation in the hands of those who opposed it.
Brown tried to desegregate schools, but this phrase was almost fatal to the attempt because some states' "deliberate speed" was very slow, aka never.
-
Levittown, the prototypical suburban community, in 1946 in Long Island, New York. Purchasing large acreage, subdividing lots, and contracting crews to build countless homes at economies of scale, Levitt offered affordable suburban housing to veterans and their families
Levitt invested in suburban development for affordable housing
-
Sarah Keys v. Carolina Coach Company, in which the Interstate Commerce Commission ruled that “separate but equal” violated the Interstate Commerce Clause of the U.S. Constitution.
desegregation of interstate travel
-
Shelley v. Kraemer, declared racially restrictive neighborhood housing covenants—property deed restrictions barring sales to racial minorities—legally unenforceable.
Supreme Court outlawed discrimination against Black people in house sales.
-
In Shelley v. Kraemer
Supreme Court ruled to eliminate racist housing restrictions
-
-
dataxdesign.io
-
His scholarly expertise and lived experience together pointed to the fact that, on its own, data visualization could not hope to convey a complete picture of the progress of Black Americans to date, nor could it convey the extent of the obstacles that were required to be overcome.
I do think these limits make sense, but if more specific examples were listed here it would be clearer. Data analysis and visualization are meant to surface latent information and give a more general description of the numbers, but here the author mentions a "limit." How should we think about that limit?
-
-
furnaceandfugue.org
-
But sayth Count Bernhard in his Epistle, I tell you truely, that noe water dissolves a metallicke species by naturall reduction, except that which continues with it in matter and forme, and which the metalls themselves can recongeale:
Beginning of understanding stable vs unstable elements, based on electrons later on. At least there's some Chymystry here
-
if it be not suppositious
Oh yeah, this book is very good at not supposing things. Glad to see the extreme dedication and care made to fact checks
-
What wonder therefore, if the Philosophers would have their dragon shutt up in a caverne with a woman?
As odd as this all is, it does capture the Aristotelean idea of observing nature in its "natural state," i.e. Don't put a cat into water to see how it will act, same with Dragons and women I suppose?
-
then will Pluto blow a blast, and draw a volatile fiery spirit out of the cold dragon, which with its great heat will burne the Eagles feathers, and excite such a sudorificke bath, as to melt the Snow at the top of the mountains, and turn it into water;
Let him cook?
-
-
canvas.tufts.edu
-
HIJOS trying to combat the atrocities.
-
Military dictatorships are detrimental to the community.
-
Preparing the community for crimes and trauma.
-
-
bongolearn.zendesk.com
-
If you are manually assigning learners to groups, each learner must open a Bongo page while logged into their individual account before you assign them into groups. Learners who have not yet accessed Bongo will not appear in the lists described below for group assignment. Once students see the Bongo interface (e.g. their activity or a list of activities), then no further action is required on their part; they are now ready for group assignment.
Doh!
-
-
www.opb.org
-
In a press release distributed Saturday afternoon, Portland police said its officers did not intervene to stop the fighting because those involved “willingly” engaged, its forces were stretched too thin from policing 80+ nights of protests, and the bureau didn’t feel the clashes would last that long.
beginning of the breakdown
-
“Anyone who is involved in criminal behavior is subject to arrest and/or citation. Criminal conduct may also subject you to the use of force, including, but not limited to, crowd control agents and impact weapons. Stop participating in criminal behavior,” Portland police officials tweeted.
law intervention
-
-
social-media-ethics-automation.github.io
-
What actions would you want one or two steps away?
I think that any sort of contribution that users make to the platform that is visible to other users should be at least one or two steps away. This is important because it is a HUGE deterrent for bots and malicious content if their actions are not immediately reflected on the platform and this would reduce TOS-breaking activity on the platform.
-
What actions would you not allow users to do
I would never allow users to alter someone else's profile, including removing any of their followers or posts, since that would defeat the point of a social media platform. Additionally, I think that not even the company that runs a social media site should be able to do that, since it restricts the freedom of the platform's users.
-
-
learn.cantrill.io
-
Welcome back.
Over the next few lessons and the wider course, we'll be covering storage a lot.
And the exam expects you to know the appropriate type of storage to pick for a given situation.
So before we move on to the AWS specific storage lessons, I wanted to quickly do a refresher.
So let's get started.
Let's start by covering some key storage terms.
First is direct attached or local attached storage.
This is storage, so physical disks, which are connected directly to a device, so a laptop or a server.
In the context of EC2, this storage is directly connected to the EC2 hosts and it's called the instance store.
Directly attached storage is generally super fast because it's directly attached to the hardware, but it suffers from a number of problems.
If the disk fails, the storage can be lost.
If the hardware fails, the storage can be lost.
If an EC2 instance moves between hosts, the storage can be lost.
The alternative is network attached storage, which is where volumes are created and attached to a device over the network.
In on-premises environments, this uses protocols such as iSCSI or Fiber Channel.
In AWS, it uses a product called Elastic Block Store, known as EBS.
Network storage is generally highly resilient and is separate from the instance hardware, so the storage can survive issues which impact the EC2 host.
The next term is ephemeral storage and this is just temporary storage, storage which doesn't exist long-term, storage that you can't rely on to be persistent.
And persistent storage is the next point, storage which exists as its own thing.
It lives on past the lifetime of the device that it's attached to, in this case, EC2 instances.
So an example of ephemeral storage, so temporary storage, is the instance store, so the physical storage that's attached to an EC2 host.
This is ephemeral storage.
You can't rely on it, it's not persistent.
An example of persistent storage in AWS is the network attached storage delivered by EBS.
Remember that, it's important for the exam.
You will get questions testing your knowledge of which types of storage are ephemeral and persistent.
Okay, next I want to quickly step through the three main categories of storage available within AWS.
The category of storage defines how the storage is presented either to you or to a server and also what it can be used for.
Now the first type is block storage.
With block storage, you create a volume, for example inside EBS. The red object on the right is a volume of block storage, and a volume of block storage has a number of addressable blocks, shown as the cubes with the hash symbol.
It could be a small number of blocks or a huge number, that depends on the size of the volume, but there's no structure beyond that.
Block storage is just a collection of addressable blocks presented either logically as a volume or as a blank physical hard drive.
Generally when you present a unit of block storage to a server, so a physical disk or a volume, on top of this, the operating system creates a file system.
So it takes the raw block storage, it creates a file system on top of this, for example NTFS or EXT3 or many other different types of file systems, and then it mounts that, either as a C: drive in Windows operating systems or as the root volume in Linux.
Now block storage comes in the form of spinning hard disks or SSDs, so physical media that's block storage or delivered as a logical volume, which is itself backed by different types of physical storage, so hard disks or SSDs.
In the physical world, network attached storage systems or storage area network systems provide block storage over the network and a simple hard disk in a server is an example of physical block storage.
The key thing is that block storage has no inbuilt structure, it's just a collection of uniquely addressable blocks.
It's up to the operating system to create a file system and then to mount that file system and that can be used by the operating system.
So with block storage in AWS, you can mount a block storage volume, so you can mount an EBS volume and you can also boot off an EBS volume.
So most EC2 instances use an EBS volume as their boot volume and that's what stores the operating system, and that's what's used to boot the instance and start up that operating system.
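As a hedged illustration, here's a minimal boto3 sketch of launching an instance whose root device is an EBS volume; the AMI ID is a placeholder and AWS credentials are assumed to be configured:

```python
import boto3  # AWS SDK for Python

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch an instance whose root device is an EBS volume -- block storage
# that the operating system boots from, as described above.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/xvda",  # common root device name for Linux AMIs
            "Ebs": {"VolumeSize": 8, "VolumeType": "gp3"},  # 8 GiB gp3 volume
        }
    ],
)
```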
Now next up, we've got file storage and file storage in the on-premises world is provided by a file server.
It's provided as a ready-made file system with a structure that's already there.
So you can take a file system, you can browse to it, you can create folders and you can store files on there.
You access the files by knowing the folder structure, so traversing that structure, locating the file and requesting that file.
You cannot boot from file storage because the operating system doesn't have low-level access to the storage.
Instead of accessing tiny blocks and being able to create your own file system as the OS wants to, with file storage, you're given access to a file system normally over the network by another product.
So file storage in some cases can be mounted, but it cannot be used for booting.
So inside AWS, there are a number of file storage or file system-style products.
And in a lot of cases, these can be mounted into the file system of an operating system, but they can't be used to boot.
Now lastly, we have object storage and this is a very abstract system where you just store objects.
There is no structure, it's just a flat collection of objects.
And an object can be anything, it can have attached metadata, but to retrieve an object, you generally provide a key and in return for providing the key and requesting to get that object, you're provided with that object's value, which is the data back in return.
And objects can be anything, there can be binary data, they can be images, they can be movies, they can be cat pictures, like the one in the middle here that we've got of whiskers.
They can be any data, really, that's stored inside an object.
The key thing about object storage though is it is just flat storage.
It's flat, it doesn't have a structure.
You just have a container.
In AWS's case, it's S3 and inside that S3 bucket, you have objects.
But the benefits of object storage is that it's super scalable.
It can be accessed by thousands or millions of people simultaneously, but it's generally not mountable inside a file system and it's definitely not bootable.
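To make the key-to-value nature of object storage concrete, here's a minimal boto3 sketch; the bucket name and file are placeholders and credentials are assumed to be configured:

```python
import boto3  # AWS SDK for Python

s3 = boto3.client("s3")
BUCKET = "my-example-bucket"  # placeholder bucket name

# Store an object: the key identifies it, the body is its value.
with open("whiskers.jpg", "rb") as f:
    s3.put_object(Bucket=BUCKET, Key="cats/whiskers.jpg", Body=f)

# Retrieve it by key -- no folder traversal, no mounting, just key -> value.
data = s3.get_object(Bucket=BUCKET, Key="cats/whiskers.jpg")["Body"].read()
```

Note that the "cats/" prefix is just part of the key; S3 has no real folders, which is exactly the flat structure described above.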
So that's really important, you understand the differences between these three main types of storage.
So generally in the on-premises world and in AWS, if you want to utilize storage to boot from, it will be block storage.
If you want to utilize high performance storage inside an operating system, it will also be block storage.
If you want to share a file system across multiple different servers or clients or have them accessed by different services, that can often be file storage.
If you want to read and write object data at scale, so if you're making a web-scale application storing the biggest collection of cat pictures in the world, that is ideal for object storage because it is almost infinitely scalable.
Now let's talk about storage performance.
There are three terms which you'll see when anyone's referring to storage performance.
There's the IO or block size, the input output operations per second, pronounced IOPS, and then the throughput.
So the amount of data that can be transferred in a given second, generally expressed in megabytes per second.
Now these things cannot exist in isolation.
You can think of IOPS as the speed at which the engine of a race car runs at, the revolutions per second.
You can think of the IO or block size as the size of the wheels of the race car.
And then you can think of the throughput as the end speed of the race car.
So the engine of a race car spins at a certain number of revolutions. You might have a transmission that affects that slightly, but that power is delivered to the wheels, and based on their size, that causes you to go at a certain speed.
In theory in isolation, if you increase the size of the wheels or increase the revolutions of the engine, you would go faster.
For storage and the analogy I just provided, they're all related to each other.
The possible throughput a storage system can achieve is the IO or the block size multiplied by the IOPS.
As we talk about these three performance aspects, keep in mind that a physical storage device, a hard disk or an SSD, isn't the only thing involved in that chain of storage.
When you're reading or writing data, it starts with the application, then the operating system, then the storage subsystem, then the transport mechanism to get the data to the disk, the network or the local storage bus, such as SATA, and then the storage interface on the drive, the drive itself and the technology that the drive uses.
These are all components of that chain.
Any point in that chain can be a limiting factor and it's the lowest common denominator of that entire chain that controls the final performance.
Now IO or block size is the size of the blocks of data that you're writing to disk.
It's expressed in kilobytes or megabytes and it can range from pretty small sizes to pretty large sizes.
An application can choose to write or read data of any size and it will either take the block size as a minimum or that data can be split up over multiple blocks as it's written to disk.
If your storage block size is 16 kilobytes and you write 64 kilobytes of data, it will use four blocks.
Now IOPS measures the number of IO operations the storage system can support in a second.
So how many reads or writes that a disk or a storage system can accommodate in a second?
Using the car analogy, it's the revolutions per second that the engine can generate given its default wheel size.
Now certain media types are better at delivering high IOPS versus other media types and certain media types are better at delivering high throughput versus other media types.
If you use network storage versus local storage, the network can also impact how many IOPS can be delivered.
Higher latency between a device that uses network storage and the storage itself can massively impact how many operations you can do in a given second.
Now throughput is the rate at which a storage system can move data onto a particular piece of storage, either a physical disk or a volume.
Generally this is expressed in megabytes per second and it's related to the IO block size and the IOPS but it could have a limit of its own.
If you have a storage system which can store data using 16 kilobyte block sizes and if it can deliver 100 IOPS at that block size, then it can deliver a throughput of 1.6 megabytes per second.
If your application only stores data in four kilobyte chunks and the 100 IOPS is a maximum, then that means you can only achieve 400 kilobytes a second of throughput.
Achieving the maximum throughput relies on you using the right block size for that storage vendor and then maximizing the number of IOPS that you pump into that storage system.
So all of these things are related.
If you want to maximize your throughput, you need to use the right block size and then maximize the IOPS.
And if any one of these three is limited, it can impact the other two.
With the example on screen, if you were to change the 16 kilobyte block size to one meg, it might seem logical that you can now achieve 100 megabytes per second.
So one megabyte times 100 IOPS in a second, 100 megabytes a second, but that's not always how it works.
A system might have a throughput cap, for example, or as you increase the block size, the IOPS that you can achieve might decrease.
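To tie these three values together, here's a minimal Python sketch of the relationship just described; the 50 megabyte per second cap in the last line is a hypothetical system limit, not a real AWS figure:

```python
# Minimal model: throughput = IO/block size x IOPS, possibly capped.

def throughput_mb_s(block_size_kb, iops, cap_mb_s=None):
    result = block_size_kb * iops / 1000  # KB/s -> MB/s (decimal units)
    if cap_mb_s is not None:
        result = min(result, cap_mb_s)    # hypothetical throughput cap
    return result

print(throughput_mb_s(16, 100))                 # 1.6 MB/s, the example above
print(throughput_mb_s(4, 100))                  # 0.4 MB/s with smaller blocks
print(throughput_mb_s(1000, 100, cap_mb_s=50))  # capped at 50 MB/s, not 100
```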
As we talk about the different AWS types of storage, you'll become much more familiar with all of these different values and how they relate to each other.
So you'll start to understand the maximum IOPS and the maximum throughput levels that different types of storage in AWS can deliver.
And you might face exam questions where you need to answer what type of storage you will pick for a given level of performance demands.
So it's really important as we go through the next few lessons that you pay attention to these key levels that I'll highlight.
It might be, for example, that a certain type of storage can only achieve 1000 IOPS or 64000 IOPS.
Or it might be that certain types of storage cap at certain levels of throughput.
And you need to know those values for the exam so that you can know when to use a certain type of storage.
Now, this is a lot of theory and I'm talking in the abstract and I'm mindful that I don't want to make this boring and it probably won't sink in and you won't start to understand it until we focus on some AWS specifics.
So I am going to end this lesson here.
I wanted to give you the foundational understanding, but over the next few lessons, you'll start to be exposed to the different types of storage available in AWS.
And you will start to paint a picture of when to pick particular types of storage versus others.
So with that being said, that's everything I wanted to cover.
I know this has been abstract, but it will be useful if you do the rest of the lessons in this section.
I promise you this is going to be really valuable for the exam.
So thanks for watching.
Go ahead and complete the video.
When you're ready, you can join me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this brief demo lesson I want to give you some experience of working with both EC2 instance connect as well as connecting with a local SSH client.
Now these are both methods which are used for connecting to EC2 instances both with public IP version 4 addressing and IP version 6 addressing.
Now to get started we're going to need some infrastructure so make sure that you're logged in as the IAM admin user into the general AWS account which is the management account of the organization and as always you'll need the northern Virginia region selected.
Now in this demonstration you are going to be connecting to an EC2 instance using both instance connect and a local SSH client and to use a local SSH client you need a key pair.
So to create that let's move across to the EC2 console, scroll down on the left and select key pairs.
Now you might already have key pairs created from earlier in the course.
If you have one created which is called A4L which stands for Animals for Life then that's fine.
If you don't we're going to go ahead and create that one.
So click on create key pair and then under name we're going to use A4L.
Now if you're using Windows 10 or Mac OS or Linux then you can select the PEM file format.
If you're using Windows 8 or prior then you might need to use the putty application and to do that you need to select PPK.
But for this demonstration I'm going to assume that you're using the PEM format.
So again this is valid on Linux, Mac OS or any recent versions of Microsoft Windows.
So select PEM and then click on create key pair and when you do it's going to present you with a download.
It's going to want you to save this key pair to your local machine so go ahead and do that.
Once you've done that from the AWS console attached to this lesson is a one-click deployment link.
So I want you to go ahead and click that link.
That's going to move you to a quick create stack screen.
Everything should be pre-populated.
The stack name should be EC2 instance connect versus SSH.
The key name box should already be pre-populated with A4L which is a key that you just created or one which you already had.
Just move down to the very bottom, check the capabilities box and then click on create stack.
Now you're going to need this to be in a create complete state before you continue with the demo lesson.
So pause the video, wait for your stack to change to create complete and then you're good to continue.
Okay so this stacks now in a create complete status and we're good to continue.
Now if we click on the resources tab you'll see that this has created the standard animals for life VPC and then it's also created a public EC2 instance.
So this is an EC2 instance with a public IP version 4 address that we can use to connect to.
So that's what we're going to do.
So click on services and then select EC2 to move to the EC2 console.
Once you're there click on instances running and you should have a single EC2 instance A4L-publicEC2.
Now the two different ways which I want to demonstrate connecting to this instance in this demo lesson are using a local SSH client and key based authentication and then using the EC2 instance connect method.
And I want to show you how those differ and give you a few hints and tips which might come in useful for production usage and for the exams.
So if we just go ahead and select this instance and then click on the security tab you'll see that we have this single security group which is associated to this instance.
Now make sure the inbound rules is expanded and just have a look at what network traffic is allowed by this security group.
So the first line allows port 80 TCP which is HTTP and it allows that to connect to the instance from any source IP address specifically IP version 4.
We can tell it's IP version 4 because it's 0.0.0.0/0 which represents any IP version 4 address.
Next we allow port 22 using TCP and again using the IP version 4 any IP match and this is the entry which allows SSH to connect into this instance using IP version 4.
And then lastly we have a corresponding line which allows SSH using IP version 6.
So we're allowing any IP address to connect using SSH to this EC2 instance.
And so connecting to it using SSH is relatively simple.
We can right click on this instance and select connect and then choose SSH client and AWS provides us with all of the relevant information.
Now note how under step number three we have this line, which is "chmod 400 A4L.pem".
I want to demonstrate what happens if we attempt to connect without changing the permissions on this key file.
So to do that right at the bottom is an example command to connect to this instance.
So just copy that into your clipboard.
Then I want you to move to your command prompt or terminal.
In my case I'm running macOS so I'm using a terminal application.
Then you'll need to move to the folder where you have the PEM file stored or where you just downloaded it in one of the previous steps.
I'm going to paste in that command which I just copied onto my clipboard.
This is going to use the A4L.pem file as the identity information, and then it's going to connect to the instance using the ec2-user local Linux user.
And this is the host name that it's going to connect to.
So this is my EC2 instance.
Now I'm going to press enter and attempt that connection.
First it will ask me to verify the authenticity of this server.
So this is an added security method.
This is getting the fingerprint of this EC2 instance.
And it means that if we independently have a copy of this fingerprint, say from the administrator of the server that we're connecting to, then we can verify that we're connecting to that same server.
Because it's possible that somebody could exploit DNS and replace a legitimate DNS name with one which points at a non-legitimate server.
So that's important.
You can't always rely on a DNS name.
DNS names can be adjusted to point at different IP addresses.
So this fingerprint is a method that you can use to verify that you're actually connecting to the machine or the instance which you think you are.
Now in this case, because we've just created this EC2 instance, we can be relatively certain that it is valid.
So we're just going to go ahead and type yes and press enter.
And then it will try to connect to this instance.
Now immediately in my case, I got an error.
And this error is going to be similar if you're using macOS or Linux.
If you're using Windows, then there is a chance that you will get this error or won't.
And if you do get it, it might look slightly different.
But look for the keyword of permissions.
If you see that you have a permissions problem with your key, then that's the same error as I'm showing on my screen now.
Basically what this means is that the SSH client likes it when the permissions on these keys are restricted, restricted to only the user that they belong to.
Now in my case, the permissions on this file are 644.
And this represents my user, my group, and then everybody.
So this means this key is accessible to other users on my local system.
And that's far too open to be safe when using local SSH.
Now in Windows, you might have a similar situation where other users of your local machine have read permissions on this file.
What this error is telling us to do is to correct those permissions.
So if we go back to the AWS console, this is the command that we need to run to correct those permissions.
So copy that into your clipboard, move back to your terminal, paste that in, and press enter.
And that will correct those permissions.
Now under Windows, the process is that you need to edit the permissions of that file.
So right click properties and then edit the security.
And you need to remove any user access to that file other than your local user.
And that's the same process that we've just done here, only in Windows it's GUI based.
And under Mac OS or Linux, you use CHmod.
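If you'd rather script that permissions fix on macOS or Linux, here's a minimal Python equivalent, assuming the key file from this demo is in the current directory:

```python
import os
import stat

# Equivalent of "chmod 400 A4L.pem": owner read-only, no access for group
# or others. On Windows, os.chmod can only toggle a read-only flag, so use
# the GUI-based ACL edit described above instead.
os.chmod("A4L.pem", stat.S_IRUSR)
```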
So now that we've adjusted those permissions, if I use the up arrow to go back to the previous command and press enter, I'm able to connect to the EC2 instance.
And that's using the SSH client.
To use the SSH client, you need to have network connectivity to the EC2 instance.
And you need to have a valid SSH key pair.
So you need the key stored on your local machine.
Now this can present scalability issues, because if you need a large team to have access to this instance, then everybody in that team needs a copy of this key.
And so that does present admin problems if you're doing it at scale.
Now in addition to this, because you're connecting using an SSH client from your local machine, you need to make sure that the security group of this instance allows connections from your local machines.
So in this case, it allows connections from any source IP address into this instance.
And so that's valid for my IP address.
You need to make sure that the security group on whichever instance you're attempting to connect to allows your IP address as a minimum.
Now another method that you can use to connect to EC2 is EC2 instance connect.
Now to use that, we right click, we select connect, and we have a number of options at the top.
One of these is the SSH client that we've just used.
Another one is EC2 instance connect.
So if we select this option, we're able to connect to this instance.
It shows us the instance ID, it shows us the public IP address, and it shows us the user to connect into this instance with.
Now AWS attempt to automatically determine the correct user to use.
So when you launch an instance using one of the default AMIs, then it tends to pick correctly.
However, if you generate your own custom AMI, it often doesn't guess correctly.
And so you need to make sure that you're using the correct username when connecting using this method.
But once you've got the correct username, you can just go ahead and click on connect, and then it will open a connection to that instance using your web browser.
It'll take a few moments to connect, but once it has connected, you'll be placed at the terminal of this EC2 instance in exactly the same way as you were when using your local SSH.
Now one difference you might have noticed is that at no point were you prompted to provide a key.
When you're using EC2 instance connect, you're using AWS permissions to connect into this instance.
So because we're logged in using an admin user, we have those permissions, but you do need relevant permissions added to the identity of whoever is using instance connect to be able to connect into the instance.
So this is managed using identity policies on the user, the group or the role, which is attempting to access this instance.
Now one important element of this, which I want to demonstrate, if we go back to instances and we select the instance, click on security, and then click on the security group, which is associated with this instance.
Scroll down, click on edit inbound rules, and then I want you to locate the inbound rule for IP version 4 SSH, SSH TCP 22, and then it's using this catchall, so 0.0.0.0/0, which represents any IP version 4 address.
So go ahead and click on the cross to remove that, and then on that same line in the source area, click on this drop down and change it to my IP.
So this is my IP address, yours will be different, but then we're going to go ahead and save that rule.
Now just close down the tab that you've got connected to instance connect, move back to the terminal, and type exit to disconnect from that instance, and then just rerun the previous command.
So connect back to that instance using your local SSH client.
You'll find that it does reconnect because logically enough, this connection is coming from your local IP address, and you've changed the security group to allow connections from that address, so it makes sense that this connection still works.
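For reference, the console edit you just made corresponds roughly to this boto3 call; the security group ID and the source address are placeholders:

```python
import boto3  # AWS SDK for Python

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow SSH (TCP/22) from a single source address only -- the scripted
# equivalent of choosing "My IP" in the console.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.10/32"}],  # your public IP /32
    }],
)
```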
Moving back to the console though, let's go to the EC2 dashboard, go to running instances, right click on this instance, go to connect, select EC2 instance connect, and then click on connect and just observe what happens.
Now you might have spent a few minutes waiting for this to connect, and you'll note that it doesn't connect.
Now this might seem strange at this point because you're connecting from a web browser, which is running on your local machine.
So it makes sense that if you can connect from your local SSH client, which is also running on your local machine, you should be able to connect using EC2 instance connect.
Now this might seem logical, but the crucial thing about EC2 instance connect is that it's not actually originating connections from your local machine.
What's happening is that you're making a connection through to AWS, and then once your connection arrives at AWS, the EC2 instance connect service is then connecting to the EC2 instance.
Now what you've just done is you've edited the security group of this instance to only allow your local IP address to connect, and this means that the EC2 instance connect service can no longer connect to this instance.
So what you need in order to allow the EC2 instance connect service to work is one of two things: you can allow every source IP address, so 0.0.0.0/0, but of course that's bad practice for production usage.
It's much more secure if you go to this URL, and I'll make sure that I include this attached to this lesson.
This is a list of all of the different IP ranges which AWS use for their services.
Now because I have this open in Firefox, it might look a little bit different.
If I just go to raw data, that might look the same as your browser.
If you're using Firefox, you have the ability to open this as a JSON document.
Both of them show the same data, but when it's JSON, you have the ability to collapse these individual components.
But the main point about this document is that this contains a list of all of the different IP addresses which are used in each different region for each different service.
So if we wanted to allow EC2 instance connect for a particular region, then we might search for instance, locate any of these items which have EC2 instance connect as the service, and then just move through them looking for the one which matches the region that we're using.
Now in my case, I'm using US East One, so I'd scroll through all of these IP address ranges looking for US East One.
There we go, I've located it.
It's using this IP address range.
So I might copy this into my clipboard, move back to the EC2 console, select the instance, click on security, select the security group of this instance, scroll down, edit the inbound rules, remove the entry for my IP address, paste in the entry for the EC2 instance connect service, and then save that rule.
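Rather than scrolling through the raw JSON by hand, you could script this lookup. Here's a minimal sketch, assuming the published structure of ip-ranges.json and the EC2_INSTANCE_CONNECT service name:

```python
import json
import urllib.request

URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

with urllib.request.urlopen(URL) as response:
    ranges = json.load(response)

# Each prefix entry carries an ip_prefix, a region and a service name.
for prefix in ranges["prefixes"]:
    if (prefix["service"] == "EC2_INSTANCE_CONNECT"
            and prefix["region"] == "us-east-1"):
        print(prefix["ip_prefix"])  # the range to allow in the security group
```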
And now what you'll find if you move back to your terminal and try to interact with this instance, you might be able to initially because the connection is still established, but if you exit and then attempt to reconnect, this time you'll see that you won't be able to connect because now your local IP address is no longer allowed to connect to this instance.
However, if you move back to the AWS console, go to the dashboard and then instance is running, right click on the instance and put connect, select instance connect and then click on connect.
Now you'll be allowed to connect using EC2 instance connect.
And the reason for that just to reiterate is that you've just edited the security group of this EC2 instance and you've allowed the IP address range of the EC2 instance connect service.
So now you can connect to this instance and you could do so at scale using AWS permissions.
So I just wanted to demonstrate how both of those connection methods work, both instance connect and using a local SSH client.
That's everything I wanted to cover.
So just go ahead and move back to the CloudFormation console, select this stack that you created using the one click deployment, click on delete and then confirm that process.
And that will clear up all of the infrastructure that you've used in this demo lesson.
At this point though, that's everything I wanted to cover.
So go ahead, complete this video and when you're ready, I'll look forward to you joining me in the next.
-
-
klik.gr klik.gr
-
Egypt also closed its border with Gaza
Half true.
Egypt usually closed the border with Gaza either under pressure from the USA and Israel, or after armed attacks by Hamas. From 2021, Egypt had opened it permanently.
But control over what goes in was always exercised by the Israelis (in agreement with Egypt), and they ordered the closures, in application of the Camp David accords (1979):
The Philadelphi Accord between Israel and Egypt, based on the principles of the 1979 peace treaty, turned over border control to Egypt, while the supply of arms to the Palestinian Authority was subject to Israeli consent.
Under the Agreed Principles for Rafah Crossing, part of the Agreement on Movement and Access (AMA) of 15 November 2005, EUBAM was responsible for monitoring the Border Crossing. The agreement ensured Israel authority to dispute entrance by any person.[14]
...after Hamas' takeover of the Gaza Strip (2007) it was closed permanently except for infrequent limited openings by Egypt.
Since 2024, control of the crossing has been held exclusively by Israeli troops, and it is closed even to humanitarian aid.
-
despite the constant provocations from Gaza
Gaza is an open-air prison where imports of various kinds of medicines and everyday materials are forbidden, as is fishing, with minimal water, minimal food, and so on, while the settlers in neighbouring towns or on boats enjoy the genocidal spectacle every time Gaza is bombed ("Mowing the grass").
These are not "provocations"; it is resistance against the occupier.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
Now this is an overview of all of the different categories of instances, and then for each category, the most popular or current generation types that are available.
Now I created this with the hope that it will help you retain this information.
So this is the type of thing that I would generally print out or keep an electronic copy of and refer to constantly as we go through the course.
By doing so, whenever we talk about a particular size, type and generation of instance, if you refer to the details and notes column, you'll be able to start making a mental association between the type and what additional features you get.
So for example, if we look at the general purpose category, we've got three main entries in that category.
We've got the A1 and M6G types, and these are a specific type of instance that are based on ARM processors.
So the A1 uses the AWS-designed Graviton ARM processor, and the M6G uses the generation 2, so Graviton 2, ARM-based processor.
And ARM-based processors, as long as you've got operating systems and applications that can run under that architecture, can be very efficient.
So you can use smaller instances with lower cost and achieve really great levels of performance.
The T3 and T3A instance types, they're burstable instances.
So the assumption with those type of instances is that your normal CPU load will be fairly low, and you have an allocation of burst credits that allows you to burst up to higher levels occasionally, but then return to that normally low CPU level.
So this type of instance, T3 and T3A, are really good for machines which have low normal loads with occasional bursts, and they're a lot cheaper than the other type of general purpose instances.
Then we've got M5, M5A and M5N.
So M5 is your starting point, M5A uses the AMD architecture, whereas normal M5s just use Intel, and these are your steady state general instances.
So if you don't have a burst requirement, if you're running a certain type of application server, which requires consistent steady state CPU, then you might use the M5 type.
So maybe a heavily used exchange email server that runs normally at 60% CPU utilization, that might be a good candidate for M5.
But if you've got a domain controller or an email relay server that normally runs maybe at 2%, 3% with occasional burst, up to 20% or 30% or 40%, then you might want to run a T type instance.
Now, not to go through all of these in detail, we've got the compute optimized category with the C5 and C5N, and these are aimed at media encoding, scientific modeling, gaming servers and general machine learning.
For memory optimized, we start off with R5 and R5A.
If you want to use really large in-memory applications, you've got the X1 and the X1E.
If you want the highest memory of all, you've got the high memory series, the u- instances.
You've got the Z1D, which comes with large memory and NVMe storage.
Then accelerated computing, these are the ones that come with additional capabilities.
So the P3 type and G4 type, those come with different types of GPUs.
So the P type is great for parallel processing and machine learning.
The G type is kind of okay for machine learning and much better for graphics intensive requirements.
You've got the F1 type, which comes with field programmable gate arrays, which is great for genomics, financial analysis and big data, anything where you want to program the hardware to do specific tasks.
You've got the Inf1 type, which is relatively new, custom designed for machine learning, so recommendations, forecasting, analysis, voice or conversation, anything machine learning related, look at using that type.
And then there's storage optimized.
These come with high-speed local storage, and depending on the type you pick, you can get high throughput or maximum IO or somewhere in between.
So keep this somewhere safe, printed out, keep it electronically, and as we go through the course and use the different type of instances, refer to this and start making the mental association between what a category is, what instance types are in that category, and then what benefits they provide.
Now again, don't worry about memorizing all of this in the exam, you don't need it, I'll draw out anything specific that you need as we go through the course, but just try to get a feel for which letters are in which categories.
If that's the minimum that you can do, if I can give you a letter like the T type, or the C type, or the R type, if you can try and understand the mental association which category that goes into, that will be a great step.
And there are ways we can do this, we can make these associations: C stands for compute, R stands for RAM, which is a way of describing memory, we've got I which stands for IO, D which stands for dense storage, G which stands for GPU, P which stands for parallel processing. There are lots of different mind tricks and mental associations that we can make, and as we go through the course, I'll try and help you with that, but as a minimum, either print this out or store it somewhere safe, and refer to it as we go through the course.
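One simple way to drill those associations is to write them down as a lookup table. Here's a small sketch using only the mnemonics mentioned in this lesson; it's a memory aid, not an exhaustive or official mapping:

```python
# Family-letter mnemonics from this lesson -- not an exhaustive list.
FAMILY_HINTS = {
    "T": "burstable general purpose",
    "M": "steady-state general purpose",
    "C": "compute optimized",
    "R": "RAM, i.e. memory optimized",
    "I": "IO, i.e. storage optimized",
    "D": "dense storage",
    "G": "GPU",
    "P": "parallel processing",
}

print(FAMILY_HINTS["C"])  # compute optimized
```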
The key thing to understand though is how picking an instance type is specific to a particular type of computing scenario.
So if you've got an application that requires maximum CPU, look at compute optimized, if you need memory, look at memory optimized, if you've got a specific type of acceleration, look at accelerated computing, start off in the general purpose instance types, and then go out from there as you've got a particular requirement to.
Now before we finish up, I did want to demonstrate two really useful sites that I refer to constantly, I'll include links to both of these in a lesson text.
The first one is the Amazon documentation site for Amazon EC2 instance types; this gives you a full overview of all the different categories of EC2 instances.
You can look in a category, a particular family and generation of instance, so T3, and then in there you can see the use cases that this is suited to, any particular features, and then a list of each instance size and exactly what allocation of resources that you get and then any particular notes that you need to be aware of.
So this is definitely something you should refer to constantly, especially if you're selecting instances to use for production usage.
This other website is something similar, it's ec2instances.info, and it provides a really great sortable list which can be filtered and adjusted with different attributes and columns, which give you an overview of exactly what each instance provides.
So you can either search for a particular type of instance, maybe a T3, and then see all the different sizes and capabilities of T3, as well as the different costings for those instance types, so Linux on-demand, Linux reserved, Windows on-demand, Windows reserved, and we'll talk about what this reserved column is later in the course.
You can also click on columns and show different data for these different instance types, so if I scroll down you can see which offer EBS optimization, you can see which operating systems these different instances are compatible with, you've got a lot of options to manipulate this data.
I find this to be one of the most useful third-party sites, I always refer back to this when I'm doing any consultancy, so this is a really great site.
And again, it will go into the lesson text, so definitely, as you're going through the course, experiment and have a play around with this data, and just start to get familiar with the different capabilities of the different types of EC2 instances.
With that being said, that's everything I wanted to cover in this lesson. You've done really well; there's been a lot of theory, but it will come in handy in the exam and in real-world usage.
So go ahead, complete this video, and when you're ready, you can join me in the next.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
“Design justice is a framework for analysis of how design distributes benefits and burdens between various groups of people. Design justice focuses explicitly on the ways that design reproduces and/or challenges the matrix of domination (white supremacy, heteropatriarchy, capitalism, ableism, settler colonialism, and other forms of structural inequality).”
Although I hadn't heard this term before, I believe in this idea wholeheartedly. It's hard to truly understand someone's perspective, and no one understands a perspective better than someone living it. It is important to receive feedback from all different types of people to create the best product possible.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back.
In this lesson, I'm going to talk about the various different types of EC2 instances.
I've described an EC2 instance before as an operating system plus an allocation of resources.
Well, by selecting an instance type and size, you have granular control over what that resource configuration is. Picking appropriate resource amounts and instance capabilities can mean the difference between a well-performing system and one which causes a bad customer experience.
Don't expect this lesson though to give you all the answers.
Understanding instance types is something which will guide your decision-making process.
Given a situation, two AWS people might select two different instance types for the same implementation.
The key takeaway from this lesson will be that you don't make any bad decisions and you have an awareness of the strengths and weaknesses of the different types of instances.
Now, I've seen this occasionally feature on the exam in a form where you're presented with a performance problem and one answer is to change the instance type.
So, at a minimum with this lesson, I'd like you to be able to answer that type of question.
So, know for example whether a C type instance is better in a certain situation than an M type instance.
If that's what I want to achieve, we've got a lot to get through, so let's get started.
At a really high level, when you choose an EC2 instance type, you're doing so to influence a few different things.
First, logically, the raw amount of resources that you get.
So, that's virtual CPU, memory, local storage capacity and the type of that storage.
But beyond the raw amount, it's also the ratios.
Some type of instances give you more of one and less of the other.
Instance types suited to compute applications, for instance, might give you more CPU and less memory for a given dollar spend.
An instance designed for in-memory caching might be the reverse.
They prioritize memory and give you lots of that for every dollar that you spend.
Picking instance types and sizes, of course, influences the raw amount that you pay per minute.
So, you need to keep that in mind.
I'm going to demonstrate a number of tools that will help you visualize how much something's going to cost, as well as what features you get with it.
So, look at that at the end of the lesson.
The instance type also influences the amount of network bandwidth for storage and data networking capability that you get.
So, this is really important.
When we move on to talking about elastic block store, for example, that's a network-based storage product in AWS.
And so, for certain situations, you might provision volumes with a really high level of performance.
But if you don't select an instance appropriately and pick something that doesn't provide enough storage network bandwidth, then the instance itself will be the limiting factor.
So, you need to make sure you're aware of the different types of performance that you'll get from the different instances.
Picking an instance type also influences the architecture of the hardware that the instance has run on and potentially the vendor.
So, you might be looking at the difference between an ARM architecture or an X86 architecture.
You might be picking an instance type that provides Intel-based CPUs or AMD CPUs.
Instance type selection can influence in a very nuanced and granular way exactly what hardware you get access to.
Picking an appropriate type of instance also influences any additional features and capabilities that you get with that instance.
And this might be things such as GPUs for graphics processing or FPGAs, which are field-programmable gate arrays.
You can think of these as a special type of compute hardware that you can program to perform exactly how you want.
So, it's a super customizable piece of compute hardware.
And so, certain types of instances come up with these additional capabilities.
So, it might come with an allocation of GPUs or it might come with a certain capacity of FPGAs.
And some instance types don't come with either.
You need to learn which to pick for a given type of workload.
EC2 instances are grouped into five main categories, which help you select an instance type based on a certain type of workload.
So, we've got five main categories.
The first is general purpose.
And this is and always should be your starting point.
Instances which fall into this category are designed for your default steady-state workloads.
They've got fairly even resource ratios, so generally assigned in an appropriate way.
So, for a given type of workload, you get an appropriate amount of CPU and a certain amount of memory which matches that amount of CPU.
So, instances in the general purpose category should be used as your default and you only move away from that if you've got a specific workload requirement.
We've also got the compute optimized category and instances that are in this category are designed for media processing, high-performance computing, scientific modeling, gaming, machine learning.
And they provide access to the latest high-performance CPUs.
And they generally offer a ratio where more CPU is offered than memory for a given price point.
The memory optimized category is logically the inverse of this, so offering large memory allocations for a given dollar or CPU amount.
This category is ideal for applications which need to work with large in-memory data sets, maybe in-memory caching or some other specific types of database workloads.
The accelerated computing category is where these additional capabilities come into play, such as dedicated GPUs for high-scale parallel processing and modeling, or the custom programmable hardware, such as FPGAs.
Now, these are niche, but if you're in one of the situations where you need them, then you know you need them.
So, when you've got specific niche requirements, the instance type you need to select is often in the accelerated computing category.
Finally, there's the storage optimized category and instances in this category generally provide large amounts of superfast local storage, either designed for high sequential transfer rates or to provide massive amounts of IO operations per second.
And this category is great for applications with serious demands on sequential and random IO, so things like data warehousing, Elasticsearch, and certain types of analytic workloads.
Now, one of the most confusing things about EC2 is the naming scheme of the instance types.
This is an example of a type of EC2 instance.
While it might initially look frustrating, once you understand it, it's not that difficult to understand.
So, while our friend Bob is a bit frustrated at the difficulty of understanding exactly what this means, by the end of this part of the lesson, you will understand how to decode EC2 instance types.
The whole thing, end to end, so R5dn.8xlarge, is known as the instance type.
The whole thing is the instance type.
If a member of your operations team asks you what instance you need or what instance type you need, if you use the full instance type, you unambiguously communicate exactly what you need.
It's a mouthful to say R5dn.8xlarge, but it's precise, and we like precision.
So, when in doubt, always give the full instance type as an answer to any question.
The letter at the start is the instance family.
Now, there are lots of examples of this, the T family, the M family, the I family, and the R family.
There's lots more, but each of these are designed for a specific type or types of computing.
Nobody expects you to remember all the details of all of these different families, but if you can start to try to remember the important ones, I'll mention these as we go through the course, then it will put you in a great position in the exam.
If you do have any questions where you need to identify if an instance type is used appropriately or not, as we go through the course and I give demonstrations which might be using different instance families, I will be giving you an overview of their strengths and their weaknesses.
The next part is the generation.
So, the number five in this case is the generation.
AWS iterate often.
So, if you see instance type starting with R5 or C4 as two examples, the C or the R, as you now know, is the instance family and the number is the generation.
So, the C4, for example, is the fourth generation of the C family of instance.
That might be the current generation, but then AWS come along and replace it with the C5, which is generation five, the fifth generation, which might bring with it better hardware and better price to performance.
Generally, with AWS, always select the most recent generation.
It almost always provides the best price to performance option.
The only real reasons not to immediately use the latest generation are if it's not available in your particular region, or if your business has fairly rigorous test processes that need to be completed before you get approval to use a particular new type of instance.
So, that's the R part covered, which is the family, and the 5 part covered, which is the generation.
Now, across to the other side, we've got the size.
So, in this case, 8xlarge: this is the instance size.
Within a family and a generation, there are always multiple sizes of that family and generation, which determine how much memory and how much CPU the instance is allocated with.
Now, there's a logical and often linear relationship between these sizes.
So, depending on the family and generation, the starting point can be anywhere as small as the nano.
After the nano, there's micro, then small, then medium, large, xlarge, 2xlarge, 4xlarge, 8xlarge, and so on.
Now, keep in mind, there's often a price premium towards the higher end.
So, it's often better to scale systems by using a larger number of smaller instance sizes.
But more on that later when we talk about high availability and scaling.
Just be aware, as far as this section of the course goes, that for a given instance family and generation, you're able to select from multiple different sizes.
Now, the bit which is in the middle, this can vary.
There might be no letters between the generation and size, but there's often a collection of letters which denote additional capabilities.
Common examples include a lowercase a, which signifies an AMD CPU; a lowercase d, which signifies NVMe storage; a lowercase n, which signifies network optimized; and a lowercase e, for extra capacity, which could be RAM or storage.
So, these additional capabilities are not things that you need to memorize, but as you get experience using AWS, you should definitely try to mentally associate them in your mind with what extra capabilities they provide.
Because time is limited in an exam, the more that you can commit to memory and know instinctively, the better off you'll be.
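To see the whole naming scheme end to end, here's a hedged Python sketch that splits an instance type string into the parts just described; it only knows the pattern itself, not whether a given combination actually exists:

```python
import re

# <family letters><generation digits><capability letters>.<size>
PATTERN = re.compile(r"^([a-z]+?)(\d+)([a-z-]*)\.(.+)$")

def decode_instance_type(instance_type):
    match = PATTERN.match(instance_type.lower())
    if not match:
        raise ValueError(f"Unrecognised instance type: {instance_type}")
    family, generation, capabilities, size = match.groups()
    return {"family": family, "generation": generation,
            "capabilities": capabilities, "size": size}

print(decode_instance_type("R5dn.8xlarge"))
# {'family': 'r', 'generation': '5', 'capabilities': 'dn', 'size': '8xlarge'}
```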
Okay, so this is the end of part one of this lesson.
It was getting a little bit on the long side, and so I wanted to add a break.
It's an opportunity just to take a rest or grab a coffee.
Part two will be continuing immediately from the end of part one.
So, go ahead, complete the video, and when you're ready, join me in part two.
-
-
www.biorxiv.org www.biorxiv.org
-
eLife Assessment
The study from Frank and colleagues reports potentially important cryo-EM observations of mouse glutamatergic synapses isolated from adult mammalian brains. The authors used a combination of mouse genetics to generate PSD95-GFP labeling in vivo, a rapid synaptosome isolation and cryo-protectant strategy, and cryogenic correlated light-electron microscopy (cryoCLEM) to record tomograms of synapses, which together provide convincing support for their conclusions. Controversially, the authors report that forebrain glutamatergic synapses do not contain postsynaptic "densities" (PSD), a defining feature of synapse structure identified in chemically-fixed and resin-embedded brain samples. The work questions a long-standing concept in neurobiology and is primarily of interest to specialists in synaptic structure and function.
-
-
engl252fa24.commons.gc.cuny.edu engl252fa24.commons.gc.cuny.edu
-
Thusly, the politics–often explicitly stated by Butler’s characters or embedded within Mutu’s visual fields–are irreducible to the language of citizenship, cultural particularity, and national governance as we currently conceive of it.
This sets up Frazier's argument that the politics represented in both works are more complicated and can't be "simply" summarized using familiar terms, which I really agree with. Butler's characters express these ideas directly. In Parable of the Sower, Lauren, with her very outspoken nature on serious real-life issues, and through her own philosophy of Earthseed, lets Butler create a new perspective for understanding survival, leadership, and responsibility that goes beyond the more conventional political discourse, both during the period it was published and currently. Lauren's voice in the novel acts as a critique of existing political systems while offering an alternative route that reflects an intense comprehension of power, government, and allegiance grounded in adaptability and inclusivity. Mutu's visual work, meanwhile, conveys these ideas more implicitly, really resisting the idea of being put in a box.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back.
In this lesson, now that we've covered virtualization at a high level, I want to focus on the architecture of the EC2 product in more detail.
EC2 is one of the services you'll use most often in AWS, and it's one which features in a lot of exam questions.
So let's get started.
First thing, let's cover some key, high level architectural points about EC2.
EC2 instances are virtual machines, so this means an operating system plus an allocation of resources such as virtual CPU, memory, potentially some local storage, maybe some network storage, and access to other hardware such as networking and graphics processing units.
EC2 instances run on EC2 hosts, and these are physical server hardware which AWS manages.
These hosts are either shared hosts or dedicated hosts.
Shared hosts are hosts which are shared across different AWS customers, so you don't get any ownership of the hardware and you pay for the individual instances based on how long you run them for and what resources they have allocated.
It's important to understand, though, that customers using shared hosts are isolated from each other, so there's no visibility of it being shared.
There's no interaction between different customers, even if you're using the same shared host.
And shared hosts are the default.
With dedicated hosts, you're paying for the entire host, not the instances which run on it.
It's yours.
It's dedicated to your account, and you don't have to share it with any other customers.
So if you pay for a dedicated host, you pay for that entire host, you don't pay for any instances running on it, and you don't share it with other AWS customers.
EC2 is an availability zone resilient service.
The reason for this is that hosts themselves run inside a single availability zone.
So if that availability zone fails, the hosts inside that availability zone could fail, and any instances running on any hosts that fail will themselves fail.
So as a solutions architect, you have to assume if an AZ fails, then at least some and probably all of the instances that are running inside that availability zone will also fail or be heavily impacted.
Now let's look at how this looks visually.
So this is a simplification of the US East One region.
I've only got two AZs represented, AZA and AZB.
And in AZA, I've represented that I've got two subnets, subnet A and subnet B.
Now inside each of these availability zones is an EC2 host.
Now these EC2 hosts, they run within a single AZ.
I'm going to keep repeating that because it's critical for the exam and for how you think about EC2 in the exam.
Keep thinking about it being an AZ resilient service.
If you see EC2 mentioned in an exam, see if you can locate the availability zone details because that might factor into the correct answer.
Now EC2 hosts have some local hardware, logically CPU and memory, which you should be aware of, but also they have some local storage called the instance store.
The instance store is temporary.
If an instance is running on a particular host, depending on the type of the instance, it might be able to utilize this instance store.
But if the instance moves off this host to another one, then that storage is lost.
And they also have two types of networking, storage networking and data networking.
When instances are provisioned into a specific subnet within a VPC, what's actually happening is that a primary elastic network interface is provisioned in a subnet, which maps to the physical hardware on the EC2 host.
Remember, subnets are also in one specific availability zone.
Instances can have multiple network interfaces, even in different subnets, as long as they're in the same availability zone.
Everything about EC2 is focused around this architecture, the fact that it runs in one specific availability zone.
Now EC2 can make use of remote storage so an EC2 host can connect to the elastic block store, which is known as EBS.
The elastic block store service also runs inside a specific availability zone.
So the service running inside availability zone A is different than the one running inside availability zone B, and you can't access them cross zone.
EBS lets you allocate volumes, and volumes are portions of persistent storage, and these can be allocated to instances in the same availability zone.
So again, it's another area where the availability zone matters.
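Here's a hedged boto3 sketch of how that zone-locking shows up in practice: the availability zone is a required parameter when the volume is created; the zone and size below are placeholders:

```python
import boto3  # AWS SDK for Python

ec2 = boto3.client("ec2", region_name="us-east-1")

# An EBS volume is created in one specific availability zone, and it can
# only ever be attached to instances in that same zone.
ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder zone
    Size=10,                        # GiB
    VolumeType="gp3",
)
```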
What I'm trying to do by keeping repeating availability zone over and over again is to paint a picture of a service which is very reliant on the availability zone that it's running in.
The host is in an availability zone.
The network is per availability zone.
The persistent storage is per availability zone.
If an availability zone in AWS experiences major issues, it impacts all of those things.
Now an instance runs on a specific host, and if you restart the instance, it will stay on that host.
Instances stay on a host until one of two things happen.
Firstly, the host fails or is taken down for maintenance for some reason by AWS.
Or secondly, if an instance is stopped and then started, and that's different than just restarting, so I'm focusing on an instance being stopped and then being started, so not just a restart.
If either of those things happen, then an instance will be relocated to another host, but that host will also be in the same availability zone.
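As a minimal sketch with a hypothetical instance ID, the stop-then-start sequence looks like this in Python and boto3:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"  # hypothetical instance

# A stop followed by a start (unlike a reboot) can move the instance
# to a different host, but always within the same availability zone.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
ec2.start_instances(InstanceIds=[instance_id])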
Instances cannot natively move between availability zones.
Everything about them, their hardware, networking and storage is locked inside one specific availability zone.
Now there are ways you can do a migration, but it essentially means taking a copy of an instance and creating a brand new one in a different availability zone, and I'll be covering that later in this section where I talk about snapshots and AMIs.
What you can never do is connect network interfaces or EBS storage located in one availability zone to an EC2 instance located in another.
EC2 and EBS are both availability zone services.
They're isolated.
You cannot cross AZs with instances or with EBS volumes.
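Here's a minimal boto3 sketch of that constraint, with hypothetical IDs: the volume is created in one named availability zone, and attaching it only works for an instance in that same zone:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# EBS volumes are created in one specific availability zone.
vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=10, VolumeType="gp3")
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

# The attach call only succeeds if the instance is in the same AZ.
ec2.attach_volume(
    VolumeId=vol["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance in us-east-1a
    Device="/dev/sdf",
)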
Now instances running on an EC2 host share the resources of that host.
And instances of different sizes can share a host, but generally instances of the same type and generation will occupy the same host.
And I'll be talking in much more detail about instance types and sizes and generations in a lesson that's coming up very soon.
But when you think about an EC2 host, think that it's from a certain year and includes a certain class of processor and a certain type of memory and a certain type and configuration of storage.
And instances are also created as different generations, different versions which use specific types of CPU, memory and storage.
So it's logical that if you provision two different types of instances, they may well end up on two different types of hosts.
So a host generally has lots of different instances from different customers of the same type, but different sizes.
So before we finish up this lesson, I want to answer a question.
That question is what's EC2 good for?
So what types of situations might you use EC2 for?
And this is equally valuable when you're evaluating a technical architecture while you're answering questions in the exam.
So first, EC2 is great when you've got a traditional OS and application compute need.
So if you've got an application that needs to run on a certain operating system, with a certain runtime and certain configuration, maybe because your internal technical staff are used to that configuration, or maybe because your vendor has a certain set of support requirements.
EC2 is a perfect use case for this type of scenario.
And it's also great for any long running compute needs.
There are lots of other services inside AWS that provide compute services, but many of these have got runtime limits.
So you can't leave these things running consistently for one year or two years.
With EC2, it's designed for persistent, long running compute requirements.
So if you have an application that runs constantly 24/7, 365, and needs to be running on a normal operating system, Linux or Windows, then EC2 is the default and obvious choice for this.
If you have any server-style applications, so traditional applications which expect to be running in an operating system, waiting for incoming connections, then again, EC2 is a perfect service for this.
And it's perfect for any applications or services that have burst requirements or steady-state requirements.
There are different types of EC2 instances, which are suitable for low levels of normal loads with occasional bursts, as well as steady state load.
So again, if your application needs an operating system, and it has either bursty needs or a consistent steady-state load, then EC2 should be the first thing that you review.
EC2 is also great for monolithic application stacks.
So if your monolithic application requires certain components, a stack, maybe a database, maybe some middleware, maybe other runtime based components, and especially if it needs to be running on a traditional operating system, EC2 should be the first thing that you look at.
And EC2 is also ideally suited for migrating application workloads, so application workloads, which expect a traditional virtual machine or server style environment, or if you're performing disaster recovery.
So if you have existing traditional systems which run on virtual servers, and you want to provision a disaster recovery environment, then EC2 is perfect for that.
In general, EC2 tends to be the default compute service within AWS.
There are lots of niche requirements that you might have.
And if you do have those, there are other compute services such as the elastic container service or Lambda.
But generally, if you've got traditional style workloads, or you're looking for something that's consistent, or if it requires an operating system, or if it's monolithic, or if you migrated into AWS, then EC2 is a great default first option.
Now in this section of the course, I'm covering the basic architectural components of EC2.
So I'm gonna be introducing the basics and let you get some exposure to it, and I'm gonna be teaching you all the things that you'll need for the exam.
-
-
docdrop.org docdrop.org
-
Lost really has two disparate meanings. Losing things is about the familiar falling away, getting lost is about the unfamiliar appearing.
I like this statement; I never thought of it this way. That losing and getting lost are very different. Losing is mostly negative, but getting lost could result in great outcomes.
-
to be lost is to be fully present, and to be fully present is to be capable of being in uncertainty and mystery.
"Lost" is not necessarily negative. Being lost makes you fully present because you are trying to find a way from the lost state. It becomes a part of discovery.
-
-
dev.omeka.org dev.omeka.org
-
News
Fundamentally, the content space is just an HTML block (or multiple), yes?
-
-
dev.omeka.org dev.omeka.org
-
Places
Yale ppl: Do we want any suggestions pre-populated? Omeka: Can this page be brought up with the map pre-loaded to a particular place?
-
-
dev.omeka.org dev.omeka.org
-
About
Ignorant Q: Is this literally all the structure of the page, or are you assuming we will add any other sections we want?
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this first lesson of the EC2 section of the course, I want to cover the basics of virtualization as briefly as possible.
EC2 provides virtualization as a service.
It's an infrastructure as a service, or IaaS, product.
To understand all the value it provides and why some of the features work the way that they do, understanding the fundamentals of virtualization is essential.
So that's what this lesson aims to do.
Now, I want to be super clear about one thing.
This is an introduction level lesson.
There's a lot more to virtualization than I can talk about in this brief lesson.
This lesson is just enough to get you started, but I will include a lot of links in the lesson description if you want to learn more.
So let's get started.
We do have a fair amount of theory to get through, but I promise when it comes to understanding how EC2 actually works, this lesson will be really beneficial.
Virtualization is the process of running more than one operating system on a piece of physical hardware, a server.
Before virtualization, the architecture looked something like this.
A server had a collection of physical resources, so CPU and memory, network cards and maybe other logical devices such as storage.
And on top of this runs a special piece of software known as an operating system.
That operating system runs with a special level of access to the hardware.
It runs in privileged mode, or more specifically, a small part of the operating system runs in privileged mode, known as the kernel.
The kernel is the only part of the operating system, the only piece of software on the server that's able to directly interact with the hardware.
Some of the operating system doesn't need this privileged level of access, but some of it does.
Now, the operating system can allow other software to run such as applications, but these run in user mode or unprivileged mode.
They cannot directly interact with the hardware, they have to go through the operating system.
So if Bob or Julie are attempting to do something with an application, which needs to use the system hardware, that application needs to go through the operating system.
It needs to make a system call.
If anything but the operating system attempts to make a privileged call, so tries to interact with the hardware directly, the system will detect it and cause a system-wide error, generally crashing the whole system or at minimum the application.
This is how it works without virtualization.
Virtualization is how this is changed into this.
A single piece of hardware running multiple operating systems.
Each operating system is separate, each runs its own applications.
But there's a problem: a CPU, at least at this point in time, could only have one thing running as privileged.
A privileged process, remember, has direct access to the hardware.
And all of these operating systems, if they're running in their unmodified state, they expect to be running on their own in a privileged state.
They contain privileged instructions.
And so trying to run three or four or more different operating systems in this way will cause system crashes.
Virtualization was created as a solution to this problem, allowing multiple different privileged applications to run on the same hardware.
But initially, virtualization was really inefficient, because the hardware wasn't aware of it.
Virtualization had to be done in software, and it was done in one of two ways.
The first type was known as emulated virtualization or software virtualization.
With this method, a host operating system still ran on the hardware and included an additional capability known as a hypervisor.
This software ran in privileged mode, and so it had full access to the hardware on the host server.
Now, the multiple other operating systems, which we'll now refer to as guest operating systems, were each wrapped in a container of sorts called a virtual machine.
Each virtual machine was an unmodified operating system, such as Windows or Linux, with a virtual allocation of resources such as CPU, memory and local disk space.
Virtual machines also had devices mapped into them, such as network cards, graphics cards and other local devices such as storage.
The guest operating systems believed these to be real.
They had drivers installed, just like physical devices, but they weren't real hardware.
They were all emulated, fake information provided by the hypervisor to make the guest operating systems believe that they were real.
The crucial thing to understand about emulated virtualization is that the guest operating systems still believed that they were running on real hardware, and so they still attempted to make privileged calls.
They tried to take control of the CPU, they tried to directly read and write to what they think of as their memory and their disk, which are actually not real, they're just areas of physical memory and disk that have been allocated to them by the hypervisor.
Without special arrangements, the system would at best crash, and at worst, all of the guests would be overwriting each other's memory and disk areas.
So the hypervisor, it performs a process known as binary translation.
Any privileged operations which the guests attempt to make, they're intercepted and translated on the fly in software by the hypervisor.
Now, the binary translation in software is the key part of this.
It means that the guest operating systems need no modification, but it's really, really slow.
It can actually halve the speed of the guest operating systems or even worse.
Emulated virtualization was a cool set of features for its time, but it never achieved widespread adoption for demanding workloads because of this performance penalty.
But there was another way that virtualization was initially handled, and this is called para-virtualization.
With para-virtualization, the guest operating systems are still running in the same virtual machine containers with virtual resources allocated to them, but instead of the slow binary translation which is done by the hypervisor, another approach is used.
Para-virtualization only works on a small subset of operating systems, operating systems which can be modified.
Because with para-virtualization, there are areas of the guest operating systems which attempt to make privileged calls, and these are modified.
They're modified so that instead of making privileged calls directly to the hardware, they make calls to the hypervisor, called hypercalls.
So areas of the operating systems which would traditionally make privileged calls directly to the hardware, they're actually modified.
So the source code of the operating system is modified to call the hypervisor rather than the hardware.
So the operating systems now need to be modified specifically for the particular hypervisor that's in use.
It's no longer just generic virtualization, the operating systems are modified for the particular vendor performing this para-virtualization.
By modifying the operating system this way, and using para-virtual drivers in the operating system for network cards and storage, it means that the operating system became almost virtualization aware, and this massively improved performance.
But it was still a set of software processes designed to trick the operating system and/or the hardware into believing that nothing had changed.
The major improvement in virtualization came when the physical hardware started to become virtualization aware.
This allows for hardware virtualization, also known as hardware assisted virtualization.
With hardware assisted virtualization, hardware itself has become virtualization aware.
The CPU contains specific instructions and capabilities so that the hypervisor can directly control and configure this support, so the CPU itself is aware that it's performing virtualization.
Essentially, the CPU knows that virtualization exists.
What this means is that when guest operating systems attempt to run any privileged instructions, they're trapped by the CPU, which knows to expect them from these guest operating systems, so the system as a whole doesn't halt.
But these instructions can't be executed as is because the guest operating system still thinks that it's running directly on the hardware, and so they're redirected to the hypervisor by the hardware.
The hypervisor handles how these are executed.
And this means very little performance degradation over running the operating system directly on the hardware.
The problem, though, is that while this method does help a lot, what actually matters about a virtual machine tends to be the input/output operations, so network transfer and disk I/O.
The virtual machines, they have what they think is physical hardware, for example, a network card.
But these cards are just logical devices using a driver, which actually connect back to a single physical piece of hardware which sits in the host, the hardware everything is running on.
Unless you have a physical network card per virtual machine, there's always going to be some level of software getting in the way, and when you're performing highly transactional activities such as network I/O or disk I/O, this really impacts performance, and it consumes a lot of CPU cycles on the host.
The final iteration that I want to talk about is where the hardware devices themselves become virtualization aware, such as network cards.
This process is called SR-IOV, single root I/O virtualization.
Now, I could talk about this process for hours about exactly what it does and how it works, because it's a very complex and feature-rich set of standards.
But at a very high level, it allows a network card or any other add-on card to present itself not as one single card, but as several mini-cards.
Because this is supported in hardware, these are fully unique cards, as far as the hardware is concerned, and these are directly presented to the guest operating system as real cards dedicated for its use.
And this means no translation has to happen by the hypervisor.
The guest operating system can directly use its card whenever it wants.
Now, the physical card which supports SR-IOV handles this process end-to-end.
It makes sure that when the guest operating systems use their logical mini network cards, they have access to the physical network connection when required.
In EC2, this feature is called enhanced networking, and it means that the network performance is massively improved.
It means faster speeds.
It means lower latency.
And more importantly, it means consistent lower latency, even at high loads.
It means less CPU usage for the host CPU, even when all of the guest operating systems are consuming high amounts of consistent I/O.
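If you want to see this from the API side, here's a minimal sketch using Python and boto3, with a hypothetical instance ID, which checks the SR-IOV and ENA support attributes on an instance:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"  # hypothetical instance

# SR-IOV support (Intel 82599 VF interface) is exposed as an attribute.
attr = ec2.describe_instance_attribute(
    InstanceId=instance_id, Attribute="sriovNetSupport"
)
print(attr.get("SriovNetSupport", {}).get("Value"))  # "simple" when enabled

# ENA (Elastic Network Adapter) support appears on the instance itself.
resp = ec2.describe_instances(InstanceIds=[instance_id])
print(resp["Reservations"][0]["Instances"][0].get("EnaSupport"))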
Many of the features that you'll see EC2 using are actually based on AWS implementing some of the more advanced virtualization techniques that have been developed across the industry.
AWS do have their own hypervisor stack now called Nitro, and I'll be talking about that in much more detail in an upcoming lesson, because that's what enables a lot of the higher-end EC2 features.
But that's all the theory I wanted to cover.
I just wanted to introduce virtualization at a high level and get you to the point where you understand what SR-IOV is, because SR-IOV is used for enhanced networking right now, but it's also a feature that can be used outside of just network cards.
It can help hardware manufacturers design cards, which, whilst they're a physical single card, can be split up into logical cards that can be presented to guest operating systems.
It essentially makes any hardware virtualization aware, and any of the advanced EC2 features that you'll come across within this course will be taking advantage of SR-IOV.
At this point, though, we've completed all of the theory I wanted to cover, so go ahead and complete this lesson, and when you're ready, you can join me in the next.
-
-
www.pb.uillinois.edu www.pb.uillinois.edu
-
Indiana University has a School of Medicine located on Purdue University's West Lafayette campus.
I wonder what's going to happen with this school of medicine with the dissolution of IUPUI.
-
-
www.biorxiv.org www.biorxiv.org
-
eLife Assessment
This important study provides new and nuanced insights into the evolution of morphs in a textbook example of Batesian mimicry. The evidence supporting the claims about the origin and dominance relationships among morphs is convincing, but the interpretation of signals needs improvement with complementary analysis and some nuanced interpretation. Pending a revision, this work will be of interest to a broad range of evolutionary biologists.
-
Reviewer #1 (Public review):
In this study, Deshmukh et al. provide an elegant illustration of Haldane's sieve, the population genetics concept stating that novel advantageous alleles are more likely to fix if dominant because dominant alleles are more readily exposed to selection. To achieve this, the authors rely on a uniquely suited study system, the female-polymorphic butterfly Papilio polytes.
Deshmukh et al. first reconstruct the chronology of allele evolution in the P. polytes species group, clearly establishing the non-mimetic cyrus allele as ancestral, followed by the origin of the mimetic allele polytes/theseus, via a previously characterized inversion of the dsx locus, and most recently, the origin of the romulus allele in the P. polytes lineage, after its split from P. javanus. The authors then examine the two crucial predictions of Haldane's sieve, using the three alleles of P. polytes (cyrus, polytes, and romulus). First, they report with compelling evidence that these alleles are sequentially dominant, or put in other words, novel adaptive alleles either are or quickly become dominant upon their origin. Second, the authors find a robust signature of positive selection at the dsx locus, across all five species that share the polytes allele.
In addition to exquisitely exemplifying Haldane's sieve, this study characterizes the genetic differences (or lack thereof) between mimetic alleles at the dsx locus. Remarkably, the polytes and romulus alleles are profoundly differentiated, despite their short divergence time (< 0.5 my), whereas the polytes and theseus alleles are indistinguishable across both coding and intronic sequences of dsx. Finally, the study reports incidental evidence of exon swaps between the polytes and romulus alleles. These exon swaps caused intermediate colour patterns and suggest that (rare) recombination might be a mechanism by which novel morphs evolve.
This study advances our understanding of the evolution of the mimicry polymorphism in Papilio butterflies. This is an important contribution to a system already at the forefront of research on the genetic and developmental basis of sex-specific phenotypic morphs, which are common in insects. More generally, the findings of this study have important implications for how we think about the molecular dynamics of adaptation. In particular, I found that finding extensive genetic divergence between the polytes and romulus alleles is striking, and it challenges the way I used to think about the evolution of this and other otherwise conserved developmental genes. I think that this study is also a great resource for teaching evolution. By linking classic population genetic theory to modern genomic methods, while using visually appealing traits (colour patterns), this study provides a simple yet compelling example to bring to a classroom.
In general, I think that the conclusions of the study, in terms of the evolutionary history of the locus, the dominance relationships between P. polytes alleles, and the inference of a selective sweep in spite of contemporary balancing selection, are strongly supported; the data set is impressive and the analyses are all rigorous. I nonetheless think that there are a few ways in which the current presentation of these data could lead to confusion, and should be clarified and potentially also expanded.
(1) The study is presented as addressing a paradox related to the evolution of phenotypic novelty in "highly constrained genetic architectures". If I understand correctly, these constraints are assumed to arise because the dsx inversion acts as a barrier to recombination. I agree that recombination in the mimicry locus is reduced and that recombination can be a source of phenotypic novelty. However, I'm not convinced that the presence of a structural variant necessarily constrains the potential evolution of novel discrete phenotypes. Instead, I'm having a hard time coming up with examples of discrete phenotypic polymorphisms that do not involve structural variants. If there is a paradox here, I think it should be more clearly justified, including an explanation of what a constrained genetic architecture means. I also think that the Discussion would be the place to return to this supposed paradox, and tell us exactly how the observations of exon swaps and the genetic characterization of the different mimicry alleles help resolve it.
(2) While Haldane's sieve is clearly demonstrated in the P. polytes lineage (with cyrus, polytes, and romulus alleles), there is another allele trio (cyrus, polytes, and theseus) for which Haldane's sieve could also be expected. However, the chronological order in which polytes and theseus evolved remains unresolved, precluding a similar investigation of sequential dominance. Likewise, the locus that differentiates polytes from theseus is unknown, so it's not currently feasible to identify a signature of positive selection shared by P. javanus and P. alphenor at this locus. I, therefore, think that it is premature to conclude that the evolution of these mimicry polymorphisms generally follows Haldane's sieve; of two allele trios, only one currently shows the expected pattern.
-
Reviewer #2 (Public review):
Summary:
Deshmukh and colleagues studied the evolution of mimetic morphs in the Papilio polytes species group. They investigate the timing of origin of haplotypes associated with different morphs, their dominance relationships, associations with different isoform expressions, and evidence for selection and recombination in the sequence data. P. polytes is a textbook example of a Batesian mimic, and this study provides important nuanced insights into its evolution, and will therefore be relevant to many evolutionary biologists. I find the results regarding dominance and the sequence of events generally convincing, but I have some concerns about the motivation and interpretation of some other analyses, particularly the tests for selection.
Strengths:
This study uses widespread sampling, large sample sizes from crossing experiments, and a wide range of data sources.
Weaknesses:
(1) Purpose and premise of selective sweep analysis
A major narrative of the paper is that new mimetic alleles have arisen and spread to high frequency, and their dominance over the pre-existing alleles is consistent with Haldane's sieve. It would therefore make sense to test for selective sweep signatures within each morph (and its corresponding dsx haplotype), rather than at the species level. This would allow a test of the prediction that those morphs that arose most recently would have the strongest sweep signatures.
Sweep signatures erode over time - see Figure 2 of Moest et al. 2020 (https://doi.org/10.1371/journal.pbio.3000597), and it is unclear whether we expect the signatures of the original sweeps of these haplotypes to still be detectable at all. Moest et al show that sweep signatures are completely eroded by 1N generations after the event, and probably not detectable much sooner than that, so assuming effective population sizes of these species of a few million, at what time scale can we expect to detect sweeps? If these putative sweeps are in fact more recent than the origin of the different morphs, perhaps they would more likely be associated with the refinement of mimicry, but not necessarily providing evidence for or against a Haldane's sieve process in the origin of the morphs.
(2) Selective sweep methods
A tool called RAiSD was used to detect signatures of selective sweeps, but this manuscript does not describe what signatures this tool considers (reduced diversity, skewed frequency spectrum, increased LD, all of the above?). Given the comment above, would this tool be sensitive to incomplete sweeps that affect only one morph in a species-level dataset? It is also not clear how RAiSD could identify signatures of selective sweeps at individual SNPs (line 206). Sweeps occur over tracts of the genome and it is often difficult to associate a sweep with a single gene.
(3) Episodic diversification
Very little information is provided about the Branch-site Unrestricted Statistical Test for Episodic Diversification (BUSTED) and Mixed Effects Model of Evolution (MEME), and what hypothesis the authors were testing by applying these methods. Although it is not mentioned in the manuscript, a quick search reveals that these are methods to study codon evolution along branches of a phylogeny. Without this information, it is difficult to understand the motivation for this analysis.
(4) GWAS for form romulus
The authors argue that the lack of SNP associations within dsx for form romulus is caused by poor read mapping in the inverted region itself (line 125). If this is true, we would expect strong association in the regions immediately outside the inversion. From Figure S3, there are four discrete peaks of association, and the location of dsx and the inversion are not indicated, so it is difficult to understand the authors' interpretation in light of this figure.
(5) Form theseus
Since there appears to be only one sequence available for form theseus (actually it is said to be "P. javanus f. polytes/theseus"), is it reasonable to conclude that "the dsx coding sequence of f. theseus was identical to that of f. polytes in both P. javanus and P. alphenor" (Line 151)? Looking at the Clarke and Sheppard (1972) paper cited in the statement that "f. polytes and f. theseus show equal dominance" (line 153), it seems to me that their definition of theseus is quite different from that here. Without addressing this discrepancy, the results are difficult to interpret.
-
Author Response:
Reviewer #1 (Public review):
In this study, Deshmukh et al. provide an elegant illustration of Haldane's sieve, the population genetics concept stating that novel advantageous alleles are more likely to fix if dominant because dominant alleles are more readily exposed to selection. To achieve this, the authors rely on a uniquely suited study system, the female-polymorphic butterfly Papilio polytes.
Deshmukh et al. first reconstruct the chronology of allele evolution in the P. polytes species group, clearly establishing the non-mimetic cyrus allele as ancestral, followed by the origin of the mimetic allele polytes/theseus, via a previously characterized inversion of the dsx locus, and most recently, the origin of the romulus allele in the P. polytes lineage, after its split from P. javanus. The authors then examine the two crucial predictions of Haldane's sieve, using the three alleles of P. polytes (cyrus, polytes, and romulus). First, they report with compelling evidence that these alleles are sequentially dominant, or put in other words, novel adaptive alleles either are or quickly become dominant upon their origin. Second, the authors find a robust signature of positive selection at the dsx locus, across all five species that share the polytes allele.
In addition to exquisitely exemplifying Haldane's sieve, this study characterizes the genetic differences (or lack thereof) between mimetic alleles at the dsx locus. Remarkably, the polytes and romulus alleles are profoundly differentiated, despite their short divergence time (< 0.5 my), whereas the polytes and theseus alleles are indistinguishable across both coding and intronic sequences of dsx. Finally, the study reports incidental evidence of exon swaps between the polytes and romulus alleles. These exon swaps caused intermediate colour patterns and suggest that (rare) recombination might be a mechanism by which novel morphs evolve.
This study advances our understanding of the evolution of the mimicry polymorphism in Papilio butterflies. This is an important contribution to a system already at the forefront of research on the genetic and developmental basis of sex-specific phenotypic morphs, which are common in insects. More generally, the findings of this study have important implications for how we think about the molecular dynamics of adaptation. In particular, I found that finding extensive genetic divergence between the polytes and romulus alleles is striking, and it challenges the way I used to think about the evolution of this and other otherwise conserved developmental genes. I think that this study is also a great resource for teaching evolution. By linking classic population genetic theory to modern genomic methods, while using visually appealing traits (colour patterns), this study provides a simple yet compelling example to bring to a classroom.
In general, I think that the conclusions of the study, in terms of the evolutionary history of the locus, the dominance relationships between P. polytes alleles, and the inference of a selective sweep in spite of contemporary balancing selection, are strongly supported; the data set is impressive and the analyses are all rigorous. I nonetheless think that there are a few ways in which the current presentation of these data could lead to confusion, and should be clarified and potentially also expanded.
We thank the reviewer for the kind and encouraging assessment of our work.
(1) The study is presented as addressing a paradox related to the evolution of phenotypic novelty in "highly constrained genetic architectures". If I understand correctly, these constraints are assumed to arise because the dsx inversion acts as a barrier to recombination. I agree that recombination in the mimicry locus is reduced and that recombination can be a source of phenotypic novelty. However, I'm not convinced that the presence of a structural variant necessarily constrains the potential evolution of novel discrete phenotypes. Instead, I'm having a hard time coming up with examples of discrete phenotypic polymorphisms that do not involve structural variants. If there is a paradox here, I think it should be more clearly justified, including an explanation of what a constrained genetic architecture means. I also think that the Discussion would be the place to return to this supposed paradox, and tell us exactly how the observations of exon swaps and the genetic characterization of the different mimicry alleles help resolve it.
The paradox that we refer to here is essentially the contrast of evolving new adaptive traits which are genetically regulated, while maintaining the existing adaptive trait(s) at its fitness peak. While one of the mechanisms to achieve this could be differential structural rearrangement at the chromosomal level, it could arise due to alternative alleles or splice variants of a key gene (caste determination in Cardiocondyla ants), and differential regulation of expression (the spatial regulation of melanization in Nymphalid butterflies by ivory lncRNA). In each of these cases, a new mutation would have to give rise to a new phenotype without diluting the existing adaptive traits when it arises. We focused on structural variants, because that was the case in our study system, however, the point we were making referred to evolution of novel traits in general. We will add a section in the revised discussion to address this.
(2) While Haldane's sieve is clearly demonstrated in the P. polytes lineage (with cyrus, polytes, and romulus alleles), there is another allele trio (cyrus, polytes, and theseus) for which Haldane's sieve could also be expected. However, the chronological order in which polytes and theseus evolved remains unresolved, precluding a similar investigation of sequential dominance. Likewise, the locus that differentiates polytes from theseus is unknown, so it's not currently feasible to identify a signature of positive selection shared by P. javanus and P. alphenor at this locus. I, therefore, think that it is premature to conclude that the evolution of these mimicry polymorphisms generally follows Haldane's sieve; of two allele trios, only one currently shows the expected pattern.
We agree with the reviewer that the genetic basis of f. theseus requires further investigation. f. theseus occupies the same level on the dominance hierarchy of dsx alleles as f. polytes (Clarke and Sheppard, 1972) and the allelic variant of dsx present in both these female forms is identical, so there exists just one trio of alleles of dsx. Based on this evidence, we cannot comment on the origin of forms theseus and polytes. They could have arisen at the same time or sequentially. Since our paper is largely focused on the sequential evolution of dsx alleles through Haldane’s sieve, we have included f. theseus in our conclusions. We think that it fits into the framework of Haldane’s sieve due to its genetic dominance over the non-mimetic female form. However, this aspect needs to be explored further in a more specific study focusing on the characterization, origin, and developmental genetics of f. theseus in the future.
Reviewer #2 (Public review):
Summary:
Deshmukh and colleagues studied the evolution of mimetic morphs in the Papilio polytes species group. They investigate the timing of origin of haplotypes associated with different morphs, their dominance relationships, associations with different isoform expressions, and evidence for selection and recombination in the sequence data. P. polytes is a textbook example of a Batesian mimic, and this study provides important nuanced insights into its evolution, and will therefore be relevant to many evolutionary biologists. I find the results regarding dominance and the sequence of events generally convincing, but I have some concerns about the motivation and interpretation of some other analyses, particularly the tests for selection.
We thank the reviewer for these insightful remarks.
Strengths:
This study uses widespread sampling, large sample sizes from crossing experiments, and a wide range of data sources.
We appreciate this point. This strength has indeed helped us illuminate the evolutionary dynamics of this classic example of balanced polymorphism.
Weaknesses:
(1) Purpose and premise of selective sweep analysis
A major narrative of the paper is that new mimetic alleles have arisen and spread to high frequency, and their dominance over the pre-existing alleles is consistent with Haldane's sieve. It would therefore make sense to test for selective sweep signatures within each morph (and its corresponding dsx haplotype), rather than at the species level. This would allow a test of the prediction that those morphs that arose most recently would have the strongest sweep signatures.
Sweep signatures erode over time - see Figure 2 of Moest et al. 2020 (https://doi.org/10.1371/journal.pbio.3000597), and it is unclear whether we expect the signatures of the original sweeps of these haplotypes to still be detectable at all. Moest et al show that sweep signatures are completely eroded by 1N generations after the event, and probably not detectable much sooner than that, so assuming effective population sizes of these species of a few million, at what time scale can we expect to detect sweeps? If these putative sweeps are in fact more recent than the origin of the different morphs, perhaps they would more likely be associated with the refinement of mimicry, but not necessarily providing evidence for or against a Haldane's sieve process in the origin of the morphs.
Our original plan was to test for signatures of sweeps in individual morphs, but we have very small sample sizes for individual morphs in some species, which made it difficult to perform the analysis. We agree that signatures of selective sweeps cannot give us an estimate of possible timescales of the sweep. They simply indicate that there may have been a sweep in a certain genomic region. Therefore, with just the data from selective sweeps, we cannot determine whether these occurred with the refining of mimicry or with the mimetic phenotype itself. We have thus made no interpretations regarding time scales or causal events of the sweep. Additionally, we discuss that the results we obtained for individual alleles represent what could have occurred at the point of origin of mimetic resemblance or in the course of perfecting the resemblance, although we cannot differentiate between the two at this point (lines 320 to 333).
(2) Selective sweep methods
A tool called RAiSD was used to detect signatures of selective sweeps, but this manuscript does not describe what signatures this tool considers (reduced diversity, skewed frequency spectrum, increased LD, all of the above?). Given the comment above, would this tool be sensitive to incomplete sweeps that affect only one morph in a species-level dataset? It is also not clear how RAiSD could identify signatures of selective sweeps at individual SNPs (line 206). Sweeps occur over tracts of the genome and it is often difficult to associate a sweep with a single gene.
RAiSD (https://www.nature.com/articles/s42003-018-0085-8) detects selective sweeps using the μ statistic, which is a combined score of SFS, LD, and genetic diversity along a chromosome. The tool is quite sensitive and is able to detect soft sweeps. RAiSD can use a VCF variant file comprising SNP data as input and uses an SNP-driven sliding-window approach to scan the genome for signatures of sweep. Using an SNP file instead of runs of sequences prevents repeated calculations in regions that are sparse in variants, thereby optimizing execution time. Due to the nature of the input we used, the μ statistic was also calculated per site. We then annotated the SNPs based on which genes they occur in and found that all species showing mimicry had at least one site showing a signature of a sweep contained within the dsx locus.
(3) Episodic diversification
Very little information is provided about the Branch-site Unrestricted Statistical Test for Episodic Diversification (BUSTED) and Mixed Effects Model of Evolution (MEME), and what hypothesis the authors were testing by applying these methods. Although it is not mentioned in the manuscript, a quick search reveals that these are methods to study codon evolution along branches of a phylogeny. Without this information, it is difficult to understand the motivation for this analysis.
We thank you for bringing this to our notice; we will add a few lines in the Methods about the hypothesis we were testing and the motivation behind this analysis. We will additionally cite a previous study from our group which used these and other methods to study the molecular evolution of dsx across insect lineages.
(4) GWAS for form romulus
The authors argue that the lack of SNP associations within dsx for form romulus is caused by poor read mapping in the inverted region itself (line 125). If this is true, we would expect strong association in the regions immediately outside the inversion. From Figure S3, there are four discrete peaks of association, and the location of dsx and the inversion are not indicated, so it is difficult to understand the authors' interpretation in light of this figure.
We indeed observe the regions flanking dsx showing the highest association in our GWAS. This is a bit tricky to demonstrate in the figure as the genome is not assembled at the chromosome level. However, the association peaks occur on scf 908437033 at positions 2192979, 1181012 and 1352228 (Fig. S3c, Table S3) while dsx is located between 1938098 and 2045969. We will add the position of dsx in the figure legend of the revised manuscript.
(5) Form theseus
Since there appears to be only one sequence available for form theseus (actually it is said to be "P. javanus f. polytes/theseus"), is it reasonable to conclude that "the dsx coding sequence of f. theseus was identical to that of f. polytes in both P. javanus and P. alphenor" (Line 151)? Looking at the Clarke and Sheppard (1972) paper cited in the statement that "f. polytes and f. theseus show equal dominance" (line 153), it seems to me that their definition of theseus is quite different from that here. Without addressing this discrepancy, the results are difficult to interpret.
Among the P. javanus individuals sampled by us, we obtained just one individual with f. theseus and the HP allele. However, in the data we added from a previously published study (Zhang et al. 2017), we were able to add nine more individuals of this form (Fig. S4b and S7). While we did not show these individuals in Fig. 3 (which was based on PCR amplification and sequencing of individual exons of dsx), all the analysis with sequence data was performed on 10 theseus individuals in total. In Zhang et al., the authors observed what we now know are species-specific differences when comparing theseus and polytes dsx alleles, not allele-specific differences. Our observations were consistent with these findings.
-
-
toribix.bergbuilds.domains toribix.bergbuilds.domains
-
This contrasts with today’s relationships which often rely on face-to-face interaction or at least seeing a picture of them before forming a relationship.
I'd like to comment on this. I would disagree with what you said about today's relationships relying on face to face interaction. I think that connections forming through letters is very similar to how connections can form through social media and texting today. However, I would agree with the picture part. It's amazing how the woman was able to fall fully in love with the man without knowing what he looked like. That is something I cannot imagine happening in today's world.
-
-
trailhead.salesforce.com trailhead.salesforce.com
-
DataPacks API
Is this an alternative to the Metadata API?
-
The automation server uses IDX Build Tool and the SFDX-CLI (Salesforce Command Line Interface) for automated deployment
Why both?
-
-
mlpp.pressbooks.pub mlpp.pressbooks.pub
-
Smith won handily in the nation’s largest cities
Was this because there was a higher worker population? He favored the protection of workers.
-
Harding took vacation in the summer of 1923, announcing he intended to think deeply about how to deal with his “God-damned friends”.
This is so funny. He knew his friends sucked. Why did he think they would do any differently and not embarrass him?
-
Stores and homes were looted and set on fire. When Tulsa firefighters arrived, they were turned away by white vigilantes
This is awful. The people in Tulsa were truly thriving and, in literally a day, everything was gone.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
So focusing specifically on the Animals For Life scenario.
So, as we'll do in the upcoming demo lesson, to implement a truly resilient architecture for NAT services in a VPC, you need a NAT gateway in a public subnet inside each availability zone that the VPC uses.
So just like on the diagram that you've gone through now.
And then as a minimum, you need private route tables in each availability zone.
In this example, AZA, AZB, and then AZC.
Each of these would need to have their own route table, which would have a default IP version 4 route, which points at the NAT gateway in the same availability zone.
That way, if any availability zone fails, the others could continue operating without issues.
Now, this is important.
I've seen it suggested in a few questions that one NAT gateway is enough, that a NAT gateway is truly regionally resilient.
This is false.
A NAT gateway is highly available in the availability zone that it's in.
So if hardware fails or it needs to scale to cope with load, it can do so in that AZ.
But if the whole AZ fails, there is no failover.
You provision a NAT gateway into a specific availability zone, not the region.
It's not like the internet gateway, which by default is region resilient.
For a NAT gateway, you have to deploy one into each AZ that you use if you need that region resilience.
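Here's a minimal sketch of that per-AZ pattern using Python and boto3, where every subnet and route table ID is a hypothetical placeholder: one NAT gateway in each AZ's public subnet, plus a default route in that AZ's private route table:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical per-AZ public subnets and private route tables.
azs = {
    "us-east-1a": {"public_subnet": "subnet-pub-a", "private_rt": "rtb-priv-a"},
    "us-east-1b": {"public_subnet": "subnet-pub-b", "private_rt": "rtb-priv-b"},
    "us-east-1c": {"public_subnet": "subnet-pub-c", "private_rt": "rtb-priv-c"},
}

for az, cfg in azs.items():
    eip = ec2.allocate_address(Domain="vpc")  # each NAT gateway needs an elastic IP
    natgw = ec2.create_nat_gateway(
        SubnetId=cfg["public_subnet"], AllocationId=eip["AllocationId"]
    )
    natgw_id = natgw["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[natgw_id])

    # Default IPv4 route for this AZ's private subnets, pointing at
    # the NAT gateway in the same AZ.
    ec2.create_route(
        RouteTableId=cfg["private_rt"],
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=natgw_id,
    )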
Now, my apologies in advance for the small text.
It's far easier to have this all on screen at once.
I mentioned at the start of the lesson that NAT used to be provided by NAT instances, and these are just the NAT process running on an EC2 instance.
Now, I don't expect this to feature on the exam at this point.
But if you ever need to use a NAT instance, by default, EC2 filters all traffic that it sends or receives.
It essentially drops any data that is on its network card when that network card is not either the source or the destination.
So if an instance is running as a NAT instance, then it will be receiving some data where the source address is that of other resources in that VPC.
And the destination will be a host on the internet.
So it will neither be the source nor the destination.
So by default, that traffic will be dropped.
And if you need to allow an EC2 instance to function as a NAT instance, then you need to disable a feature called source and destination checks.
This can be disabled via the console UI, the CLI, or the API.
The only reason I mention this is I have seen this question in the exam before, and if you do implement this in a real-world production-style scenario, you need to be aware that this feature exists.
I don't want you wasting your time trying to diagnose this feature.
So if you just right-click on an instance in the console, you'll be able to see an option to disable source and destination checks.
And that is required if you want to use an EC2 instance as a NAT instance.
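As a minimal sketch in Python and boto3, with a hypothetical instance ID, that single attribute change looks like this:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Disable source/destination checking so the instance can forward traffic
# for which it is neither the source nor the destination.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",  # hypothetical NAT instance
    SourceDestCheck={"Value": False},
)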
Now, at the highest level, architecturally, NAT instances and NAT gateways are kind of the same.
They both need a public IP address.
They both need to run in a public subnet, and they both need a functional internet gateway.
But at this point, it's not really preferred to use EC2 running as a NAT instance.
It's much easier to use a NAT gateway, and it's recommended by AWS in most situations.
But there are a few key scenarios where you might want to consider using an EC2-based NAT instance.
So let's just step through some of the criteria that you might be looking at when deploying NAT services.
If you value availability, bandwidth, low levels of maintenance, and high performance, then you should use NAT gateways.
That goes for both real-world production usage, as well as being default for answering any exam questions.
A NAT gateway offers high-end performance, it scales, and it's custom designed to perform network address translation.
A NAT instance in comparison is limited by the capabilities of the instance it's running on, and that instance is also general purpose, so it won't offer the same level of custom-designed performance as a NAT gateway.
Now, availability is another important consideration. A NAT instance is a single EC2 instance running inside an availability zone.
It will fail if the EC2 hardware fails.
It will fail if its storage fails or if its network fails, and it will fail if the AZ itself fails entirely.
A NAT gateway has some benefits over a NAT instance.
So inside one availability zone, it's highly available, so it can automatically recover and automatically scale.
So it removes almost all of the risks of outage versus a NAT instance.
But it will still fail entirely if the AZ fails entirely.
You still need to provision multiple NAT gateways, spread across all the AZs that you intend to use, if you want to ensure complete availability.
For maximum availability, you need a NAT gateway in every AZ you use.
This is critical to remember for the exam.
Now, if cost is your primary concern, if you're a financially challenged business, or if the VPC that you're deploying NAT services into is just a test VPC or something that's incredibly low volume, then a NAT instance can be cheaper.
It can also be significantly cheaper at high volumes of data.
You've got a couple of options.
You can use a very small EC2 instance, even ones that are free tier eligible to reduce costs, and the instances can also be fixed in size, meaning they offer predictable costs.
A NAT gateway will scale automatically, and you'll be billed for both the NAT gateway and the amount of data transferred, which increases as the gateway scales.
A NAT gateway is also not free tier eligible.
Now, this is really important because when we deploy these in the next demo lesson, it's one of those services that I need to warn you will come at a cost, so you need to be aware of that fact.
You will be charged for a NAT gateway regardless of how small the usage.
NAT instances also offer other niche advantages because they're just EC2 instances.
You can connect to them just like you would any other EC2 instance.
You can multi-purpose them, so you can use them for other things, such as bastion hosts.
You can also use them for port forwarding, so you can have a port on the instance exposed externally, which can be connected to over the public internet, and have this forwarded on to an instance inside the VPC.
Maybe port 80 for web, or port 443 for secure web.
You can be completely flexible when you use NAT instances.
With a NAT gateway, this isn't possible because you don't have access to manage it.
It's a managed service.
Now, this comes up all the time in the exam, so try and get it really clear in your memory: a NAT gateway cannot be used as a bastion host.
It cannot do port forwarding because you cannot connect to its operating system.
Now, finally, this is again one that's a focus of the exam.
NAT instances are just EC2 instances, so you can filter traffic using the network ACLs on the subnet the instance is in, or security groups directly associated with that instance.
NAT gateways don't support security groups.
You can only use NACLs with NAT gateways.
This one comes up all the time in the exam, so it's worth noting down and maybe making a flashcard with.
Now, a few more things before we finish up.
What about IP version 6?
The focus of NAT is to allow private IP version 4 addresses to be used to connect in an outgoing-only way to the AWS public zone and public internet.
Inside AWS, all IP version 6 addresses are publicly routable, so this means that you do not require NAT when using IP version 6.
The internet gateway works directly with IP version 6 addresses, so if you give an instance in a private subnet a default IP version 6 route to the internet gateway, it will become a public instance.
As long as you don't have any NACLs or security groups in the way, any IP version 6 address in AWS can communicate directly with the AWS public zone and the public internet.
So the internet gateway can work directly with IP version 6.
NAT gateways do not work with IP version 6; they're not required and they don't function with IP version 6.
So for the exam, if you see any questions which mention IP version 6 and NAT gateways, you can exclude that answer.
NAT gateways do not work with IP version 6, and I repeat it because I really want it to stick in your memory.
So with any subnet inside AWS which has been configured for IP version 6, if you add the IP version 6 default route, which is ::/0, and you point that route at the internet gateway as a target, that will give an instance bi-directional connectivity to the public internet and will allow it to reach the AWS public zone and public services.
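Here's a minimal boto3 sketch of that default IPv6 route, with hypothetical route table and internet gateway IDs:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# ::/0 pointed at the internet gateway gives IPv6-enabled instances in the
# associated subnets bi-directional internet access; no NAT is involved.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # hypothetical route table
    DestinationIpv6CidrBlock="::/0",
    GatewayId="igw-0123456789abcdef0",     # hypothetical internet gateway
)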
One service that we'll be talking about later on in the course when I cover more advanced features of VPC is a different type of gateway, known as an egress-only internet gateway.
This is a specific type of internet gateway that works only with IP version 6 and you use it when you want to give an IP version 6 instance outgoing only access to the public internet and the AWS public zone.
So don't worry, we'll be covering that later in the course, but I want to get it really burned into your memory that you do not use NAT and you do not use NAT gateways with IP version 6.
It will not work.
Now to get you some experience of using NAT gateways, it's time for a demo.
In the demo lesson, I'm going to be stepping you through what you need to do to provision a completely resilient NAT gateway architecture.
So that's using a NAT gateway in each availability zone as well as configuring the routing required to make it work.
It's going to be one of the final pieces to our multi-tier VPC and it will allow private instances to have full outgoing internet access.
Now I can't wait for us to complete this together.
It's going to be a really interesting demo, one that will be really useful if you're doing this in the real world or if you have to answer exam questions related to NAT or NAT gateways.
So go ahead, complete the video and when you're ready, join me in the demo.
-
-
docdrop.org docdrop.org
-
We should incorporate into our teaching the assets low-income students bring to school. If poor students' resilience, flexibility, and persistence toward a goal is affirmed and integrated into the school culture, students would not drop out at the rate they do
This recommendation emphasizes the importance of a more inclusive and asset-based approach to the educational process. By recognizing the unique strengths of low-income students as part of a school's culture, educational institutions can not only help these students overcome educational challenges, but also build a more supportive and diverse learning environment for all students. This approach not only helps to reduce dropout rates, but also fosters the holistic development of all students.
-
Teachers can play a major role in helping students feel engaged and con-nected to their learning communities. First, we need to make the invisible visi-ble-to unveil the hidden curriculum. And more important, we need to encourage students and colleagues to question the legitimacy of the hidden curriculum itself. I was a student who would have benefited from strong academic mentoring. I did not know what I did not know. I was subject to an establishment that did not value what I did know: my resiliency, my outspokenness, and my other strengths.
The hidden curriculum may include biases or assumptions that may disadvantage certain groups of students. By encouraging students and colleagues to question the legitimacy of these hidden curricula, a more open and inclusive learning environment can be fostered to ensure that all students' voices are heard and their needs are met.
-
Although I socialized with both Black and White students, I self-identified as "Black." After the name-calling, and after I realized the students who were not compliant and submissive were the ones who were ridiculed, I questioned my friendships with White students.
This passage emphasizes the importance of addressing race and social class in educational settings. Educational institutions need to recognize and address these systemic biases to ensure an equitable and inclusive learning environment for all students. Measures such as increasing diversity training for teachers and administrators and advocating for inclusive policies and practices can help break down such biases and promote a more equitable educational environment.
-
ules. In this way I was raised to be compliant, one element of the hidden curriculum in our schools. This insistence on compliance is also one aspect of schooling that keeps some students from feeling they can challenge the very structures that repress them. They often feel silenced and alienated from public education at an early age. In my household, we did not have many books. I believe my lack of books contributed to my below average reading test scores. In third grade I was reading at a second-grade level. Research indicates that social class can influence cognitive abilities because a lack of money results in fewer experiences at museums and traveling, fewer books in the home, and less access to preschool education (Bowles & Gintis, 2002; Good & Brophy, 1987).
The hidden, conformist curriculum does, in fact, discourage students' freedom to question and resist repressive systems, creating alienation. Students may feel that their opinions or views aren't heard and quietly disengage from their education. Moreover, the absence of materials like books can also affect academic skills: inadequate access to reading and learning materials at home can seriously impact literacy and learning outcomes.
-
The same unease students feel with their more affluent peers can transfer over to their professors. They may not reach out to their professors when they are performing poorly in the class, fearing that they will be judged as lacking in the ability to succeed in school.
Students of lower economic status may believe that their performance will be viewed as less than competent, and thus be reluctant to communicate with their professors. This communication barrier can prevent them from obtaining the necessary support and guidance, further exacerbating academic performance problems. Such psychological barriers can affect students' long-term educational and career paths.
-
students rarely out themselves as being poor. You could not tell they struggle financially by the papers they turn in to me or by what they say when we discuss things in my sociology classes at the University of St. Thomas. During office hours, however, students reveal to me that they grew up poor, and often they tell me that they are the first person from their family to go to college. They talk about the social distance they feel from their peers who have money. They tell me t
Economic inequality in academic settings can create a social distance that is seldom perceived by those who don't feel it. Schools have a hidden curriculum that also operates outside the classroom, shaping social norms and reinforcing class differences. For low-income students, lacking materials or experiences comparable to those of their wealthier peers creates isolation and reinforces alienation. These students often build social networks of peers from the same income bracket, which helps them cope with such problems but may reduce their opportunities to socialize with more diverse peer groups.
-
-
www.sjsu.edu www.sjsu.edu
-
most clocks were used for astronomical and astrological purposes rather than for telling the time of day
Does this mean the clocks were used to track astronomical movements rather than to tell the actual time of day?
-
Men seeking knowledge would travel to Spain to obtain Muslim science
Is this due to the fact that the highest form of religious activity is knowledge?
-
Learning and gaining knowledge is the highest form of religious activity for Muslims
Why is this the highest form of religious activity compared to others?
-
-
docdrop.org docdrop.org
-
There’s nothing we can’t make into a story. There’s not anything that isn’t already one.
I like the final statement, and I agree that anything can be made into a story if done right; everything has the potential to be part of a meaningful narrative.
-
Digression, I’ve always thought, gets a bad rap. The word itself implies that there’s a proper gress from which one has strayed, that every life is a line. But surely linearity is something we impose only afterward, when it’s time to make a narrative, when it’s time to comb out our gresses and untangle them into something we can call progress or congress
This is in contrast to the idea that we should think linearly and focus on one thing. Instead, we should allow ourselves to wander in thought and action, as it can lead to unexpected insights, ideas, and understanding.
-
-
docdrop.org docdrop.org
-
Without an adult to encourage her to take algebra, the gateway to college preparatory math and science courses, or to advise her on where she might seek academic support, Chantelle made a decision that is likely to affect her preparation for college and therefore will have bearing in the long term on her opportunities after high school. By taking prealgebra in the ninth grade, Chantelle is all but ensured that she will be unable to meet the admissions requirements to the UC or California State University (CSU) systems. Given that so much is at stake, it must be recognized that a system of course assignment that allows students to choose which classes to take will invariably work better for some than others. Jennifer's words are equally revealing. Like many of Berkeley High's more affluent, white ninth graders, she did not attend Berkeley's public school system. In fact, according to school records, some 12 percent of Berkeley High School's class of 2000 attended private
Chantelle's situation is representative of many students who face similar challenges. It will take a concerted effort on the part of education policymakers, school administrators, and teachers, through additional resources and support, to ensure that every student can make the most favorable academic decisions. This includes strengthening career guidance services and implementing more comprehensive academic support systems in schools.
-
Research has shown that economic capital, that is, the wealth and income of parents, is one of the primary factors influencing student achievement (Coleman and others, 1966; Rothstein, 2004; Farkas, 2004). Student achievement is also influenced by more subtle resources such as social capital, the benefits derived from connections to networks and individuals with power and influence (Coleman, 1988; Stanton-Salazar, 1997, 2001; Noguera, 2003), and cultural capital (Bourdieu and Wacquant, 1992), the tastes, styles, habits, language, behaviors, appearance, and customs that serve as indicators of status and privilege.
This emphasizes the significant role of economic capital in student achievement. Economic capital provides students with a good education, opportunities for extracurricular activities, and a security that can be applied toward improving their learning. Social capital comes into play too, because connections to networks and individuals with power and influence can open access to opportunities one might not otherwise have. Those connections can translate into mentorship and knowledge, which can motivate students.
-
As the comments from these two students show, some students have more information and a clearer sense of how the school works (such as the classes they need to take) than others. In addition, more affluent students like Jennifer can rely on the resources of their parents (private tutors and counselors, the
The “wealthier students” mentioned in the article, such as Jennifer, were able to rely on additional resources provided by their parents, such as private tutors and counselors, which helped them better understand and navigate the school system. This reveals how wealth translates into an educational advantage, providing children from affluent families with additional support and opportunities that are not available to other, less well-off students.
-
BHS). Our examination of school structures also includes a focus on the organization of the school: the decentralized nature of decision making within departments, the distribution of authority and responsibility among administrators, the accountability (or lack thereof) and function of special programs (such as English as a Second Language, Advanced Placement, and Special Education). We examine how these structures shape and influence the acad
Teaching staff, course content, and the allocation of resources are some of the most important things that affect student learning. These structures shape the opportunities given to students and help or hinder their success. Discrepancies in decision making and in the division of authority and responsibility have a direct impact on a school's capacity to serve its students. When decision making is poorly coordinated or accountability is inadequate, the implementation of programs and policies will be skewed and may perpetuate inequities unnecessarily.
-
-
trailhead.salesforce.com trailhead.salesforce.com
-
The OmniStudio Tracking Service is an event-tracking service that captures details of actions that users perform
All actions? Does it require configuration to be useful?
-
-
docdrop.org docdrop.org
-
Harold's mother is as passionate as Garrett's parents about provid-ing what it takes for her children to be successful and happy, but she sees her role as providing food, "clothing and shelter, teaching the difference between right and wrong, and providing comfort."8
Harold's mother was passionate and committed to the care of her children and did her best to provide for their basic needs and upbringing, but she was more strapped for material resources than Garrett's family. This suggests that although parents may share the same desire to care for and educate their children, differences in economic conditions shape the roles they play and the support they can provide.
-