- Oct 2024
-
www.americanyawp.com
-
Plan Espiritual de Aztlán, a Chicano nationalist manifesto that reflected Gonzales’s vision of Chicanos as a unified, historically grounded, all-encompassing group fighting against discrimination in the United States.
Fighting discrimination against Mexican Americans
-
March on Washington. The march called for, among other things, civil rights legislation, school integration, an end to discrimination by public and private employers, job training for the unemployed, and a raise in the minimum wage.
A march for national action, rather than relying on the slow-moving state governments that prolonged segregation.
-
President Lyndon Johnson
signed Civil Rights Act
-
Medgar Evers was assassinated at his home in Jackson, Mississippi.
Murdered civil rights leader
-
the Albany Movement,
Albany, Georgia civil rights movement
-
The Albany Movement included elements of a Christian commitment to social justice in its platform, with activists stating that all people were “of equal worth” in God’s family and that “no man may discriminate against or exploit another.”
A brave movement in such a racist city.
-
-
furnaceandfugue.org
-
A Wolfe coming from the East, and a Dogge from the West werry'd one another.
This artist has never seen a wolf before lmao
-
, because fires meeting together doe one destroy the other.
guess you CAN fight fire with fire
-
But the wolfe recovering strength afterwards overthrowes the dogge, and being cast downe never leaves him till hee be utterly killd and dead; In the meane time receiving from the dogge noe lesse wounds nor lesse mortall, than hee gave him, till they werry one another to death:
The dog is domesticated/merciful but has breeding that makes it superior to the wolf in some way; both animals' strengths and weaknesses lead to the deaths of both. Likewise, an acidic concoction is usually destroyed or transformed by an alkaline mixture, but it transforms the other in turn.
-
Avicenne sayth they lye in dung neglected and rejected by the vulgar, which, if they be joynd together, are able to complete the Magistery
Arabic writings finally reincorporated into European knowledge and discourse by 1618 after humanist rejection
-
-
www.americanyawp.com
-
offered low-interest home loans, a stipend to attend college, loans to start a business, and unemployment benefits.
Helped military "servicemen"
-
Federal Housing Administration (FHA),
Mortgage insurance and protection
-
Home Owners’ Loan Corporation (HOLC)
refinanced mortgages so that people could have more time to pay their loans
-
with all deliberate speed” was so vague and ineffectual that it left the actual business of desegregation in the hands of those who opposed it.
Brown tried to desegregate schools, but this phrase was almost fatal to the attempt because some states' "deliberate speed" was very slow, aka never.
-
Levittown, the prototypical suburban community, in 1946 in Long Island, New York. Purchasing large acreage, subdividing lots, and contracting crews to build countless homes at economies of scale, Levitt offered affordable suburban housing to veterans and their families
Levitt invested in suburban development for affordable housing
-
Sarah Keys v. Carolina Coach Company, in which the Interstate Commerce Commission ruled that “separate but equal” violated the Interstate Commerce Clause of the U.S. Constitution.
desegregation of interstate travel
-
Shelley v. Kraemer, declared racially restrictive neighborhood housing covenants—property deed restrictions barring sales to racial minorities—legally unenforceable.
Supreme Court ruled racially restrictive covenants in home sales legally unenforceable.
-
In Shelley v. Kraemer
Supreme Court ruled to eliminate racist housing restrictions
-
-
dataxdesign.io
-
His scholarly expertise and lived experience together pointed to the fact that, on its own, data visualization could not hope to convey a complete picture of the progress of Black Americans to date, nor could it convey the extent of the obstacles that were required to be overcome.
I do think these limits make sense, but the point would be clearer with more specific examples. Data analysis and visualization are meant to surface latent information and give a more general description of the numbers, yet here the author points to their "limit." How should we think about that limit?
-
-
furnaceandfugue.org
-
But sayth Count Bernhard in his Epistle, I tell you truely, that noe water dissolves a metallicke species by naturall reduction, except that which continues with it in matter and forme, and which the metalls themselves can recongeale:
Beginning of understanding stable vs unstable elements, based on electrons later on. At least there's some Chymystry here
-
if it be not suppositious
Oh yeah, this book is very good at not supposing things. Glad to see the extreme dedication and care made to fact checks
-
What wonder therefore, if the Philosophers would have their dragon shuttshut up in a cavernecavern with a - woman?
As odd as this all is, it does capture the Aristotelian idea of observing nature in its "natural state," i.e. don't put a cat into water to see how it will act; same with dragons and women, I suppose?
-
then will Pluto blow a - blast, and draw a volatile fiery spirit out of the cold dragon, which with its great heat will burneburn the Eagles feathers, and excite such a sudorifickesudorific bath, as to melt the Snow at the top of the mountains, - and turn it into water;
Let him cook?
-
-
docdrop.org
-
In the past, most states simply provided a flat grant to school districts based on the number of students in the district. Each student received an equal amount of funding, which obviously did nothing to offset the inequalities in local funding
Giving an equal amount of funding to each student is an example of equality but schools should instead meet needs based on equity. Instead of treating everyone the same and giving the same amount of resources and funding for each student, giving based on the individual needs of students would instead help a lot more. Some students may live in poverty, while others may not. Some students may have disabilities that require more resources.
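(A toy Python sketch of this contrast, with made-up numbers and weights: a flat per-student grant versus a need-weighted grant.)

```python
# Hypothetical students and funding amounts, purely for illustration.
students = [
    {"name": "A", "poverty": False, "disability": False},
    {"name": "B", "poverty": True,  "disability": False},
    {"name": "C", "poverty": True,  "disability": True},
]

FLAT_GRANT = 8000                          # equality: every student gets the same
BASE, POVERTY_W, DISABILITY_W = 8000, 0.25, 0.75   # equity: base plus need weights

for s in students:
    weighted = BASE * (1 + POVERTY_W * s["poverty"] + DISABILITY_W * s["disability"])
    print(s["name"], "flat:", FLAT_GRANT, "weighted:", weighted)
```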
-
-
canvas.tufts.edu
-
HIJOS trying to combat the atrocities.
-
Military dictatorships are detrimental to the community.
-
Preparing the community for crimes and trauma.
-
-
bongolearn.zendesk.com
-
If you are manually assigning learners to groups, each learner must open a Bongo page while logged into their individual account before you assign them into groups. Learners who have not yet accessed Bongo will not appear in the lists described below for group assignment. Once students see the Bongo interface (e.g. their activity or a list of activities), then no further action is required on their part; they are now ready for group assignment.
Doh!
-
-
www.opb.org
-
In a press release distributed Saturday afternoon, Portland police said its officers did not intervene to stop the fighting because those involved “willingly” engaged, its forces were stretched too thin from policing 80+ nights of protests, and the bureau didn’t feel the clashes would last that long.
beginning of the breakdown
-
“Anyone who is involved in criminal behavior is subject to arrest and/or citation. Criminal conduct may also subject you to the use of force, including, but not limited to, crowd control agents and impact weapons. Stop participating in criminal behavior,” Portland police officials tweeted.
law enforcement intervention
-
-
social-media-ethics-automation.github.io
-
As you can see in the apple example, any time we turn something into data, we are making a simplification.1 If we are counting the number of something, like apples, we are deciding that each one is equivalent. If we are writing down what someone said, we are losing their tone of voice, accent, etc. If we are taking a photograph, it is only from one perspective, etc. Different simplifications are useful for different tasks. Any given simplification will be helpful for some tasks and be unhelpful for others. See also, this saying in statistics: All models are wrong, but some are useful
The article's apple example ignores the variations in each apple, such as size, color, and quality, by simply counting the quantity of apples. Similar to this, when you record a conversation, the emotional details like tone and intonation are lost even though the text material is recorded. Moreover, taking a picture can only depict a portion of the scene; it cannot depict the entire scene. Every simplification technique has its limitations, but the effectiveness of each technique is determined by how well it can deliver relevant information for a given task in a given situation.
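(A small illustrative Python sketch of the apple example. The data and field names are made up; the point is that each representation keeps some information and throws the rest away.)

```python
# The same apples at two levels of simplification. Counting treats every apple
# as equivalent; keeping records preserves colour and size but still drops
# everything else (smell, bruises, exact weight, ...).
apples = [
    {"colour": "red",   "size_cm": 7.2},
    {"colour": "green", "size_cm": 6.8},
    {"colour": "red",   "size_cm": 8.1},
]

count = len(apples)                                # simplification 1: just a number
reds = sum(a["colour"] == "red" for a in apples)   # simplification 2: keeps one attribute

print(count, reds)   # 3 2 - useful for some tasks, useless for others
```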
-
-
social-media-ethics-automation.github.io
-
What actions would you want one or two steps away?
I think that any sort of contribution that users make to the platform that is visible to other users should be at least one or two steps away. This is important because it is a HUGE deterrent for bots and malicious content if their actions are not immediately reflected on the platform and this would reduce TOS-breaking activity on the platform.
-
What actions would you not allow users to do
I would never allow users to alter someone else's profile, including removing any of their followers or posts, since that would defeat the point of social media. Additionally, I think that not even the company that runs a social media platform should be able to do that, since it restricts the freedom of the platform's users.
-
-
doc-04-1g-prod-01-apps-viewer.googleusercontent.com
-
"correct" or a metaphysically "true" meaning.
not one "true" concrete meaning, just the theoretically formulated one or the ones imbued by the actors
-
Hence, the definition does not specify whether the relation of the actors is co-operative or the opposite
oriented towards others- doesn't need to be co-operative
-
It would be very unusual to find concrete cases of action, especially of social action, which were oriented only in one or another of these ways. Furthermore, this classification of the modes of orientation of action is in no sense meant to exhaust the possibilities of the field, but only to formulate in conceptually pure form certain sociologically important types to which actual action is more or less closely approximated or, in much the more common case, which constitute its elements.
most cases not concrete - a combination of the above orientations
-
him
"irrational" value driven values that are not in the individuals best interest
-
clearly self-conscious formulation of the ultimate values governing the action and the consistently planned orientation of its detailed course to these values
the pre-plannedness and consciousness of value-rational action distinguishes it from the affectual type
-
these expectations are used as "conditions" or "means" for the attainment of the actor's own rationally pursued and calculated ends
first type of social action- expectations of behavior from environment and other people- rational for what someone wants
-
But conceptually it is essential to distinguish them, even though merely reactive imitation may well have a degree of sociological importance at least equal to that of the type which can be called social action in the strict sense.
need to distinguish meaningful orientation from mere influences, even though it's hard to figure out what the true social action is
-
both the orientation to the behavior of others and the meaning which can be imputed by the actor himself, are by no means always capable of clear determination and are often altogether unconscious and seldom fully self-conscious.
who it's for and how the actor articulates why they do something is not wholly conscious, and often isn't conscious at all.
-
both
if an individual is replicating an action for the purpose of social orientation (fashion trends for status), it is meaningful social action.
-
found to employ some apparently useful procedure which he learned from someone else does not, however, constitute, in the present sense, social action. Action such as this is not oriented to the action of the other person, but the actor has, through observing the other, become acquainted with certain objective facts; and it is these to which his action is oriented
copying of others behavior as useful means to an end isn't inherently social
-
In such cases as that of the influence of the demagogue, there may be a wide variation in the extent to which his mass clientele is affected by a meaningful reaction to the fact of its large numbers; and whatever this relation may be, it is open to varying interpretations
actions within crowds are not considered at a high level of meaning, but where they do have implications there are many possible interpretations.
-
Others become more difficult under these conditions. Hence it is possible that a particular event or mode of human behavior can give rise to the most diverse kinds of feeling - gaiety, anger, enthusiasm, despair, and passions of all sorts - in a crowd situation which would not occur at all or not nearly so readily if the individual were alone.
sometimes people experience something that can only be experienced in a crowd - they cannot achieve the same thing alone
-
action conditioned by crowd
action conditioned by crowds is "crowd psychology"; it differs from the case of many people doing the same thing because they are each being influenced by the same thing
-
The economic activity of an individual is social only if it takes account of the behavior of someone else. Thus very generally it becomes social insofar as the actor assumes that others will respect his actual control over economic goods.
well isn't everything social economically then? DING DING DING DURKHEIM
-
which includes both failure to act and passive acquiescence, may be oriented to the past, present, or expected future behavior of others
cool
-
But the difficulty need not prevent the sociologist from systematizing his concepts by the classification of possible types of subjective meaning. That is, he may reason as if action actually proceeded on the basis of clearly self-conscious meaning. The resulting deviation from the concrete facts must continually be kept in mind whenever it is a question of this level of concreteness, and must be carefully studied with reference both to degree and kind
people's lack of consciousness of their meaning doesn't mean it should be taken less seriously as a motive.
-
The theoretical concepts of sociology are ideal types not only from the objective point of view, but also in their application to subjective processes. In the great majority of cases actual action goes on in a state of inarticulate half-consciousness or actual unconsciousness of its subjective meaning. The actor is more likely to "be aware" of it in a vague sense than he is to "know" what he is doing or be explicitly self-conscious about it. In most cases his action is governed by impulse or habit.
the theoretical concepts are also idealized in assuming the actor "knows" why they do something
-
First, in analysing the extent to which in the concrete case, or on the average for a class of cases, the action was in part economically determined along with the other factors. Secondly, by throwing the discrepancy between the actual course of events and the ideal type into relief, the analysis of the non-economic motives actually involved is facilitated.
idealized "averages" are used to identify and measure the impact of varying factors
-
For sociology the motives which determine it are qualitatively heterogeneous.
everything has qualitatively different aspects in sociology and history- hard to find "average"
-
But when reference is made to "typical" cases, the term should always be understood, unless otherwise stated, as meaning ideal types, which may in turn be rational or irrational as the case may be (thus in economic theory they are always rational), but in any case are always constructed with a view to adequacy on the level of meaning
Ideal types- theoretically "pure" situations, are used to conceptualize certain concepts and identify similar instances. They have completely logical causal explanations and adequate levels of meaning but are likely hypothetical. This doesn't stop them from being very helpful
-
But sociological investigation attempts to include in its scope various irrational phenomena, such as prophetic, mystic, and affectual modes of action, formulated in terms of theoretical concepts which are adequate on the level of meaning. In all cases, rational or irrational, sociological analysis both abstracts from reality and at the same time helps us to understand it, in that it shows with what degree of approximation a concrete historical phenomenon can be subsumed under one or more of these concepts
sociological investigation attempts to include all types of phenomena
-
Similarly the rational deliberation of an actor as to whether the results of a given proposed course of action will or will not promote certain specific interests, and the corresponding decision, do not become one bit more understandable by taking "psychological" considerations into account. But it is precisely on the basis of such rational assumptions that most of the laws of sociology, including those of economics, are built up.
not everything non-physical or non-mathematical is "psychic"
-
What motives determine and lead the individual members and participants in this socialistic community to behave in such a way that the community came into being in the first place and that it continues to exist? Any form of functional analysis which proceeds from the whole to the parts can accomplish only a preliminary preparation for this investigation - a preparation, the utility and indispensability of which, if properly carried out, is naturally beyond question
individual as unit of analysis from which we start the empirical investigation - considering the whole is just a starting point - not necessarily non-Durkheimian?
-
We can accomplish something which is never attainable in the natural sciences, namely the subjective understanding of the action of the component individuals. The natural sciences on the other hand cannot do this, being limited to the formulation of causal uniformities in objects and events and the explanation of individual facts by applying them. We do not "understand" the behavior of cells, but can only observe the relevant functional relationships and generalize on the basis of these observations.
As sociologists, we're obligated to go beyond observation-based, functional understandings to understandings of why, given our access to individual reasoning
-
For purposes of sociological analysis two things can be said. First, this functional frame of reference is convenient for purposes of practical illustration and for provisional orientation. In these respects it is not only useful but indispensable. But at the same time if its cognitive value is overestimated and its concepts illegitimately "reified," it can be highly dangerous. Secondly, in certain circumstances this is the only available way of determining just what processes of social action it is important to understand in order to explain a given phenomenon
Looking at the whole is practical for illustrating social action and is sometimes the only way of looking at it
-
-
learn.cantrill.io
-
Welcome back.
Over the next few lessons and the wider course, we'll be covering storage a lot.
And the exam expects you to know the appropriate type of storage to pick for a given situation.
So before we move on to the AWS specific storage lessons, I wanted to quickly do a refresher.
So let's get started.
Let's start by covering some key storage terms.
First is direct attached or local attached storage.
This is storage, so physical disks, which are connected directly to a device, so a laptop or a server.
In the context of EC2, this storage is directly connected to the EC2 hosts and it's called the instance store.
Directly attached storage is generally super fast because it's directly attached to the hardware, but it suffers from a number of problems.
If the disk fails, the storage can be lost.
If the hardware fails, the storage can be lost.
If an EC2 instance moves between hosts, the storage can be lost.
The alternative is network attached storage, which is where volumes are created and attached to a device over the network.
In on-premises environments, this uses protocols such as iSCSI or Fibre Channel.
In AWS, it uses a product called Elastic Block Store, known as EBS.
Network storage is generally highly resilient and is separate from the instance hardware, so the storage can survive issues which impact the EC2 host.
The next term is ephemeral storage and this is just temporary storage, storage which doesn't exist long-term, storage that you can't rely on to be persistent.
And persistent storage is the next point, storage which exists as its own thing.
It lives on past the lifetime of the device that it's attached to, in this case, EC2 instances.
So an example of ephemeral storage, so temporary storage, is the instance store, so the physical storage that's attached to an EC2 host.
This is ephemeral storage.
You can't rely on it, it's not persistent.
An example of persistent storage in AWS is the network attached storage delivered by EBS.
Remember that, it's important for the exam.
You will get questions testing your knowledge of which types of storage are ephemeral and persistent.
Okay, next I want to quickly step through the three main categories of storage available within AWS.
The category of storage defines how the storage is presented either to you or to a server and also what it can be used for.
Now the first type is block storage.
With block storage, you create a volume, for example, inside EBS and the red object on the right is a volume of block storage and a volume of block storage has a number of addressable blocks, the cubes with the hash symbol.
It could be a small number of blocks or a huge number, that depends on the size of the volume, but there's no structure beyond that.
Block storage is just a collection of addressable blocks presented either logically as a volume or as a blank physical hard drive.
Generally when you present a unit of block storage to a server, so a physical disk or a volume, on top of this, the operating system creates a file system.
So it takes the raw block storage, it creates a file system on top of this, for example, NTFS or EXT3 or many other different types of file systems and then it mounts that, either as a C drive in Windows operating systems or the root volume in Linux.
Now block storage comes in the form of spinning hard disks or SSDs, so physical media that's block storage or delivered as a logical volume, which is itself backed by different types of physical storage, so hard disks or SSDs.
In the physical world, network attached storage systems or storage area network systems provide block storage over the network and a simple hard disk in a server is an example of physical block storage.
The key thing is that block storage has no inbuilt structure, it's just a collection of uniquely addressable blocks.
It's up to the operating system to create a file system and then to mount that file system and that can be used by the operating system.
So with block storage in AWS, you can mount a block storage volume, so you can mount an EBS volume and you can also boot off an EBS volume.
So most EC2 instances use an EBS volume as their boot volume and that's what stores the operating system, and that's what's used to boot the instance and start up that operating system.
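(As an aside, not from the lesson: a minimal boto3 sketch of block storage in practice, creating a gp3 EBS volume and attaching it to an instance. The region, Availability Zone and instance ID are placeholders; the operating system still has to create and mount a file system on the volume before it can be used.)

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a 10 GiB gp3 volume in a specific Availability Zone.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=10,
    VolumeType="gp3",
)

# Wait until the volume is available, then attach it to an instance
# as a secondary block device.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",   # hypothetical instance ID
    Device="/dev/sdf",
)
```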
Now next up, we've got file storage and file storage in the on-premises world is provided by a file server.
It's provided as a ready-made file system with a structure that's already there.
So you can take a file system, you can browse to it, you can create folders and you can store files on there.
You access the files by knowing the folder structure, so traversing that structure, locating the file and requesting that file.
You cannot boot from file storage because the operating system doesn't have low-level access to the storage.
Instead of accessing tiny blocks and being able to create your own file system as the OS wants to, with file storage, you're given access to a file system normally over the network by another product.
So file storage in some cases can be mounted, but it cannot be used for booting.
So inside AWS, there are a number of file storage or file system-style products.
And in a lot of cases, these can be mounted into the file system of an operating system, but they can't be used to boot.
Now lastly, we have object storage and this is a very abstract system where you just store objects.
There is no structure, it's just a flat collection of objects.
And an object can be anything, it can have attached metadata, but to retrieve an object, you generally provide a key and in return for providing the key and requesting to get that object, you're provided with that object's value, which is the data back in return.
And objects can be anything: they can be binary data, they can be images, they can be movies, they can be cat pictures, like the one in the middle here that we've got of Whiskers.
They can be any data really that's stored inside an object.
The key thing about object storage though is it is just flat storage.
It's flat, it doesn't have a structure.
You just have a container.
In AWS's case, it's S3 and inside that S3 bucket, you have objects.
But the benefits of object storage is that it's super scalable.
It can be accessed by thousands or millions of people simultaneously, but it's generally not mountable inside a file system and it's definitely not bootable.
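(Again as an aside, not from the lesson: a minimal boto3 sketch of the key/value nature of object storage, using S3. The bucket name, key and file are made up for illustration.)

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-cat-pictures-bucket"   # hypothetical bucket name

# Store an object: a flat key, no real folder structure behind it.
with open("whiskers.jpg", "rb") as f:
    s3.put_object(Bucket=bucket, Key="cats/whiskers.jpg", Body=f)

# Retrieve it by key: you supply the key, you get the value (the data) back.
data = s3.get_object(Bucket=bucket, Key="cats/whiskers.jpg")["Body"].read()
print(len(data), "bytes retrieved")
```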
So that's really important, you understand the differences between these three main types of storage.
So generally in the on-premises world and in AWS, if you want to utilize storage to boot from, it will be block storage.
If you want to utilize high performance storage inside an operating system, it will also be block storage.
If you want to share a file system across multiple different servers or clients or have them accessed by different services, that can often be file storage.
If you want large access to read and write object data at scale.
So if you're making a web scale application, you're storing the biggest collection of cat pictures in the world, that is ideal for object storage because it is almost infinitely scalable.
Now let's talk about storage performance.
There are three terms which you'll see when anyone's referring to storage performance.
There's the IO or block size, the input output operations per second, pronounced IOPS, and then the throughput.
So the amount of data that can be transferred in a given second, generally expressed in megabytes per second.
Now these things cannot exist in isolation.
You can think of IOPS as the speed at which the engine of a race car runs at, the revolutions per second.
You can think of the IO or block size as the size of the wheels of the race car.
And then you can think of the throughput as the end speed of the race car.
So the engine of a race car spins at a certain number of revolutions; you might have a transmission that affects that slightly, but that power is delivered to the wheels, and based on their size, that causes you to go at a certain speed.
In theory in isolation, if you increase the size of the wheels or increase the revolutions of the engine, you would go faster.
For storage and the analogy I just provided, they're all related to each other.
The possible throughput a storage system can achieve is the IO or the block size multiplied by the IOPS.
As we talk about these three performance aspects, keep in mind that a physical storage device, a hard disk or an SSD, isn't the only thing involved in that chain of storage.
When you're reading or writing data, it starts with the application, then the operating system, then the storage subsystem, then the transport mechanism to get the data to the disk, the network or the local storage bus, such as SATA, and then the storage interface on the drive, the drive itself and the technology that the drive uses.
There are all components of that chain.
Any point in that chain can be a limiting factor and it's the lowest common denominator of that entire chain that controls the final performance.
Now IO or block size is the size of the blocks of data that you're writing to disk.
It's expressed in kilobytes or megabytes and it can range from pretty small sizes to pretty large sizes.
An application can choose to write or read data of any size and it will either take the block size as a minimum or that data can be split up over multiple blocks as it's written to disk.
If your storage block size is 16 kilobytes and you write 64 kilobytes of data, it will use four blocks.
Now IOPS measures the number of IO operations the storage system can support in a second.
So how many reads or writes that a disk or a storage system can accommodate in a second?
Using the car analogy, it's the revolutions per second that the engine can generate given its default wheel size.
Now certain media types are better at delivering high IOPS versus other media types and certain media types are better at delivering high throughput versus other media types.
If you use network storage versus local storage, the network can also impact how many IOPS can be delivered.
Higher latency between a device that uses network storage and the storage itself can massively impact how many operations you can do in a given second.
Now throughput is the rate at which a storage system can transfer data to or from a particular piece of storage, either a physical disk or a volume.
Generally this is expressed in megabytes per second and it's related to the IO block size and the IOPS but it could have a limit of its own.
If you have a storage system which can store data using 16 kilobyte block sizes and if it can deliver 100 IOPS at that block size, then it can deliver a throughput of 1.6 megabytes per second.
If your application only stores data in four kilobyte chunks and the 100 IOPS is a maximum, then that means you can only achieve 400 kilobytes a second of throughput.
Achieving the maximum throughput relies on you using the right block size for that storage vendor and then maximizing the number of IOPS that you pump into that storage system.
So all of these things are related.
If you want to maximize your throughput, you need to use the right block size and then maximize the IOPS.
And if any of these three is limited, it can impact the other two.
With the example on screen, if you were to change the 16 kilobyte block size to one meg, it might seem logical that you can now achieve 100 megabytes per second.
So one megabyte times 100 IOPS in a second, 100 megabytes a second, but that's not always how it works.
A system might have a throughput cap, for example, or as you increase the block size, the IOPS that you can achieve might decrease.
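(A small sketch, not from the lesson, of the arithmetic just described: blocks used per write, and throughput as block size times IOPS with an optional cap. All numbers are illustrative.)

```python
import math

def blocks_needed(write_kb, block_size_kb):
    # e.g. a 64 KB write with a 16 KB block size uses 4 blocks
    return math.ceil(write_kb / block_size_kb)

def throughput_mb_per_s(block_size_kb, iops, cap_mb_per_s=None):
    # Decimal MB/s to match the lesson's 16 KB x 100 IOPS = 1.6 MB/s example.
    raw = block_size_kb * iops / 1000
    return min(raw, cap_mb_per_s) if cap_mb_per_s is not None else raw

print(blocks_needed(64, 16))                              # 4
print(throughput_mb_per_s(16, 100))                       # 1.6 MB/s
print(throughput_mb_per_s(4, 100))                        # 0.4 MB/s (400 KB/s)
print(throughput_mb_per_s(1024, 100, cap_mb_per_s=50))    # capped at 50, not 100+ MB/s
```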
As we talk about the different AWS types of storage, you'll become much more familiar with all of these different values and how they relate to each other.
So you'll start to understand the maximum IOPS and the maximum throughput levels that different types of storage in AWS can deliver.
And you might face exam questions where you need to answer what type of storage you will pick for a given level of performance demands.
So it's really important as we go through the next few lessons that you pay attention to these key levels that I'll highlight.
It might be, for example, that a certain type of storage can only achieve 1000 IOPS or 64000 IOPS.
Or it might be that certain types of storage cap at certain levels of throughput.
And you need to know those values for the exam so that you can know when to use a certain type of storage.
Now, this is a lot of theory and I'm talking in the abstract and I'm mindful that I don't want to make this boring and it probably won't sink in and you won't start to understand it until we focus on some AWS specifics.
So I am going to end this lesson here.
I wanted to give you the foundational understanding, but over the next few lessons, you'll start to be exposed to the different types of storage available in AWS.
And you will start to paint a picture of when to pick particular types of storage versus others.
So with that being said, that's everything I wanted to cover.
I know this has been abstract, but it will be useful if you do the rest of the lessons in this section.
I promise you this is going to be really valuable for the exam.
So thanks for watching.
Go ahead and complete the video.
When you're ready, you can join me in the next.
-
-
learn.cantrill.io
-
Welcome back and in this brief demo lesson I want to give you some experience of working with both EC2 instance connect as well as connecting with a local SSH client.
Now these are both methods which are used for connecting to EC2 instances both with public IP version 4 addressing and IP version 6 addressing.
Now to get started we're going to need some infrastructure so make sure that you're logged in as the IAM admin user into the general AWS account which is the management account of the organization and as always you'll need the northern Virginia region selected.
Now in this demonstration you are going to be connecting to an EC2 instance using both instance connect and a local SSH client and to use a local SSH client you need a key pair.
So to create that let's move across to the EC2 console, scroll down on the left and select key pairs.
Now you might already have key pairs created from earlier in the course.
If you have one created which is called A4L which stands for Animals for Life then that's fine.
If you don't we're going to go ahead and create that one.
So click on create key pair and then under name we're going to use A4L.
Now if you're using Windows 10 or Mac OS or Linux then you can select the PEM file format.
If you're using Windows 8 or prior then you might need to use the putty application and to do that you need to select PPK.
But for this demonstration I'm going to assume that you're using the PEM format.
So again this is valid on Linux, Mac OS or any recent versions of Microsoft Windows.
So select PEM and then click on create key pair and when you do it's going to present you with a download.
It's going to want you to save this key pair to your local machine so go ahead and do that.
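(As an aside: the demo creates the key pair through the console, but roughly the same thing can be done with boto3, assuming credentials for the same account and region are configured.)

```python
import os
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the key pair; the private key material is only returned once.
key = ec2.create_key_pair(KeyName="A4L")

# Save it locally - this is the file the SSH client uses later in the demo -
# and restrict its permissions up front (the chmod 400 step covered below).
with open("A4L.pem", "w") as f:
    f.write(key["KeyMaterial"])
os.chmod("A4L.pem", 0o400)
```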
Once you've done that from the AWS console attached to this lesson is a one-click deployment link.
So I want you to go ahead and click that link.
That's going to move you to a quick create stack screen.
Everything should be pre-populated.
The stack name should be EC2 instance connect versus SSH.
The key name box should already be pre-populated with A4L which is a key that you just created or one which you already had.
Just move down to the very bottom, check the capabilities box and then click on create stack.
Now you're going to need this to be in a create complete state before you continue with the demo lesson.
So pause the video, wait for your stack to change to create complete and then you're good to continue.
Okay so this stacks now in a create complete status and we're good to continue.
Now if we click on the resources tab you'll see that this has created the standard animals for life VPC and then it's also created a public EC2 instance.
So this is an EC2 instance with a public IP version 4 address that we can use to connect to.
So that's what we're going to do.
So click on services and then select EC2 to move to the EC2 console.
Once you're there click on instances running and you should have a single EC2 instance A4L-publicEC2.
Now the two different ways which I want to demonstrate connecting to this instance in this demo lesson are using a local SSH client and key based authentication and then using the EC2 instance connect method.
And I want to show you how those differ and give you a few hints and tips which might come in useful for production usage and for the exams.
So if we just go ahead and select this instance and then click on the security tab you'll see that we have this single security group which is associated to this instance.
Now make sure the inbound rules is expanded and just have a look at what network traffic is allowed by this security group.
So the first line allows port 80 TCP which is HTTP and it allows that to connect to the instance from any source IP address specifically IP version 4.
We can tell it's IP version 4 because it's 0.0.0.0/0 which represents any IP version 4 address.
Next we allow port 22 using TCP and again using the IP version 4 any IP match and this is the entry which allows SSH to connect into this instance using IP version 4.
And then lastly we have a corresponding line which allows SSH using IP version 6.
So we're allowing any IP address to connect using SSH to this EC2 instance.
And so connecting to it using SSH is relatively simple.
We can right click on this instance and select connect and then choose SSH client and AWS provides us with all of the relevant information.
Now note how under step number three we have this line which is chmod space 400 space a4l.pem.
I want to demonstrate what happens if we attempt to connect without changing the permissions on this key file.
So to do that right at the bottom is an example command to connect to this instance.
So just copy that into your clipboard.
Then I want you to move to your command prompt or terminal.
In my case I'm running macOS so I'm using a terminal application.
Then you'll need to move to the folder where you have the PEM file stored or where you just downloaded it in one of the previous steps.
I'm going to paste in that command which I just copied onto my clipboard.
This is going to use the a4l.pem file as the identity information and then it's going to connect to the instance using the ec2-user local Linux user.
And this is the host name that it's going to connect to.
So this is my EC2 instance.
Now I'm going to press enter and attempt that connection.
First it will ask me to verify the authenticity of this server.
So this is an added security method.
This is getting the fingerprint of this EC2 instance.
And it means that if we independently have a copy of this fingerprint, say from the administrator of the server that we're connecting to, then we can verify that we're connecting to that same server.
Because it's possible that somebody could exploit DNS and replace a legitimate DNS name with one which points at a non-legitimate server.
So that's important.
You can't always rely on a DNS name.
DNS names can be adjusted to point at different IP addresses.
So this fingerprint is a method that you can use to verify that you're actually connecting to the machine or the instance which you think you are.
Now in this case, because we've just created this EC2 instance, we can be relatively certain that it is valid.
So we're just going to go ahead and type yes and press enter.
And then it will try to connect to this instance.
Now immediately in my case, I got an error.
And this error is going to be similar if you're using macOS or Linux.
If you're using Windows, then there is a chance that you will get this error or won't.
And if you do get it, it might look slightly different.
But look for the keyword of permissions.
If you see that you have a permissions problem with your key, then that's the same error as I'm showing on my screen now.
Basically what this means is that the SSH client likes it when the permissions on these keys are restricted, restricted to only the user that they belong to.
Now in my case, the permissions on this file are 644.
And this represents my user, my group, and then everybody.
So this means this key is accessible to other users on my local system.
And that's far too open to be safe when using local SSH.
Now in Windows, you might have a similar situation where other users of your local machine have read permissions on this file.
What this error is telling us to do is to correct those permissions.
So if we go back to the AWS console, this is the command that we need to run to correct those permissions.
So copy that into your clipboard, move back to your terminal, paste that in, and press enter.
And that will correct those permissions.
Now under Windows, the process is that you need to edit the permissions of that file.
So right click properties and then edit the security.
And you need to remove any user access to that file other than your local user.
And that's the same process that we've just done here, only in Windows it's GUI based.
And under Mac OS or Linux, you use chmod.
So now that we've adjusted those permissions, if I use the up arrow to go back to the previous command and press enter, I'm able to connect to the EC2 instance.
And that's using the SSH client.
To use the SSH client, you need to have network connectivity to the EC2 instance.
And you need to have a valid SSH key pair.
So you need the key stored on your local machine.
Now this can present scalability issues because if you need to have a large team having access to this instance, then everybody in that team needs a copy of this key.
And so that does present admin problems if you're doing it at scale.
Now in addition to this, because you're connecting using an SSH client from your local machine, you need to make sure that the security group of this instance allows connections from your local machines.
So in this case, it allows connections from any source IP address into this instance.
And so that's valid for my IP address.
You need to make sure that the security group on whichever instance you're attempting to connect to allows your IP address as a minimum.
Now another method that you can use to connect to EC2 is EC2 instance connect.
Now to use that, we right click, we select connect, and we have a number of options at the top.
One of these is the SSH client that we've just used.
Another one is EC2 instance connect.
So if we select this option, we're able to connect to this instance.
It shows us the instance ID, it shows us the public IP address, and it shows us the user to connect into this instance with.
Now AWS attempt to automatically determine the correct user to use.
So when you launch an instance using one of the default AMIs, then it tends to pick correctly.
However, if you generate your own custom AMI, it often doesn't guess correctly.
And so you need to make sure that you're using the correct username when connecting using this method.
But once you've got the correct username, you can just go ahead and click on connect, and then it will open a connection to that instance using your web browser.
It'll take a few moments to connect, but once it has connected, you'll be placed at the terminal of this EC2 instance in exactly the same way as you were when using your local SSH.
Now one difference you might have noticed is at no point where you prompted to provide a key.
When you're using EC2 instance connect, you're using AWS permissions to connect into this instance.
So because we're logged in using an admin user, we have those permissions, but you do need relevant permissions added to the identity of whoever is using instance connect to be able to connect into the instance.
So this is managed using identity policies on the user, the group or the role, which is attempting to access this instance.
Now one important element of this, which I want to demonstrate, if we go back to instances and we select the instance, click on security, and then click on the security group, which is associated with this instance.
Scroll down, click on edit inbound rules, and then I want you to locate the inbound rule for IP version 4 SSH, SSH TCP 22, and then it's using this catchall, so 0.0.0.0/0, which represents any IP version 4 address.
So go ahead and click on the cross to remove that, and then on that same line in the source area, click on this drop down and change it to my IP.
So this is my IP address, yours will be different, but then we're going to go ahead and save that rule.
Now just close down the tab that you've got connected to instance connect, move back to the terminal, and type exit to disconnect from that instance, and then just rerun the previous command.
So connect back to that instance using your local SSH client.
You'll find that it does reconnect because logically enough, this connection is coming from your local IP address, and you've changed the security group to allow connections from that address, so it makes sense that this connection still works.
Moving back to the console though, let's go to the EC2 dashboard, go to running instances, right click on this instance, go to connect, select EC2 instance connect, and then click on connect and just observe what happens.
Now you might have spent a few minutes waiting for this to connect, and you'll note that it doesn't connect.
Now this might seem strange at this point because you're connecting from a web browser, which is running on your local machine.
So it makes sense that if you can connect from your local SSH client, which is also running on your local machine, you should be able to connect using EC2 instance connect.
Now this might seem logical, but the crucial thing about EC2 instance connect is that it's not actually originating connections from your local machine.
What's happening is that you're making a connection through to AWS, and then once your connection arrives at AWS, the EC2 instance connect service is then connecting to the EC2 instance.
Now what you've just done is you've edited the security group of this instance to only allow your local IP address to connect, and this means that the EC2 instance connect service can no longer connect to this instance.
So what you need in order to allow the EC2 instance connect service to work is you either need to allow every source IP address, so 0.0.0.0/0, but of course that's bad practice for production usage.
It's much more secure if you go to this URL, and I'll make sure that I include this attached to this lesson.
This is a list of all of the different IP ranges which AWS use for their services.
Now because I have this open in Firefox, it might look a little bit different.
If I just go to raw data, that might look the same as your browser.
If you're using Firefox, you have the ability to open this as a JSON document.
Both of them show the same data, but when it's JSON, you have the ability to collapse these individual components.
But the main point about this document is that this contains a list of all of the different IP addresses which are used in each different region for each different service.
So if we wanted to allow EC2 instance connect for a particular region, then we might search for instance, locate any of these items which have EC2 instance connect as the service, and then just move through them looking for the one which matches the region that we're using.
Now in my case, I'm using US East One, so I'd scroll through all of these IP address ranges looking for US East One.
There we go, I've located it.
It's using this IP address range.
So I might copy this into my clipboard, move back to the EC2 console, select the instance, click on security, select the security group of this instance, scroll down, edit the inbound rules, remove the entry for my IP address, paste in the entry for the EC2 instance connect service, and then save that rule.
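(As an aside, the manual lookup just described can be scripted. A rough Python sketch, assuming the service label in ip-ranges.json is EC2_INSTANCE_CONNECT and using a placeholder security group ID.)

```python
import json
import urllib.request
import boto3

# Download AWS's published IP ranges and pick out the EC2 Instance Connect
# prefixes for the region in use, then allow them in the security group.
url = "https://ip-ranges.amazonaws.com/ip-ranges.json"
ranges = json.load(urllib.request.urlopen(url))

prefixes = [
    p["ip_prefix"]
    for p in ranges["prefixes"]
    if p["service"] == "EC2_INSTANCE_CONNECT" and p["region"] == "us-east-1"
]

ec2 = boto3.client("ec2", region_name="us-east-1")
for prefix in prefixes:
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",   # hypothetical security group ID
        IpProtocol="tcp",
        FromPort=22,
        ToPort=22,
        CidrIp=prefix,
    )
```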
And now what you'll find if you move back to your terminal and try to interact with this instance, you might be able to initially because the connection is still established, but if you exit and then attempt to reconnect, this time you'll see that you won't be able to connect because now your local IP address is no longer allowed to connect to this instance.
However, if you move back to the AWS console, go to the dashboard and then instance is running, right click on the instance and put connect, select instance connect and then click on connect.
Now you'll be allowed to connect using EC2 instance connect.
And the reason for that just to reiterate is that you've just edited the security group of this EC2 instance and you've allowed the IP address range of the EC2 instance connect service.
So now you can connect to this instance and you could do so at scale using AWS permissions.
So I just wanted to demonstrate how both of those connection methods work, both instance connect and using a local SSH client.
That's everything I wanted to cover.
So just go ahead and move back to the CloudFormation console, select this stack that you created using the one click deployment, click on delete and then confirm that process.
And that will clear up all of the infrastructure that you've used in this demo lesson.
At this point though, that's everything I wanted to cover.
So go ahead, complete this video and when you're ready, I'll look forward to you joining me in the next.
-
-
klik.gr
-
Egypt also closed its border with Gaza
Half true.
Egypt usually closed the border with Gaza either after pressure from the US and Israel, or after armed attacks by Hamas. From 2021 Egypt had kept it open permanently.
But control over what enters was always exercised by the Israelis (in agreement with Egypt), and they ordered the closures, applying the Camp David agreements (1979):
The Philadelphi Accord between Israel and Egypt, based on the principles of the 1979 peace treaty, turned over border control to Egypt, while the supply of arms to the Palestinian Authority was subject to Israeli consent.
Under the Agreed Principles for Rafah Crossing, part of the Agreement on Movement and Access (AMA) of 15 November 2005, EUBAM was responsible for monitoring the Border Crossing. The agreement ensured Israel authority to dispute entrance by any person.[14]
...after Hamas' takeover of the Gaza Strip (2007) it was closed permanently except for infrequent limited openings by Egypt.
Since 2024 the crossing has been controlled exclusively by Israeli troops, and it is closed even to humanitarian aid.
-
despite the constant provocations from Gaza
Gaza is an open-air prison where imports of various kinds of medicines and everyday materials are banned, as is fishing, with minimal water, minimal food, etc., while onlookers in neighbouring towns or on boats enjoy the genocidal spectacle every time it is bombed ("mowing the grass").
These are not "provocations"; they are resistance to the occupier.
-
Israel was slaughtering the Arabs who lived in its territory. There were isolated clashes, but they were exactly that, isolated.
Ilan Pappe answers just how "isolated" the clashes supposedly were.
-
not only did they have a past
As we've said, the relationship of the Ashkenazi to the Levant is murky:
The origins of early AJ, as well as the history of admixture events that have shaped their gene pool, are subject to debate (Data S1, section 1). Genetic evidence supports a mixed Middle Eastern (ME) and European (EU) ancestry in AJ. This is based on uniparental markers with origins in either region (Behar et al., 2006, 2017; Costa et al., 2013; Hammer et al., 2000, 2009; Nebel et al., 2001)
-
no, the Pelasgians were NOT Greek
An outdated theory:
the Greek etymology of Pelasgian terms mentioned in Herodotus such as θεοί (derived from θέντες) indicates that the "Pelasgians spoke a language at least 'akin to' Greek".
-
-
social-media-ethics-automation.github.io
-
Metadata is information about some data. So we often think about a dataset as consisting of the main pieces of data (whatever those are in a specific situation), and whatever other information we have about that data (metadata)
I think that the importance of metadata and the contextual power it holds is not often recognised. It adds another layer of depth to a post by including background information about the post. In addition, a sense of ownership of the post is included as part of the metadata. From a different perspective, however, it can also be deemed controversial: it is to some extent quite intrusive, since it can expose user location, movements, behavioural insights and timestamps that many users may not approve of.
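(A small sketch of the data/metadata split the passage describes, as a Python dict; the field names and values are made up for illustration.)

```python
# The "data" is the post itself; the "metadata" carries the contextual - and
# potentially intrusive - information: author, timestamp, location, device.
post = {
    "data": {"text": "Just adopted a kitten!"},
    "metadata": {
        "author": "user_123",
        "timestamp": "2024-10-05T14:32:00Z",
        "location": (47.61, -122.33),   # lat/long the user may not realise is attached
        "device": "phone-camera-app",
    },
}

print(post["data"]["text"], "-", post["metadata"]["timestamp"])
```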
-
-
learn.cantrill.io
-
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
Now this is an overview of all of the different categories of instances, and then for each category, the most popular or current generation types that are available.
Now I created this with the hope that it will help you retain this information.
So this is the type of thing that I would generally print out or keep an electronic copy of and refer to constantly as we go through the course.
By doing so, whenever we talk about particular size and type and generation of instance, if you refer to the details in notes column, you'll be able to start making a mental association between the type and then what additional features you get.
So for example, if we look at the general purpose category, we've got three main entries in that category.
We've got the A1 and M6G types, and these are a specific type of instance that are based on ARM processors.
So the A1 uses the AWS designed Graviton ARM processor, and the M6G uses the generation 2, so Graviton 2 ARM based processor.
And using ARM based processors, as long as you've got operating systems and applications that can run under the architecture, they can be very efficient.
So you can use smaller instances with lower cost and achieve really great levels of performance.
The T3 and T3A instance types, they're burstable instances.
So the assumption with those type of instances is that your normal CPU load will be fairly low, and you have an allocation of burst credits that allows you to burst up to higher levels occasionally, but then return to that normally low CPU level.
So this type of instance, T3 and T3A, are really good for machines which have low normal loads with occasional bursts, and they're a lot cheaper than the other type of general purpose instances.
Then we've got M5, M5A and M5N.
So M5 is your starting point, M5A uses the AMD architecture, whereas normal M5s just use Intel, and these are your steady state general instances.
So if you don't have a burst requirement, if you're running a certain type of application server, which requires consistent steady state CPU, then you might use the M5 type.
So maybe a heavily used exchange email server that runs normally at 60% CPU utilization, that might be a good candidate for M5.
But if you've got a domain controller or an email relay server that normally runs maybe at 2%, 3% with occasional burst, up to 20% or 30% or 40%, then you might want to run a T type instance.
Now, not to go through all of these in detail, we've got the compute optimized category with the C5 and C5N, and they're used for media encoding, scientific modeling, gaming servers, and general machine learning.
For memory optimized, we start off with R5 and R5A.
If you want to use really large in-memory applications, you've got the X1 and the X1E.
If you want the highest memory of all EC2 instances, you've got the high memory series.
You've got the Z1D, which comes with large memory and NVMe storage.
Then accelerated computing, these are the ones that come with these additional capabilities.
So the P3 type and G4 type, those come with different types of GPUs.
So the P type is great for parallel processing and machine learning.
The G type is kind of okay for machine learning and much better for graphics-intensive requirements.
You've got the F1 type, which comes with field programmable gate arrays, which is great for genomics, financial analysis and big data, anything where you want to program the hardware to do specific tasks.
You've got the Inf1 type, which is relatively new, custom designed for machine learning, so recommendation engines, forecasting, analysis, voice conversation, anything machine learning related, look at using that type, and then storage optimized.
So these come with high speed, local storage, and depending on the type you pick, you can get high throughput or maximum IO or somewhere in between.
So keep this somewhere safe, printed out, keep it electronically, and as we go through the course and use the different type of instances, refer to this and start making the mental association between what a category is, what instance types are in that category, and then what benefits they provide.
Now again, don't worry about memorizing all of this in the exam, you don't need it, I'll draw out anything specific that you need as we go through the course, but just try to get a feel for which letters are in which categories.
If that's the minimum that you can do, if I can give you a letter like the T type, or the C type, or the R type, if you can try and understand the mental association which category that goes into, that will be a great step.
And there are ways we can do this, we can make these associations, so C stands for compute, R stands for RAM, which is a way for describing memory, we've got I which stands for IO, D which stands for dense storage, G which stands for GPU, P which stands for parallel processing, there's lots of different mind tricks and mental association that we can do, and as we go through the course, I'll try and help you with that, but as a minimum, either print this out or store it somewhere safe, and refer to it as we go through the course.
The key thing to understand though is how picking an instance type is specific to a particular type of computing scenario.
So if you've got an application that requires maximum CPU, look at compute optimized, if you need memory, look at memory optimized, if you've got a specific type of acceleration, look at accelerated computing, and otherwise start off in the general purpose instance types and then move out from there only when you've got a particular requirement to do so.
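If it helps, here's a tiny Python lookup table along those lines; the letter-to-category mapping below just restates the mnemonics from this lesson and isn't an official or exhaustive AWS list.

# Rough memory aid only: family letter -> category (not an official AWS mapping)
FAMILY_CATEGORY = {
    "T": "general purpose (burstable)",
    "M": "general purpose (steady state)",
    "C": "compute optimized",        # C for Compute
    "R": "memory optimized",         # R for RAM
    "I": "storage optimized",        # I for IO
    "D": "storage optimized",        # D for Dense storage
    "G": "accelerated computing",    # G for GPU
    "P": "accelerated computing",    # P for Parallel processing
}

def category_for(instance_type: str) -> str:
    """Guess the category from the family letter of a type like 'r5.large'."""
    return FAMILY_CATEGORY.get(instance_type[0].upper(), "unknown - check the docs")

print(category_for("c5.large"))    # compute optimized
print(category_for("r5a.xlarge"))  # memory optimized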
Now before we finish up, I did want to demonstrate two really useful sites that I refer to constantly, I'll include links to both of these in a lesson text.
The first one is the Amazon documentation site for Amazon EC2 instance types, and this gives you a full overview of all the different categories of EC2 instances.
You can look in a category, a particular family and generation of instance, so T3, and then in there you can see the use cases that this is suited to, any particular features, and then a list of each instance size and exactly what allocation of resources that you get and then any particular notes that you need to be aware of.
So this is definitely something you should refer to constantly, especially if you're selecting instances to use for production usage.
This other website is something similar, it's ec2instances.info, and it provides a really great sortable list which can be filtered and adjusted with different attributes and columns, which give you an overview of exactly what each instance provides.
So you can either search for a particular type of instance, maybe a T3, and then see all the different sizes and capabilities of T3, and as well as that you can see the different costings for those instance types, so Linux on-demand, Linux reserved, Windows on-demand, Windows reserved, and we'll talk about what this reserved column is later in the course.
You can also click on columns and show different data for these different instance types, so if I scroll down you can see which offer EBS optimization, you can see which operating systems these different instances are compatible with, you've got a lot of options to manipulate this data.
I find this to be one of the most useful third-party sites, I always refer back to this when I'm doing any consultancy, so this is a really great site.
And again it will go into the lesson text, so definitely as you're going through the course, experiment and have a play around with this data, and just start to get familiar with the different capabilities of the different types of EC2 instances.
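And if you ever want to pull the same sort of data programmatically rather than from either site, here's a minimal boto3 sketch in Python, assuming the AWS SDK for Python and credentials are set up; the client-side filter on the T3 family is just for illustration.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_instance_types")

# Walk every instance type the region offers and keep only the T3 family.
for page in paginator.paginate():
    for itype in page["InstanceTypes"]:
        name = itype["InstanceType"]
        if not name.startswith("t3."):
            continue
        vcpus = itype["VCpuInfo"]["DefaultVCpus"]
        mem_gib = itype["MemoryInfo"]["SizeInMiB"] / 1024
        print(f"{name}: {vcpus} vCPU, {mem_gib:g} GiB memory")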
With that being said, that's everything I wanted to cover in this lesson. You've done really well, and there's been a lot of theory, but it will come in handy in the exam and in real-world usage.
So go ahead, complete this video, and when you're ready, you can join me in the next.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
“Design justice is a framework for analysis of how design distributes benefits and burdens between various groups of people. Design justice focuses explicitly on the ways that design reproduces and/or challenges the matrix of domination (white supremacy, heteropatriarchy, capitalism, ableism, settler colonialism, and other forms of structural inequality).”
Although I hadn't heard this term before, I believe in this idea wholeheartedly. It's hard to truly understand someone's perspective, and no one understands a perspective better than someone living it. It is important to receive feedback from all different types of people to create the best product possible.
-
-
pressbooks.pub pressbooks.pub
-
Who is the Trustee?
who to trust?
-
in the world which exists for him
in the world that exists for man. ok.
-
all history resolves itself very easily into the biography of a few stout and earnest persons.
few?
-
the soldier should receive his supply of corn, grind it in his hand-mill, and bake his bread himself.
someone still has to supply him the corn, making him rely on someone else for his food, thus breaking self-reliance completely down
-
a true man belongs to no other time or place, but is the centre of things. Where he is, there is nature.
actually, I'm quite intrigued about WHAT man gets to be in the center of everything..... who? and, what would happen if every man chose to be in the spotlight? then what?
-
Shakspeare will never be made by the study of Shakspeare
Then why preserve or analyze Shakespeare 100s of years later if it is not made by its study?
-
a true man belongs to no other time or place, but is the centre of things. Where he is, there is nature.
classsicccccccccc
-
Welcome evermore to gods and men is the self-helping man. For him all doors are flung wide
Is man self helping if he relies on a deity?
-
Consider whether you have satisfied your relations to father, mother, cousin, neighbour, town, cat, and dog; whether any of these can upbraid you.
Contradictory advice, as Emerson, who supported Christianity, is actively telling people not to trust their father and mother, in direct violation of the fifth commandment.
-
If we cannot at once rise to the sanctities of obedience and faith, let us at least resist our temptations
Won't human impulse and genetic desires make this next to impossible?
-
But now we are a mob
Mobs and group thinking are not new to the 1800s; there are past historical examples in places like Greece.
-
‘I think,’ ‘I am,’
Is that not the result of what self-reliance may perform?
-
It seems to be a rule of wisdom never to rely on your memory alone, scarcely even in acts of pure memory
Would it not help then to rely on others to take the load of remembering so much?
-
Meantime nature is not slow to equip us in the prison-uniform of the party to which we adhere.
How do we get freedom from nature then?
-
but the great man is he who in the midst of the crowd keeps with perfect sweetness the independence of solitude.
How can this be beneficial to reject others?
-
There is the man and his virtues.
Assumption that all men are inherently virtuous
-
feminine rage
this might be wrong, but is Emerson using "feminine rage" to further the idea that women are emotional and angry? Like, is that why "feminine" is used here without the context of any woman being involved?
-
truth is handsomer than the affectation of love
Should there not be a balance to avoid a life of misery?
-
“Man is his own star; and the soul that can Render an honest and a perfect man, Commands all light, all influence, all fate; Nothing to him falls early or too late.
I'm confused about what this exactly is saying.
-
but the great man is he who in the midst of the crowd keeps with perfect sweetness the independence of solitude.
regardless of outside noise, the ability to stay content and focus is the key to happiness.
-
Thoughtless people contradict as readily the statement of perceptions as of opinions, or rather much more readily; for, they do not distinguish between perception and notion.
It's important to know the difference between the objective and the subjective!
-
It is easy in the world to live after the world’s opinion; it is easy in solitude to live after our own
the voice of the world can really take a toll on what you believe or wish to do. Once you live by your own wishes and ideas, everything tunes out.
-
What I must do is all that concerns me, not what the people think
minding his own business, expectation that others will do the same.
-
The world has been instructed by its kings, who have so magnetized the eyes of nations. It has been taught by this colossal symbol the mutual reverence that is due from man to man. The joyful loyalty with which men have everywhere suffered the king, the noble, or the great proprietor to walk among them by a law of his own, make his own scale of men and things, and reverse theirs, pay for benefits not with money but with honor, and represent the law in his person, was the hieroglyphic by which they obscurely signified their consciousness of their own right and comeliness, the right of every man.
I wonder what he'd think of people like Elon Musk...I feel like the author would own a Cyber Truck. /j
-
all history resolves itself very easily into the biography of a few stout and earnest persons.
I don't agree with this - I feel like history is a culmination of an incredible amount of people, ideas, thoughts, and movements. I feel like this can also be really exclusive of people who were still important but not put into general historical texts because of not being a cis straight white Christian man?
-
Greatness appeals to the future.
Reminds me of how history is recorded, and who we remember in its pages.
-
Your genuine action will explain itself, and will explain your other genuine actions. Your conformity explains nothing.
Be genuine and hold true to your values, they chart the path.
-
My book should smell of pines and resound with the hum of insects.
I love the vibe of the book, but I'd hate the insects.
-
To be great is to be misunderstood.
Looking at all the banned books!
-
sympathy or the hatred of hundreds
duality with response or being perceived.
-
A boy
what about the girls... and the women? I understand when this was written and published, but it's like there is no trace of women here, I've been waiting for one reference. Not to put words in Emerson's mouth, but can a woman not be "genius"? Is this preaching of following your dreams and being courageous only affiliate with boys and men?
-
the forced smile which we put on in company where we do not feel at ease in answer to conversation which does not interest us.
Ah, the mutual hate of small-talk.
-
Do not think the youth has no force, because he cannot speak to you and me.
Just because they're young doesn't make them unable to be a noble part of society. The youth DOES grow, they're not young forever...
-
I hear a preacher announce for his text and topic the expediency of one of the institutions of his church. Do I not know beforehand that not possibly can he say a new and spontaneous word? Do I not know that, with all this ostentation of examining the grounds of the institution, he will do no such thing?
This feels a little main character-y? People can have different opinions and surprise you, just like you can do the same.
-
The objection to conforming to usages that have become dead to you is, that it scatters your force. It loses your time and blurs the impression of your character.
Remaining strictly neutral on an issue says just as much about you as picking a side does.
-
Trust thyself: every heart vibrates to that iron string. Accept the place the divine providence has found for you,
try. he is sooooo motivational lol
-
God will not have his work made manifest by cowards
the idea that God created everyone for a reason? That everyone has a special reason for existing?
-
Rough and graceless would be such greeting, but truth is handsomer than the affectation of love.
I personally agree with this - I'd rather a blunt truth than something sugarcoated.
-
abide by our spontaneous impression with good-humored inflexibility then most when the whole cry of voices is on the other side.
be yourself, fully.
-
If malice and vanity wear the coat of philanthropy, shall that pass?
This reminds me of the concept of effective altruism!
-
Nothing is at last sacred but the integrity of your own mind.
Phrasing is a little confusing, but I think this means that nothing is more sacred than the integrity of your own mind? If that's correct, I like that sentiment!
-
and spoke not what men but what they thought. A man should learn to detect and watch that gleam of light which flashes across his mind from within, more than the lustre of the firmament of bards and sages
Ne te quaesiveris extra
-
Society everywhere is in conspiracy against the manhood of every one of its members. Society is a joint-stock company, in which the members agree, for the better securing of his bread to each shareholder, to surrender the liberty and culture of the eater.
Rip the author, you would've loved the Joker.
-
trumpets of the Last Judgment.
Biblical reference? I don't know much about religion.
-
The nonchalance of boys who are sure of a dinner,
"Those entitled youngins!"
-
Chaos and the Dark
I like the chaos and the dark! :))
-
It is a deliverance which does not deliver.
Contradictory...I'm forgetting the name of it, but it reminds me of that technique to create a sense of credibility by saying something backwards. Like "He was not smart because he was kind, but he was kind because he was smart." Does that make sense?
-
The eye was placed where one ray should fall, that it might testify of that particular ray.
Is this referring to how our individual perspectives are just one among many?
-
spoke not what men but what they thought. A man should learn to detect and watch that gleam of light which flashes across his mind from within, more than the lustre of the firmament of bards and sages.
Goes into that idea of genius!
-
trumpets of the Last Judgment
Interesting Biblical reference!
-
To believe your own thought, to believe that what is true for you in your private heart is true for all men, — that is genius.
Genius is defined in this as the ability to believe in your own thoughts and stand by your personal convictions? Debatable, but interesting perspective!
-
“Man is his own star; and the soul that can Render an honest and a perfect man, Commands all light, all influence, all fate; Nothing to him falls early or too late.
Why are "Render", "Commands", and "Nothing" capitalized?
-
Ne te quaesiveris extra
Google Translate says it means either "Don't be afraid of anything" in Spanish, or "Don't ask yourself out" in Latin...it's probably Spanish, but the Latin version made me laugh.
-
“Man is his own star; and the soul that can Render an honest and a perfect man, Commands all light, all influence, all fate; Nothing to him falls early or too late.
self-reliance... Things happen the way they're supposed to.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
There are multiple ways of storing dates and times. Some options include a series of numbers (year, month, day, hour, minute, and second), or a string containing all of this information. Sometimes only the date is stored without any time information, and sometimes the time information includes a time zone.
There are different benefits to the different ways of representing time, such as including the time zone in the time information, which is very helpful for people in other countries so that we can be sure of when the tweet was actually sent. For example, if I'm studying in the US and my family sends a tweet from China, not including the time zone would make it impossible to determine exactly which "yesterday" the tweet was sent on.
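For example, in Python (using the standard library's zoneinfo module, available from Python 3.9; the timestamps below are made-up sample values), keeping the time zone with the timestamp makes the "which yesterday?" question answerable:

from datetime import datetime
from zoneinfo import ZoneInfo

# A hypothetical tweet timestamp recorded with its time zone in China.
sent = datetime(2024, 10, 1, 9, 30, tzinfo=ZoneInfo("Asia/Shanghai"))

# The same instant viewed from the US West Coast.
local = sent.astimezone(ZoneInfo("America/Los_Angeles"))

print(sent.isoformat())   # 2024-10-01T09:30:00+08:00
print(local.isoformat())  # 2024-09-30T18:30:00-07:00 -- still "yesterday" locally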
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back.
In this lesson, I'm going to talk about the various different types of EC2 instances.
I've described an EC2 instance before as an operating system plus an allocation of resources.
Well, by selecting an instance type and size, you have granular control over what that resource configuration is, and picking appropriate resource amounts and instance capabilities can mean the difference between a well-performing system and one which causes a bad customer experience.
Don't expect this lesson though to give you all the answers.
Understanding instance types is something which will guide your decision-making process.
Given a situation, two AWS people might select two different instance types for the same implementation.
The key takeaway from this lesson will be that you don't make any bad decisions and you have an awareness of the strengths and weaknesses of the different types of instances.
Now, I've seen this occasionally feature on the exam in a form where you're presented with a performance problem and one answer is to change the instance type.
So, as a minimum with this lesson, I'd like you to be able to answer that type of question.
So, know for example whether a C type instance is better in a certain situation than an M type instance.
If that's what I want to achieve, we've got a lot to get through, so let's get started.
At a really high level, when you choose an EC2 instance type, you're doing so to influence a few different things.
First, logically, the raw amount of resources that you get.
So, that's virtual CPU, memory, local storage capacity and the type of that storage.
But beyond the raw amount, it's also the ratios.
Some type of instances give you more of one and less of the other.
Instance types suited to compute applications, for instance, might give you more CPU and less memory for a given dollar spend.
An instance designed for in-memory caching might be the reverse.
They prioritize memory and give you lots of that for every dollar that you spend.
Picking instance types and sizes, of course, influences the raw amount that you pay per minute.
So, you need to keep that in mind.
I'm going to demonstrate a number of tools that will help you visualize how much something's going to cost, as well as what features you get with it.
So, look at that at the end of the lesson.
The instance type also influences the amount of network bandwidth for storage and data networking capability that you get.
So, this is really important.
When we move on to talking about elastic block store, for example, that's a network-based storage product in AWS.
And so, for certain situations, you might provision volumes with a really high level of performance.
But if you don't select an instance appropriately and pick something that doesn't provide enough storage network bandwidth, then the instance itself will be the limiting factor.
So, you need to make sure you're aware of the different types of performance that you'll get from the different instances.
Picking an instance type also influences the architecture of the hardware that the instance has run on and potentially the vendor.
So, you might be looking at the difference between an ARM architecture or an X86 architecture.
You might be picking an instance type that provides Intel-based CPUs or AMD CPUs.
Instance type selection can influence in a very nuanced and granular way exactly what hardware you get access to.
Picking an appropriate type of instance also influences any additional features and capabilities that you get with that instance.
And this might be things such as GPUs for graphics processing or FPGAs, which are field-programmable gate arrays.
You can think of these as a special type of chip where you can program the hardware to perform exactly how you want.
So, it's a super customizable piece of compute hardware.
And so, certain types of instances come up with these additional capabilities.
So, it might come with an allocation of GPUs or it might come with a certain capacity of FPGAs.
And some instance types don't come with either.
You need to learn which to pick for a given type of workload.
EC2 instances are grouped into five main categories which help you select an instance type based on a certain type of workload.
So we've got five main categories.
The first is general purpose.
And this is and always should be your starting point.
Instances which fall into this category are designed for your default steady-state workloads.
They've got fairly even resource ratios, so generally assigned in an appropriate way.
So, for a given type of workload, you get an appropriate amount of CPU and a certain amount of memory which matches that amount of CPU.
So, instances in the general purpose category should be used as your default and you only move away from that if you've got a specific workload requirement.
We've also got the compute optimized category and instances that are in this category are designed for media processing, high-performance computing, scientific modeling, gaming, machine learning.
And they provide access to the latest high-performance CPUs.
And they generally offer a ratio where more CPU is offered than memory for a given price point.
The memory optimized category is logically the inverse of this, so offering large memory allocations for a given dollar or CPU amount.
This category is ideal for applications which need to work with large in-memory data sets, maybe in-memory caching or some other specific types of database workloads.
The accelerated computing category is where these additional capabilities come into play, such as dedicated GPUs for high-scale parallel processing and modeling, or the custom programmable hardware, such as FPGAs.
Now, these are niche, but if you're in one of the situations where you need them, then you know you need them.
So, when you've got specific niche requirements, the instance type you need to select is often in the accelerated computing category.
Finally, there's the storage optimized category and instances in this category generally provide large amounts of superfast local storage, either designed for high sequential transfer rates or to provide massive amounts of IO operations per second.
And this category is great for applications with serious demands on sequential and random IO, so things like data warehousing, elastic search, and certain types of analytic workloads.
Now, one of the most confusing things about EC2 is the naming scheme of the instance types.
This is an example of a type of EC2 instance.
While it might initially look frustrating, once you understand it, it's not that difficult to understand.
So, while our friend Bob is a bit frustrated at the difficulty of understanding exactly what this means, by the end of this part of the lesson you will understand how to decode EC2 instance types.
The whole thing, end to end, so R5dn.8xlarge, this is known as the instance type.
The whole thing is the instance type.
If a member of your operations team asks you what instance you need or what instance type you need, if you use the full instance type, you unambiguously communicate exactly what you need.
It's a mouthful to say R5dn.8xlarge, but it's precise and we like precision.
So, when in doubt, always give the full instance type as an answer to any question.
The letter at the start is the instance family.
Now, there are lots of examples of this, the T family, the M family, the I family, and the R family.
There's lots more, but each of these are designed for a specific type or types of computing.
Nobody expects you to remember all the details of all of these different families, but if you can start to try to remember the important ones, I'll mention these as we go through the course, then it will put you in a great position in the exam.
If you do have any questions where you need to identify if an instance type is used appropriately or not, as we go through the course and I give demonstrations which might be using different instance families, I will be giving you an overview of their strengths and their weaknesses.
The next part is the generation.
So, the number five in this case is the generation.
AWS iterate often.
So, if you see instance type starting with R5 or C4 as two examples, the C or the R, as you now know, is the instance family and the number is the generation.
So, the C4, for example, is the fourth generation of the C family of instance.
That might be the current generation, but then AWS come along and replace it with the C5, which is generation five, the fifth generation, which might bring with it better hardware and better price to performance.
Generally, with AWS, always select the most recent generation.
It almost always provides the best price to performance option.
The only real reasons not to immediately use the latest generation are if it's not available in your particular region, or if your business has fairly rigorous test processes that need to be completed before you get approval to use a particular new type of instance.
So, that's the R part covered, which is the family, and the 5 part covered, which is the generation.
Now, across to the other side, we've got the size.
So, in this case, 8xlarge, this is the instance size.
Within a family and a generation, there are always multiple sizes of that family and generation, which determine how much memory and how much CPU the instance is allocated with.
Now, there's a logical and often linear relationship between these sizes.
So, depending on the family and generation, the starting point can be as small as the nano.
Next to the nano, there's micro, then small, then medium, large, extra large, 2x large, 4x large, 8x large, and so on.
Now, keep in mind, there's often a price premium towards the higher end.
So, it's often better to scale systems by using a larger number of smaller instance sizes.
But more on that later when we talk about high availability and scaling.
Just be aware, as far as this section of the course goes, that for a given instance family and generation, you're able to select from multiple different sizes.
Now, the bit which is in the middle, this can vary.
There might be no letters between the generation and size, but there's often a collection of letters which denote additional capabilities.
Common examples include a lowercase a, which signifies AMD CPUs, a lowercase d, which signifies NVMe storage, a lowercase n, which signifies network optimized, and a lowercase e, for extra capacity, which could be RAM or storage.
So, these additional capabilities are not things that you need to memorize, but as you get experience using AWS, you should definitely try to mentally associate them in your mind with what extra capabilities they provide.
Because time is limited in an exam, the more that you can commit to memory and know instinctively, the better you'll be.
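If it helps to see the naming scheme written down, here's a rough Python sketch that splits a type like r5dn.8xlarge into those parts; the regular expression is just an approximation of the convention described in this lesson, not an official AWS parser.

import re

# family letter(s) | generation digit(s) | optional capability letters | size
PATTERN = re.compile(r"^([a-z]+?)(\d+)([a-z-]*)\.(\w+)$")

def decode(instance_type: str) -> dict:
    """Best-effort decode of a type like 'r5dn.8xlarge' into its components."""
    match = PATTERN.match(instance_type.lower())
    if not match:
        raise ValueError(f"Unrecognised instance type: {instance_type}")
    family, generation, extras, size = match.groups()
    return {
        "family": family,          # e.g. 'r' -> memory optimized
        "generation": generation,  # e.g. '5' -> fifth generation
        "capabilities": extras,    # e.g. 'dn' -> NVMe storage + network optimized
        "size": size,              # e.g. '8xlarge'
    }

print(decode("r5dn.8xlarge"))
print(decode("t3a.micro"))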
Okay, so this is the end of part one of this lesson.
It was getting a little bit on the long side, and so I wanted to add a break.
It's an opportunity just to take a rest or grab a coffee.
Part two will be continuing immediately from the end of part one.
So, go ahead, complete the video, and when you're ready, join me in part two.
-
-
www.biorxiv.org www.biorxiv.org
-
eLife Assessment
The study from Frank and colleagues reports potentially important cryo-EM observations of mouse glutamatergic synapses isolated from adult mammalian brains. The authors used a combination of mouse genetics to generate PSD95-GFP labeling in vivo, a rapid synaptosome isolation and cryo-protectant strategy, and cryogenic correlated light-electron microscopy (cryoCLEM) to record tomograms of synapses, which together provide convincing support for their conclusions. Controversially, the authors report that forebrain glutamatergic synapses do not contain postsynaptic "densities" (PSD), a defining feature of synapse structure identified in chemically-fixed and resin-embedded brain samples. The work questions a long-standing concept in neurobiology and is primarily of interest to specialists in synaptic structure and function.
-
-
engl252fa24.commons.gc.cuny.edu engl252fa24.commons.gc.cuny.edu
-
Thusly, the politics–often explicitly stated by Butler’s characters or embedded within Mutu’s visual fields–are irreducible to the language of citizenship, cultural particularity, and national governance as we currently conceive of it.
This sets up Frazier's argument that the politics represented in both works are more complicated and can't be "simply" summarized using familiar terms, which I really agree with. Butler's characters express these ideas directly: in Parable of the Sower, Lauren has a very outspoken nature on serious real-life issues, and through Lauren's own philosophy of Earthseed, Butler creates a new perspective for understanding survival, leadership, and responsibility that goes beyond the more conventional political discourse, both during the period it was published and currently. Lauren's voice in the novel acts as a critique of already existing political systems while offering an alternative route that reflects an intense comprehension of power, government, and allegiance based in adaptability and inclusivity, while Mutu's visual work conveys these ideas more implicitly, really resisting the idea of being put in a box.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Binary consisting of 0s and 1s make it easy to represent true and false values, where 1 often represents true and 0 represents false. Most programming languages have built-in ways of representing True and False values.
This was a shock to me. I didn't know much about binary, but it turns out it can represent both true and false states. I also just learned about the direct mapping of boolean types to binary, which makes data analysis much simpler to compute.
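For example, in Python, booleans are literally a subclass of integers, so True and False map straight onto 1 and 0 (the example data below is made up):

# Booleans map straight onto the integers 1 and 0.
print(int(True), int(False))    # 1 0
print(True == 1, False == 0)    # True True

# That mapping is handy in data analysis: summing booleans counts the Trues.
likes = [True, False, True, True, False]   # hypothetical example data
print(sum(likes))               # 3 of these tweets were liked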
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back.
In this lesson, now that we've covered virtualization at a high level, I want to focus on the architecture of the EC2 product in more detail.
EC2 is one of the services you'll use most often in AWS since one which features on a lot of exam questions.
So let's get started.
First thing, let's cover some key, high level architectural points about EC2.
EC2 instances are virtual machines, so this means an operating system plus an allocation of resources such as virtual CPU, memory, potentially some local storage, maybe some network storage, and access to other hardware such as networking and graphics processing units.
EC2 instances run on EC2 hosts, and these are physical server hardware which AWS manages.
These hosts are either shared hosts or dedicated hosts.
Shared hosts are hosts which are shared across different AWS customers, so you don't get any ownership of the hardware and you pay for the individual instances based on how long you run them for and what resources they have allocated.
It's important to understand, though, that customers using shared hosts are isolated from each other, so there's no visibility of it being shared.
There's no interaction between different customers, even if you're using the same shared host.
And shared hosts are the default.
With dedicated hosts, you're paying for the entire host, not the instances which run on it.
It's yours.
It's dedicated to your account, and you don't have to share it with any other customers.
So if you pay for a dedicated host, you pay for that entire host, you don't pay for any instances running on it, and you don't share it with other AWS customers.
EC2 is an availability zone resilient service.
The reason for this is that hosts themselves run inside a single availability zone.
So if that availability zone fails, the hosts inside that availability zone could fail, and any instances running on any hosts that fail will themselves fail.
So as a solutions architect, you have to assume if an AZ fails, then at least some and probably all of the instances that are running inside that availability zone will also fail or be heavily impacted.
Now let's look at how this looks visually.
So this is a simplification of the US East One region.
I've only got two AZs represented, AZA and AZB.
And in AZA, I've represented that I've got two subnets, subnet A and subnet B.
Now inside each of these availability zones is an EC2 host.
Now these EC2 hosts, they run within a single AZ.
I'm going to keep repeating that because it's critical for the exam and for when you're thinking about EC2 in the exam.
Keep thinking about it being an AZ resilient service.
If you see EC2 mentioned in an exam, see if you can locate the availability zone details because that might factor into the correct answer.
Now EC2 hosts have some local hardware, logically CPU and memory, which you should be aware of, but also they have some local storage called the instance store.
The instance store is temporary.
If an instance is running on a particular host, depending on the type of the instance, it might be able to utilize this instance store.
But if the instance moves off this host to another one, then that storage is lost.
And they also have two types of networking, storage networking and data networking.
When instances are provisioned into a specific subnet within a VPC, what's actually happening is that a primary elastic network interface is provisioned in a subnet, which maps to the physical hardware on the EC2 host.
Remember, subnets are also in one specific availability zone.
Instances can have multiple network interfaces, even in different subnets, as long as they're in the same availability zone.
Everything about EC2 is focused around this architecture, the fact that it runs in one specific availability zone.
Now EC2 can make use of remote storage so an EC2 host can connect to the elastic block store, which is known as EBS.
The elastic block store service also runs inside a specific availability zone.
So the service running inside availability zone A is different than the one running inside availability zone B, and you can't access them cross zone.
EBS lets you allocate volumes, and volumes are portions of persistent storage, and these can be allocated to instances in the same availability zone.
So again, it's another area where the availability zone matters.
What I'm trying to do by keeping repeating availability zone over and over again is to paint a picture of a service which is very reliant on the availability zone that it's running in.
The host is in an availability zone.
The network is per availability zone.
The persistent storage is per availability zone.
If an availability zone in AWS experiences major issues, it impacts all of those things.
Now an instance runs on a specific host, and if you restart the instance, it will stay on that host.
Instances stay on a host until one of two things happen.
Firstly, the host fails or is taken down for maintenance for some reason by AWS.
Or secondly, if an instance is stopped and then started, and that's different than just restarting, so I'm focusing on an instance being stopped and then being started, so not just a restart.
If either of those things happen, then an instance will be relocated to another host, but that host will also be in the same availability zone.
Instances cannot natively move between availability zones.
Everything about them, their hardware, networking and storage is locked inside one specific availability zone.
Now there are ways you can do a migration, but it essentially means taking a copy of an instance and creating a brand new one in a different availability zone, and I'll be covering that later in this section where I talk about snapshots and AMIs.
What you can never do is connect network interfaces or EBS storage located in one availability zone to an EC2 instance located in another.
EC2 and EBS are both availability zone services.
They're isolated.
You cannot cross AZs with instances or with EBS volumes.
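To make that AZ locality concrete, here's a minimal boto3 sketch in Python, assuming the AWS SDK for Python and credentials are configured; the AMI ID, subnet ID and AZ below are hypothetical placeholders. The subnet you launch into pins the instance to one AZ, and an EBS volume has to be created in a named AZ.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The subnet lives in exactly one AZ, so the instance will too.
# ImageId and SubnetId below are made-up placeholders.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    SubnetId="subnet-0123456789abcdef0",
    MinCount=1,
    MaxCount=1,
)

# EBS volumes are created in a specific AZ and can only attach to
# instances in that same AZ.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=10,
    VolumeType="gp3",
)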
Now instances running on an EC2 host share the resources of that host.
And instances of different sizes can share a host, but generally instances of the same type and generation will occupy the same host.
And I'll be talking in much more detail about instance types and sizes and generations in a lesson that's coming up very soon.
But when you think about an EC2 host, think that it's from a certain year and includes a certain class of processor and a certain type of memory and a certain type and configuration of storage.
And instances are also created as different generations, different versions, which use specific types of CPU, memory and storage.
So it's logical that if you provision two different types of instances, they may well end up on two different types of hosts.
So a host generally has lots of different instances from different customers of the same type, but different sizes.
So before we finish up this lesson, I want to answer a question.
That question is what's EC2 good for?
So what types of situations might you use EC2 for?
And this is equally valuable when you're evaluating a technical architecture while you're answering questions in the exam.
So first, EC2 is great when you've got a traditional OS and application compute need.
So if you've got an application that requires to be running on a certain operating system at a certain runtime with certain configuration, maybe your internal technical staff are used to that configuration, or maybe your vendor has a certain set of support requirements.
EC2 is a perfect use case for this type of scenario.
And it's also great for any long running compute needs.
There are lots of other services inside AWS that provide compute services, but many of these have got runtime limits.
So you can't leave these things running consistently for one year or two years.
With EC2, it's designed for persistent, long running compute requirements.
So if you have an application that runs constantly 24/7, 365, and needs to be running on a normal operating system, Linux or Windows, then EC2 is the default and obvious choice for this.
If you have any server-style applications, so traditional applications which expect to be running in an operating system, waiting for incoming connections, then again, EC2 is a perfect service for this.
And it's perfect for any applications or services that need burst requirements or steady state requirements.
There are different types of EC2 instances, which are suitable for low levels of normal loads with occasional bursts, as well as steady state load.
So again, if your application needs an operating system, and it has either bursty needs or a consistent steady-state load, then EC2 should be the first thing that you review.
EC2 is also great for monolithic application stacks.
So if your monolithic application requires certain components, a stack, maybe a database, maybe some middleware, maybe other runtime based components, and especially if it needs to be running on a traditional operating system, EC2 should be the first thing that you look at.
And EC2 is also ideally suited for migrating application workloads, so application workloads, which expect a traditional virtual machine or server style environment, or if you're performing disaster recovery.
So if you have existing traditional systems which run on virtual servers, and you want to provision a disaster recovery environment, then EC2 is perfect for that.
In general, EC2 tends to be the default compute service within AWS.
There are lots of niche requirements that you might have.
And if you do have those, there are other compute services such as the elastic container service or Lambda.
But generally, if you've got traditional style workloads, or you're looking for something that's consistent, or if it requires an operating system, or if it's monolithic, or if you migrated into AWS, then EC2 is a great default first option.
Now in this section of the course, I'm covering the basic architectural components of EC2.
So I'm gonna be introducing the basics and let you get some exposure to it, and I'm gonna be teaching you all the things that you'll need for the exam.
-
-
docdrop.org docdrop.org
-
Lost really has two disparate meanings. Losing things is about the familiar falling away, getting lost is about the unfamiliar appearing.
I like this statement; I never thought of it this way. That losing and getting lost are very different. Losing is mostly negative, but getting lost could result in great outcomes.
-
to be lost is to be fully present, and to be fully present is to be capable of being in uncertainty and mystery.
"Lost" is not necessarily negative. Being lost makes you fully present because you are trying to find a way from the lost state. It becomes a part of discovery.
-
-
dev.omeka.org dev.omeka.org
-
News
Fundamentally, the content space is just an HTML block (or multiple), yes?
-
-
dev.omeka.org dev.omeka.org
-
Places
Yale ppl: Do we want any suggestions pre-populated? Omeka: Can this page be brought up with the map pre-loaded to a particular place?
-
-
dev.omeka.org dev.omeka.org
-
About
Ignorant Q: Is this literally all the structure of the page, or are you assuming we will add any other sections we want?
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this first lesson of the EC2 section of the course, I want to cover the basics of virtualization as briefly as possible.
EC2 provides virtualization as a service.
It's an infrastructure as a service, or IaaS, product.
To understand all the value it provides and why some of the features work the way that they do, understanding the fundamentals of virtualization is essential.
So that's what this lesson aims to do.
Now, I want to be super clear about one thing.
This is an introduction level lesson.
There's a lot more to virtualization than I can talk about in this brief lesson.
This lesson is just enough to get you started, but I will include a lot of links in the lesson description if you want to learn more.
So let's get started.
We do have a fair amount of theory to get through, but I promise when it comes to understanding how EC2 actually works, this lesson will be really beneficial.
Virtualization is the process of running more than one operating system on a piece of physical hardware, a server.
Before virtualization, the architecture looked something like this.
A server had a collection of physical resources, so CPU and memory, network cards and maybe other logical devices such as storage.
And on top of this runs a special piece of software known as an operating system.
That operating system runs with a special level of access to the hardware.
It runs in privileged mode, or more specifically, a small part of the operating system known as the kernel runs in privileged mode.
The kernel is the only part of the operating system, the only piece of software on the server that's able to directly interact with the hardware.
Some of the operating system doesn't need this privilege level of access, but some of it does.
Now, the operating system can allow other software to run such as applications, but these run in user mode or unprivileged mode.
They cannot directly interact with the hardware, they have to go through the operating system.
So if Bob or Julie are attempting to do something with an application, which needs to use the system hardware, that application needs to go through the operating system.
It needs to make a system call.
If anything but the operating system attempts to make a privileged call, so tries to interact with the hardware directly, the system will detect it and cause a system-wide error, generally crashing the whole system or at minimum the application.
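As a small, purely illustrative sketch of that boundary in Python: the application below never touches the storage hardware itself, because each os call is a user-mode wrapper around a system call that the kernel services on the application's behalf.

import os

# open(), write() and close() here are user-mode wrappers; the privileged
# work of actually driving the storage hardware is done by the kernel
# when it services the underlying system calls.
fd = os.open("/tmp/example.txt", os.O_CREAT | os.O_WRONLY, 0o644)
os.write(fd, b"written via a system call, not by touching hardware directly\n")
os.close(fd)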
This is how it works without virtualization.
Virtualization is how this is changed into this.
A single piece of hardware running multiple operating systems.
Each operating system is separate, each runs its own applications.
But there's a problem: the CPU, at least at this point in time, could only have one thing running as privileged.
A privileged process, remember, has direct access to the hardware.
And all of these operating systems, if they're running in their unmodified state, they expect to be running on their own in a privileged state.
They contain privileged instructions.
And so trying to run three or four or more different operating systems in this way will cause system crashes.
Virtualization was created as a solution to this problem, allowing multiple different privileged applications to run on the same hardware.
But initially, virtualization was really inefficient, because the hardware wasn't aware of it.
Virtualization had to be done in software, and it was done in one of two ways.
The first type was known as emulated virtualization or software virtualization.
With this method, a host operating system still ran on the hardware and included additional capability known as a hypervisor.
The software ran in privileged mode, and so it had full access to the hardware on the host server.
Now, around each of the other operating systems, which we'll now refer to as guest operating systems, was wrapped a container of sorts called a virtual machine.
Each virtual machine was an unmodified operating system, such as Windows or Linux, with a virtual allocation of resources such as CPU, memory and local disk space.
Virtual machines also had devices mapped into them, such as network cards, graphics cards and other local devices such as storage.
The guest operating systems believed these to be real.
They had drivers installed, just like physical devices, but they weren't real hardware.
They were all emulated, fake information provided by the hypervisor to make the guest operating systems believe that they were real.
The crucial thing to understand about emulated virtualization is that the guest operating systems still believed that they were running on real hardware, and so they still attempted to make privileged calls.
They tried to take control of the CPU, they tried to directly read and write to what they think of as their memory and their disk, which are actually not real, they're just areas of physical memory and disk that have been allocated to them by the hypervisor.
Without special arrangements, the system would at best crash, and at worst, all of the guests would be overriding each other's memory and disk areas.
So the hypervisor, it performs a process known as binary translation.
Any privileged operations which the guests attempt to make, they're intercepted and translated on the fly in software by the hypervisor.
Now, the binary translation in software is the key part of this.
It means that the guest operating systems need no modification, but it's really, really slow.
It can actually halve the speed of the guest operating systems or even worse.
Emulated virtualization was a cool set of features for its time, but it never achieved widespread adoption for demanding workloads because of this performance penalty.
But there was another way that virtualization was initially handled, and this is called para-virtualization.
With para-virtualization, the guest operating systems are still running in the same virtual machine containers with virtual resources allocated to them, but instead of the slow binary translation which is done by the hypervisor, another approach is used.
Para-virtualization only works on a small subset of operating systems, operating systems which can be modified.
Because with para-virtualization, there are areas of the guest operating systems which attempt to make privileged calls, and these are modified.
They're modified to make them user calls, but instead of directly calling on the hardware, they're calls to the hypervisor called hypercalls.
So areas of the operating systems which would traditionally make privileged calls directly to the hardware, they're actually modified.
So the source code of the operating system is modified to call the hypervisor rather than the hardware.
So the operating systems now need to be modified specifically for the particular hypervisor that's in use.
It's no longer just generic virtualization, the operating systems are modified for the particular vendor performing this para-virtualization.
By modifying the operating system this way, and using para-virtual drivers in the operating system for network cards and storage, it means that the operating system became almost virtualization aware, and this massively improved performance.
But it was still a set of software processes designed to trick the operating system and/or the hardware into believing that nothing had changed.
The major improvement in virtualization came when the physical hardware started to become virtualization aware.
This allows for hardware virtualization, also known as hardware assisted virtualization.
With hardware assisted virtualization, hardware itself has become virtualization aware.
The CPU contains specific instructions and capabilities so that the hypervisor can directly control and configure this support, so the CPU itself is aware that it's performing virtualization.
Essentially, the CPU knows that virtualization exists.
What this means is that when guest operating systems attempt to run any privileged instructions, they're trapped by the CPU, which knows to expect them from these guest operating systems, so the system as a whole doesn't halt.
But these instructions can't be executed as is because the guest operating system still thinks that it's running directly on the hardware, and so they're redirected to the hypervisor by the hardware.
The hypervisor handles how these are executed.
And this means very little performance degradation over running the operating system directly on the hardware.
The problem, though, is while this method does help a lot, what actually matters about a virtual machine tends to be the input/output operation, so network transfer and disk I/O.
The virtual machines, they have what they think is physical hardware, for example, a network card.
But these cards are just logical devices using a driver, which actually connect back to a single physical piece of hardware which sits in the host.
That's the hardware everything is running on.
Unless you have a physical network card per virtual machine, there's always going to be some level of software getting in the way, and when you're performing highly transactional activities such as network I/O or disk I/O, this really impacts performance, and it consumes a lot of CPU cycles on the host.
The final iteration that I want to talk about is where the hardware devices themselves become virtualization aware, such as network cards.
This process is called S-R-I-O-V, single root I/O virtualization.
Now, I could talk about this process for hours about exactly what it does and how it works, because it's a very complex and feature-rich set of standards.
But at a very high level, it allows a network card or any other add-on card to present itself not as just one single card, but as several mini-cards.
Because this is supported in hardware, these are fully unique cards, as far as the hardware is concerned, and these are directly presented to the guest operating system as real cards dedicated for its use.
And this means no translation has to happen by the hypervisor.
The guest operating system can directly use its card whenever it wants.
Now, the physical card which supports S-R-I-O-V, it handles this process end-to-end.
It makes sure that when the guest operating systems use these logical mini network cards, they have access to the physical network connection when required.
In EC2, this feature is called enhanced networking, and it means that the network performance is massively improved.
It means faster speeds.
It means lower latency.
And more importantly, it means consistent lower latency, even at high loads.
It means less CPU usage for the host CPU, even when all of the guest operating systems are consuming high amounts of consistent I/O.
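If you want to check this on one of your own instances, here's a minimal boto3 sketch in Python, assuming the AWS SDK for Python and credentials are configured; the instance ID is a made-up placeholder. It reads the two attributes EC2 exposes for enhanced networking support.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"  # hypothetical placeholder

# ENA-based enhanced networking support.
ena = ec2.describe_instance_attribute(InstanceId=instance_id, Attribute="enaSupport")
print("ENA support:", ena.get("EnaSupport", {}).get("Value"))

# Older SR-IOV (Intel 82599 VF) based enhanced networking support.
sriov = ec2.describe_instance_attribute(InstanceId=instance_id, Attribute="sriovNetSupport")
print("sriovNetSupport:", sriov.get("SriovNetSupport", {}).get("Value"))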
Many of the features that you'll see EC2 using are actually based on AWS implementing some of the more advanced virtualization techniques that have been developed across the industry.
AWS do have their own hypervisor stack now called Nitro, and I'll be talking about that in much more detail in an upcoming lesson, because that's what enables a lot of the higher-end EC2 features.
But that's all the theory I wanted to cover.
I just wanted to introduce virtualization at a high level and get you to the point where you understand what S-R-I-O-V is, because S-R-I-O-V is used for enhanced networking right now, but it's also a feature that can be used outside of just network cards.
It can help hardware manufacturers design cards, which, whilst they're a physical single card, can be split up into logical cards that can be presented to guest operating systems.
It essentially makes any hardware virtualization aware, and any of the advanced EC2 features that you'll come across within this course will be taking advantage of S-R-I-O-V.
At this point, though, we've completed all of the theory I wanted to cover, so go ahead and complete the video when you're ready.
You can join me in the next.
-
-
www.pb.uillinois.edu www.pb.uillinois.edu
-
Indiana University has School of Medicine located in Purdue University West Lafayette campus.
I wonder what's going to happen with this school of medicine with the dissolution of IUPUI.
-
-
www.biorxiv.org www.biorxiv.org
-
eLife Assessment
This important study provides new and nuanced insights into the evolution of morphs in a textbook example of Batesian mimicry. The evidence supporting the claims about the origin and dominance relationships among morphs is convincing, but the interpretation of signals needs improvement with complementary analysis and some nuanced interpretation. Pending a revision, this work will be of interest to a broad range of evolutionary biologists.
-
Reviewer #1 (Public review):
In this study, Deshmukh et al. provide an elegant illustration of Haldane's sieve, the population genetics concept stating that novel advantageous alleles are more likely to fix if dominant because dominant alleles are more readily exposed to selection. To achieve this, the authors rely on a uniquely suited study system, the female-polymorphic butterfly Papilio polytes.
Deshmukh et al. first reconstruct the chronology of allele evolution in the P. polytes species group, clearly establishing the non-mimetic cyrus allele as ancestral, followed by the origin of the mimetic allele polytes/theseus, via a previously characterized inversion of the dsx locus, and most recently, the origin of the romulus allele in the P. polytes lineage, after its split from P. javanus. The authors then examine the two crucial predictions of Haldane's sieve, using the three alleles of P. polytes (cyrus, polytes, and romulus). First, they report with compelling evidence that these alleles are sequentially dominant, or put in other words, novel adaptive alleles either are or quickly become dominant upon their origin. Second, the authors find a robust signature of positive selection at the dsx locus, across all five species that share the polytes allele.
In addition to exquisitely exemplifying Haldane's sieve, this study characterizes the genetic differences (or lack thereof) between mimetic alleles at the dsx locus. Remarkably, the polytes and romulus alleles are profoundly differentiated, despite their short divergence time (< 0.5 my), whereas the polytes and theseus alleles are indistinguishable across both coding and intronic sequences of dsx. Finally, the study reports incidental evidence of exon swaps between the polytes and romulus alleles. These exon swaps caused intermediate colour patterns and suggest that (rare) recombination might be a mechanism by which novel morphs evolve.
This study advances our understanding of the evolution of the mimicry polymorphism in Papilio butterflies. This is an important contribution to a system already at the forefront of research on the genetic and developmental basis of sex-specific phenotypic morphs, which are common in insects. More generally, the findings of this study have important implications for how we think about the molecular dynamics of adaptation. In particular, I found that finding extensive genetic divergence between the polytes and romulus alleles is striking, and it challenges the way I used to think about the evolution of this and other otherwise conserved developmental genes. I think that this study is also a great resource for teaching evolution. By linking classic population genetic theory to modern genomic methods, while using visually appealing traits (colour patterns), this study provides a simple yet compelling example to bring to a classroom.
In general, I think that the conclusions of the study, in terms of the evolutionary history of the locus, the dominance relationships between P. polytes alleles, and the inference of a selective sweep in spite of contemporary balancing selection, are strongly supported; the data set is impressive and the analyses are all rigorous. I nonetheless think that there are a few ways in which the current presentation of these data could lead to confusion, and should be clarified and potentially also expanded.
(1) The study is presented as addressing a paradox related to the evolution of phenotypic novelty in "highly constrained genetic architectures". If I understand correctly, these constraints are assumed to arise because the dsx inversion acts as a barrier to recombination. I agree that recombination in the mimicry locus is reduced and that recombination can be a source of phenotypic novelty. However, I'm not convinced that the presence of a structural variant necessarily constrains the potential evolution of novel discrete phenotypes. Instead, I'm having a hard time coming up with examples of discrete phenotypic polymorphisms that do not involve structural variants. If there is a paradox here, I think it should be more clearly justified, including an explanation of what a constrained genetic architecture means. I also think that the Discussion would be the place to return to this supposed paradox, and tell us exactly how the observations of exon swaps and the genetic characterization of the different mimicry alleles help resolve it.
(2) While Haldane's sieve is clearly demonstrated in the P. polytes lineage (with cyrus, polytes, and romulus alleles), there is another allele trio (cyrus, polytes, and theseus) for which Haldane's sieve could also be expected. However, the chronological order in which polytes and theseus evolved remains unresolved, precluding a similar investigation of sequential dominance. Likewise, the locus that differentiates polytes from theseus is unknown, so it's not currently feasible to identify a signature of positive selection shared by P. javanus and P. alphenor at this locus. I, therefore, think that it is premature to conclude that the evolution of these mimicry polymorphisms generally follows Haldane's sieve; of two allele trios, only one currently shows the expected pattern.
-
Reviewer #2 (Public review):
Summary:
Deshmukh and colleagues studied the evolution of mimetic morphs in the Papilio polytes species group. They investigate the timing of origin of haplotypes associated with different morphs, their dominance relationships, associations with different isoform expressions, and evidence for selection and recombination in the sequence data. P. polytes is a textbook example of a Batesian mimic, and this study provides important nuanced insights into its evolution, and will therefore be relevant to many evolutionary biologists. I find the results regarding dominance and the sequence of events generally convincing, but I have some concerns about the motivation and interpretation of some other analyses, particularly the tests for selection.
Strengths:
This study uses widespread sampling, large sample sizes from crossing experiments, and a wide range of data sources.
Weaknesses:
(1) Purpose and premise of selective sweep analysis
A major narrative of the paper is that new mimetic alleles have arisen and spread to high frequency, and their dominance over the pre-existing alleles is consistent with Haldane's sieve. It would therefore make sense to test for selective sweep signatures within each morph (and its corresponding dsx haplotype), rather than at the species level. This would allow a test of the prediction that those morphs that arose most recently would have the strongest sweep signatures.
Sweep signatures erode over time - see Figure 2 of Moest et al. 2020 (https://doi.org/10.1371/journal.pbio.3000597), and it is unclear whether we expect the signatures of the original sweeps of these haplotypes to still be detectable at all. Moest et al show that sweep signatures are completely eroded by 1N generations after the event, and probably not detectable much sooner than that, so assuming effective population sizes of these species of a few million, at what time scale can we expect to detect sweeps? If these putative sweeps are in fact more recent than the origin of the different morphs, perhaps they would more likely be associated with the refinement of mimicry, but not necessarily providing evidence for or against a Haldane's sieve process in the origin of the morphs.
(2) Selective sweep methods
A tool called RAiSD was used to detect signatures of selective sweeps, but this manuscript does not describe what signatures this tool considers (reduced diversity, skewed frequency spectrum, increased LD, all of the above?). Given the comment above, would this tool be sensitive to incomplete sweeps that affect only one morph in a species-level dataset? It is also not clear how RAiSD could identify signatures of selective sweeps at individual SNPs (line 206). Sweeps occur over tracts of the genome and it is often difficult to associate a sweep with a single gene.
(3) Episodic diversification
Very little information is provided about the Branch-site Unrestricted Statistical Test for Episodic Diversification (BUSTED) and Mixed Effects Model of Evolution (MEME), and what hypothesis the authors were testing by applying these methods. Although it is not mentioned in the manuscript, a quick search reveals that these are methods to study codon evolution along branches of a phylogeny. Without this information, it is difficult to understand the motivation for this analysis.
(4) GWAS for form romulus
The authors argue that the lack of SNP associations within dsx for form romulus is caused by poor read mapping in the inverted region itself (line 125). If this is true, we would expect strong association in the regions immediately outside the inversion. From Figure S3, there are four discrete peaks of association, and the location of dsx and the inversion are not indicated, so it is difficult to understand the authors' interpretation in light of this figure.
(5) Form theseus
Since there appears to be only one sequence available for form theseus (actually it is said to be "P. javanus f. polytes/theseus"), is it reasonable to conclude that "the dsx coding sequence of f. theseus was identical to that of f. polytes in both P. javanus and P. alphenor" (Line 151)? Looking at the Clarke and Sheppard (1972) paper cited in the statement that "f. polytes and f. theseus show equal dominance" (line 153), it seems to me that their definition of theseus is quite different from that here. Without addressing this discrepancy, the results are difficult to interpret.
-
Author Response:
Reviewer #1 (Public review):
In this study, Deshmukh et al. provide an elegant illustration of Haldane's sieve, the population genetics concept stating that novel advantageous alleles are more likely to fix if dominant because dominant alleles are more readily exposed to selection. To achieve this, the authors rely on a uniquely suited study system, the female-polymorphic butterfly Papilio polytes.
Deshmukh et al. first reconstruct the chronology of allele evolution in the P. polytes species group, clearly establishing the non-mimetic cyrus allele as ancestral, followed by the origin of the mimetic allele polytes/theseus, via a previously characterized inversion of the dsx locus, and most recently, the origin of the romulus allele in the P. polytes lineage, after its split from P. javanus. The authors then examine the two crucial predictions of Haldane's sieve, using the three alleles of P. polytes (cyrus, polytes, and romulus). First, they report with compelling evidence that these alleles are sequentially dominant, or put in other words, novel adaptive alleles either are or quickly become dominant upon their origin. Second, the authors find a robust signature of positive selection at the dsx locus, across all five species that share the polytes allele.
In addition to exquisitely exemplifying Haldane's sieve, this study characterizes the genetic differences (or lack thereof) between mimetic alleles at the dsx locus. Remarkably, the polytes and romulus alleles are profoundly differentiated, despite their short divergence time (< 0.5 my), whereas the polytes and theseus alleles are indistinguishable across both coding and intronic sequences of dsx. Finally, the study reports incidental evidence of exon swaps between the polytes and romulus alleles. These exon swaps caused intermediate colour patterns and suggest that (rare) recombination might be a mechanism by which novel morphs evolve.
This study advances our understanding of the evolution of the mimicry polymorphism in Papilio butterflies. This is an important contribution to a system already at the forefront of research on the genetic and developmental basis of sex-specific phenotypic morphs, which are common in insects. More generally, the findings of this study have important implications for how we think about the molecular dynamics of adaptation. In particular, I find the extensive genetic divergence between the polytes and romulus alleles striking, and it challenges the way I used to think about the evolution of this and other otherwise conserved developmental genes. I think that this study is also a great resource for teaching evolution. By linking classic population genetic theory to modern genomic methods, while using visually appealing traits (colour patterns), this study provides a simple yet compelling example to bring to a classroom.
In general, I think that the conclusions of the study, in terms of the evolutionary history of the locus, the dominance relationships between P. polytes alleles, and the inference of a selective sweep in spite of contemporary balancing selection, are strongly supported; the data set is impressive and the analyses are all rigorous. I nonetheless think that there are a few ways in which the current presentation of these data could lead to confusion, and should be clarified and potentially also expanded.
We thank the reviewer for the kind and encouraging assessment of our work.
(1) The study is presented as addressing a paradox related to the evolution of phenotypic novelty in "highly constrained genetic architectures". If I understand correctly, these constraints are assumed to arise because the dsx inversion acts as a barrier to recombination. I agree that recombination in the mimicry locus is reduced and that recombination can be a source of phenotypic novelty. However, I'm not convinced that the presence of a structural variant necessarily constrains the potential evolution of novel discrete phenotypes. Instead, I'm having a hard time coming up with examples of discrete phenotypic polymorphisms that do not involve structural variants. If there is a paradox here, I think it should be more clearly justified, including an explanation of what a constrained genetic architecture means. I also think that the Discussion would be the place to return to this supposed paradox, and tell us exactly how the observations of exon swaps and the genetic characterization of the different mimicry alleles help resolve it.
The paradox that we refer to here is essentially the contrast of evolving new adaptive traits which are genetically regulated, while maintaining the existing adaptive trait(s) at its fitness peak. While one of the mechanisms to achieve this could be differential structural rearrangement at the chromosomal level, it could arise due to alternative alleles or splice variants of a key gene (caste determination in Cardiocondyla ants), and differential regulation of expression (the spatial regulation of melanization in Nymphalid butterflies by ivory lncRNA). In each of these cases, a new mutation would have to give rise to a new phenotype without diluting the existing adaptive traits when it arises. We focused on structural variants, because that was the case in our study system, however, the point we were making referred to evolution of novel traits in general. We will add a section in the revised discussion to address this.
(2) While Haldane's sieve is clearly demonstrated in the P. polytes lineage (with cyrus, polytes, and romulus alleles), there is another allele trio (cyrus, polytes, and theseus) for which Haldane's sieve could also be expected. However, the chronological order in which polytes and theseus evolved remains unresolved, precluding a similar investigation of sequential dominance. Likewise, the locus that differentiates polytes from theseus is unknown, so it's not currently feasible to identify a signature of positive selection shared by P. javanus and P. alphenor at this locus. I, therefore, think that it is premature to conclude that the evolution of these mimicry polymorphisms generally follows Haldane's sieve; of two allele trios, only one currently shows the expected pattern.
We agree with the reviewer that the genetic basis of f. theseus requires further investigation. f. theseus occupies the same level on the dominance hierarchy of dsx alleles as f. polytes (Clarke and Sheppard, 1972) and the allelic variant of dsx present in both these female forms is identical, so there exists just one trio of alleles of dsx. Based on this evidence, we cannot comment on the origin of forms theseus and polytes. They could have arisen at the same time or sequentially. Since our paper is largely focused on the sequential evolution of dsx alleles through Haldane’s sieve, we have included f. theseus in our conclusions. We think that it fits into the framework of Haldane’s sieve due to its genetic dominance over the non-mimetic female form. However, this aspect needs to be explored further in a more specific study focusing on the characterization, origin, and developmental genetics of f. theseus in the future.
Reviewer #2 (Public review):
Summary:
Deshmukh and colleagues studied the evolution of mimetic morphs in the Papilio polytes species group. They investigate the timing of origin of haplotypes associated with different morphs, their dominance relationships, associations with different isoform expressions, and evidence for selection and recombination in the sequence data. P. polytes is a textbook example of a Batesian mimic, and this study provides important nuanced insights into its evolution, and will therefore be relevant to many evolutionary biologists. I find the results regarding dominance and the sequence of events generally convincing, but I have some concerns about the motivation and interpretation of some other analyses, particularly the tests for selection.
We thank the reviewer for these insightful remarks.
Strengths:
This study uses widespread sampling, large sample sizes from crossing experiments, and a wide range of data sources.
We appreciate this point. This strength has indeed helped us illuminate the evolutionary dynamics of this classic example of balanced polymorphism.
Weaknesses:
(1) Purpose and premise of selective sweep analysis
A major narrative of the paper is that new mimetic alleles have arisen and spread to high frequency, and their dominance over the pre-existing alleles is consistent with Haldane's sieve. It would therefore make sense to test for selective sweep signatures within each morph (and its corresponding dsx haplotype), rather than at the species level. This would allow a test of the prediction that those morphs that arose most recently would have the strongest sweep signatures.
Sweep signatures erode over time - see Figure 2 of Moest et al. 2020 (https://doi.org/10.1371/journal.pbio.3000597), and it is unclear whether we expect the signatures of the original sweeps of these haplotypes to still be detectable at all. Moest et al show that sweep signatures are completely eroded by 1N generations after the event, and probably not detectable much sooner than that, so assuming effective population sizes of these species of a few million, at what time scale can we expect to detect sweeps? If these putative sweeps are in fact more recent than the origin of the different morphs, perhaps they would more likely be associated with the refinement of mimicry, but not necessarily providing evidence for or against a Haldane's sieve process in the origin of the morphs.
Our original plan was to test for sweep signatures within individual morphs, but the very small sample sizes for individual morphs in some species made this analysis difficult. We agree that signatures of selective sweeps cannot give us an estimate of possible timescales of the sweep; they simply indicate that there may have been a sweep in a certain genomic region. Therefore, with just the data from selective sweeps, we cannot determine whether these occurred during the refinement of mimicry or with the origin of the mimetic phenotype itself. We have thus made no interpretations regarding time scales or causal events of the sweep. Additionally, we discuss that the results we obtained for individual alleles represent what could have occurred at the point of origin of mimetic resemblance or in the course of perfecting the resemblance, although we cannot differentiate between the two at this point (lines 320 to 333).
(2) Selective sweep methods
A tool called RAiSD was used to detect signatures of selective sweeps, but this manuscript does not describe what signatures this tool considers (reduced diversity, skewed frequency spectrum, increased LD, all of the above?). Given the comment above, would this tool be sensitive to incomplete sweeps that affect only one morph in a species-level dataset? It is also not clear how RAiSD could identify signatures of selective sweeps at individual SNPs (line 206). Sweeps occur over tracts of the genome and it is often difficult to associate a sweep with a single gene.
RAiSD (https://www.nature.com/articles/s42003-018-0085-8) detects selective sweeps using the μ statistic, which is a composite score of the site frequency spectrum (SFS), LD, and genetic diversity along a chromosome. The tool is quite sensitive and is able to detect soft sweeps. RAiSD takes a VCF file comprising SNP data as input and uses an SNP-driven sliding-window approach to scan the genome for signatures of sweeps. Using an SNP file instead of runs of sequences prevents repeated calculations in regions that are sparse in variants, thereby optimizing execution time. Due to the nature of the input we used, the μ statistic was also calculated per site. We then annotated the SNPs based on the genes in which they occur and found that, in every mimetic species, at least one site with a sweep signature was contained within the dsx locus.
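To illustrate that final annotation step, a minimal sketch is shown below; it is not the pipeline used in the study, and the per-site report format, file name, μ threshold, and scaffold name are assumptions, with the gene interval shown only as an illustration.

```python
# Minimal sketch (assumed report format and threshold): flag per-site sweep
# outliers that fall inside a gene interval such as dsx.
DSX_INTERVAL = ("scf_908437033", 1_938_098, 2_045_969)  # illustrative coordinates
MU_THRESHOLD = 10.0                                      # hypothetical outlier cutoff

def outlier_sites_in_gene(report_path, interval, threshold):
    """Yield (scaffold, position, mu) for outlier sites inside the interval."""
    scaffold, start, end = interval
    with open(report_path) as handle:
        for line in handle:
            if line.startswith("#") or not line.strip():
                continue
            fields = line.split()
            scf, pos, mu = fields[0], int(fields[1]), float(fields[-1])
            if mu >= threshold and scf == scaffold and start <= pos <= end:
                yield scf, pos, mu

if __name__ == "__main__":
    for site in outlier_sites_in_gene("raisd_per_site_report.txt", DSX_INTERVAL, MU_THRESHOLD):
        print(*site)
```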
(3) Episodic diversification
Very little information is provided about the Branch-site Unrestricted Statistical Test for Episodic Diversification (BUSTED) and Mixed Effects Model of Evolution (MEME), and what hypothesis the authors were testing by applying these methods. Although it is not mentioned in the manuscript, a quick search reveals that these are methods to study codon evolution along branches of a phylogeny. Without this information, it is difficult to understand the motivation for this analysis.
We thank you for bringing this to our notice; we will add a few lines to the Methods about the hypothesis we were testing and the motivation behind this analysis. We will additionally cite a previous study from our group that used these and other methods to study the molecular evolution of dsx across insect lineages.
(4) GWAS for form romulus
The authors argue that the lack of SNP associations within dsx for form romulus is caused by poor read mapping in the inverted region itself (line 125). If this is true, we would expect strong association in the regions immediately outside the inversion. From Figure S3, there are four discrete peaks of association, and the location of dsx and the inversion are not indicated, so it is difficult to understand the authors' interpretation in light of this figure.
We indeed observe that the regions flanking dsx show the highest association in our GWAS. This is a bit tricky to demonstrate in the figure, as the genome is not assembled at the chromosome level. However, the association peaks occur on scf 908437033 at positions 2192979, 1181012, and 1352228 (Fig. S3c, Table S3), while dsx is located between positions 1938098 and 2045969. We will add the position of dsx to the figure legend of the revised manuscript.
(5) Form theseus
Since there appears to be only one sequence available for form theseus (actually it is said to be "P. javanus f. polytes/theseus"), is it reasonable to conclude that "the dsx coding sequence of f. theseus was identical to that of f. polytes in both P. javanus and P. alphenor" (Line 151)? Looking at the Clarke and Sheppard (1972) paper cited in the statement that "f. polytes and f. theseus show equal dominance" (line 153), it seems to me that their definition of theseus is quite different from that here. Without addressing this discrepancy, the results are difficult to interpret.
Among the P. javanus individuals we sampled, we obtained just one individual with f. theseus and the H P allele; however, from a previously published study (Zhang et al. 2017) we were able to add nine more individuals of this form (Fig. S4b and S7). While we did not show these individuals in Fig. 3 (which was based on PCR amplification and sequencing of individual exons of dsx), all analyses with sequence data were performed on 10 theseus individuals in total. In Zhang et al., the authors observed what we now know are species-specific differences, not allele-specific differences, when comparing theseus and polytes dsx alleles. Our observations were consistent with these findings.
-
-
toribix.bergbuilds.domains toribix.bergbuilds.domains
-
This contrasts with today’s relationships which often rely on face-to-face interaction or at least seeing a picture of them before forming a relationship.
I'd like to comment on this. I would disagree with what you said about today's relationships relying on face to face interaction. I think that connections forming through letters is very similar to how connections can form through social media and texting today. However, I would agree with the picture part. It's amazing how the woman was able to fall fully in love with the man without knowing what he looked like. That is something I cannot imagine happening in today's world.
-
-
trailhead.salesforce.com trailhead.salesforce.com
-
DataPacks API
Is this an alternative to the
Metadata API
? -
The automation server uses IDX Build Tool and the SFDX-CLI (Salesforce Command Line Interface) for automated deployment
Why both?
-
-
mlpp.pressbooks.pub mlpp.pressbooks.pub
-
Smith won handily in the nation’s largest cities
Was this because there was a higher worker population? He favored the protection of workers.
-
Harding took vacation in the summer of 1923, announcing he intended to think deeply about how to deal with his “God-damned friends”.
This is so funny. He knew his friends sucked. Why did he think they would do any differently and not embarrass him?
-
tores and homes were looted and set on fire. When Tulsa firefighters arrived, they were turned away by white vigilantes
This is awful. The people in Tulsa were truly thriving, and in literally a day, everything was gone.
-
On May 21, 1927, Lindbergh concluded the first ever nonstop solo flight from New York to Paris. Armed with only a few sandwiches, bottles of water, paper maps, and a flashlight, Lindbergh successfully navigated over the Atlantic Ocean in thirty-three hours.
This was an amazing accomplishment and I think he helped restore faith and hope in many Americans.
-
Pickford and other female stars popularized the image of the “flapper,” an independent woman who favored short skirts, makeup, and cigarettes.
This was a good way for women to get more independence and change society's rules/views on women.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
So focusing specifically on the animals for life scenario.
So, as we'll do in the upcoming demo lesson, to implement a truly resilient architecture for NAT services in a VPC, you need a NAT gateway in a public subnet inside each availability zone that the VPC uses.
So just like on the diagram that you've gone through now.
And then as a minimum, you need private route tables in each availability zone.
In this example, AZA, AZB, and then AZC.
Each of these would need to have its own route table, which would have a default IPv4 route pointing at the NAT gateway in the same availability zone.
That way, if any availability zone fails, the others could continue operating without issues.
Now, this is important.
I've seen it come up in a few exam questions.
They suggest that one NAT gateway is enough, that a NAT gateway is truly regionally resilient.
This is false.
A NAT gateway is highly available in the availability zone that it's in.
So if hardware fails or it needs to scale to cope with load, it can do so in that AZ.
But if the whole AZ fails, there is no failover.
You provision a NAT gateway into a specific availability zone, not the region.
It's not like the internet gateway, which by default is region resilient.
For a NAT gateway, you have to deploy one into each AZ that you use if you need that region resilience.
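To make this concrete, here is a minimal boto3 sketch of that pattern; it is not part of the lesson, and the availability zone names, subnet IDs, and route table IDs are placeholders you would replace with your own.

```python
# Minimal sketch (placeholder IDs): one NAT gateway per AZ, plus a default
# IPv4 route in each AZ's private route table pointing at that AZ's gateway.
import boto3

ec2 = boto3.client("ec2")

# Hypothetical mapping of AZ -> (public subnet ID, private route table ID).
az_layout = {
    "us-east-1a": ("subnet-pub-a", "rtb-priv-a"),
    "us-east-1b": ("subnet-pub-b", "rtb-priv-b"),
    "us-east-1c": ("subnet-pub-c", "rtb-priv-c"),
}

for az, (public_subnet_id, private_rtb_id) in az_layout.items():
    # Each NAT gateway needs an Elastic IP allocation.
    eip = ec2.allocate_address(Domain="vpc")

    nat_gw_id = ec2.create_nat_gateway(
        SubnetId=public_subnet_id,
        AllocationId=eip["AllocationId"],
    )["NatGateway"]["NatGatewayId"]

    # Wait until the NAT gateway is available before routing traffic to it.
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gw_id])

    # Default IPv4 route for this AZ's private route table.
    ec2.create_route(
        RouteTableId=private_rtb_id,
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_gw_id,
    )
    print(f"{az}: {nat_gw_id} now handles 0.0.0.0/0 for {private_rtb_id}")
```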
Now, my apologies in advance for the small text.
It's far easier to have this all on screen at once.
I mentioned at the start of the lesson that NAT used to be provided by NAT instances, and these are just the NAT process running on an EC2 instance.
Now, I don't expect this to feature on the exam at this point.
But if you ever need to use a NAT instance, by default, EC2 filters all traffic that it sends or receives.
It essentially drops any data that is on its network card when that network card is not either the source or the destination.
So if an instance is running as a NAT instance, then it will be receiving some data where the source address is that of other resources in that VPC.
And the destination will be a host on the internet.
So it will neither be the source nor the destination.
So by default, that traffic will be dropped.
And if you need to allow an EC2 instance to function as a NAT instance, then you need to disable a feature called source and destination checks.
This can be disabled via the console UI, the CLI, or the API.
The only reason I mention this is I have seen this question in the exam before, and if you do implement this in a real-world production-style scenario, you need to be aware that this feature exists.
I don't want you wasting your time trying to diagnose this feature.
So if you just right-click on an instance in the console, you'll be able to see an option to disable source and destination checks.
And that is required if you want to use an EC2 instance as a NAT instance.
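If you ever script this rather than using the console, a minimal boto3 sketch of the same setting is shown below; it is not from the lesson, and the instance ID is a placeholder.

```python
# Minimal sketch (placeholder instance ID): disable source/destination checking
# so the instance can forward traffic it is neither the source nor destination of.
import boto3

ec2 = boto3.client("ec2")
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",      # hypothetical NAT instance
    SourceDestCheck={"Value": False},
)
```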
Now, at the highest level, architecturally, NAT instances and NAT gateways are kind of the same.
They both need a public IP address.
They both need to run in a public subnet, and they both need a functional internet gateway.
But at this point, it's not really preferred to use EC2 running as a NAT instance.
It's much easier to use a NAT gateway, and it's recommended by AWS in most situations.
But there are a few key scenarios where you might want to consider using an EC2-based NAT instance.
So let's just step through some of the criteria that you might be looking at when deploying NAT services.
If you value availability, bandwidth, low levels of maintenance, and high performance, then you should use NAT gateways.
That goes for real-world production usage, as well as being the default for answering any exam questions.
A NAT gateway offers high-end performance; it scales, and it's custom designed to perform network address translation.
A NAT instance in comparison is limited by the capabilities of the instance it's running on, and that instance is also general purpose, so it won't offer the same level of custom-designed performance as a NAT gateway.
Now, availability is another important consideration; a NAT instance is a single EC2 instance running inside an availability zone.
It will fail if the EC2 hardware fails.
It will fail if its storage fails or if its network fails, and it will fail if the AZ itself fails entirely.
A NAT gateway has some benefits over a NAT instance.
So inside one availability zone, it's highly available, so it can automatically recover, it can automatically scale.
So it removes almost all of the risks of outage versus a NAT instance.
But it will still fail entirely if the AZ fails entirely.
You still need to provision multiple NAT gateways, spread across all the AZs that you intend to use, if you want to ensure complete availability.
For maximum availability, deploy a NAT gateway in every AZ you use.
This is critical to remember for the exam.
Now, if cost is your primary concern, if you're a financially challenged business, or if the VPC that you're deploying NAT services into is just a test VPC or something with incredibly low volume, then a NAT instance can be cheaper.
It can also be significantly cheaper at high volumes of data.
You've got a couple of options.
You can use a very small EC2 instance, even ones that are free tier eligible to reduce costs, and the instances can also be fixed in size, meaning they offer predictable costs.
A NAT gateway will scale automatically, and you're billed for both the NAT gateway and the amount of data transferred, which increases as the gateway scales.
A NAT gateway is also not free tier eligible.
Now, this is really important because when we deploy these in the next demo lesson, it's one of those services that I need to warn you will come at a cost, so you need to be aware of that fact.
You will be charged for a NAT gateway regardless of how small the usage is.
NAT instances also offer other niche advantages because they're just EC2 instances.
You can connect to them just like you would any other EC2 instance.
You can multi-purpose them, so you can use them for other things, such as bastion hosts.
You can also use them for port forwarding, so you can have a port on the instance that can be connected to externally over the public internet, and have this forwarded on to an instance inside the VPC.
Maybe port 80 for web, or port 443 for secure web.
You can be completely flexible when you use NAT instances.
With a NAT gateway, this isn't possible because you don't have access to manage it.
It's a managed service.
Now, this comes up all the time in the exam, so try and get it really clear in your memory: a NAT gateway cannot be used as a bastion host.
It cannot do port forwarding because you cannot connect to its operating system.
Now, finally, this is again one that comes up in the exam.
NAT instances are just EC2 instances, so you can filter traffic using the network ACLs on the subnet the instance is in, or security groups directly associated with that instance.
NAT gateways don't support security groups.
You can only use NACLs with NAT gateways.
This one comes up all the time in the exam, so it's worth noting down and maybe making a flashcard with.
Now, a few more things before we finish up.
What about IP version 6?
The focus of NAT is to allow private IPv4 addresses to be used to connect in an outgoing-only way to the AWS public zone and public internet.
Inside AWS, all IPv6 addresses are publicly routable, so this means that you do not require NAT when using IPv6.
The internet gateway works directly with IPv6 addresses, so if you give an instance in a private subnet a default IPv6 route to the internet gateway, it will become a public instance.
As long as you don't have any NACLs or security groups blocking it, any IPv6 address in AWS can communicate directly with the AWS public zone and the public internet.
So the internet gateway can work directly with IPv6.
NAT gateways do not work with IPv6; they're not required, and they don't function with IPv6.
So for the exam, if you see any answer options which mention IPv6 and NAT gateways together, you can exclude them.
NAT gateways do not work with IPv6, and I repeat it because I really want it to stick in your memory.
So with any subnet inside AWS which has been configured for IPv6, if you add the IPv6 default route, which is ::/0, and you point that route at the internet gateway as a target, that will give instances in that subnet bi-directional connectivity to the public internet, and it will allow them to reach the AWS public zone and public services.
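As a small illustration (not from the lesson; the route table and internet gateway IDs are placeholders), adding that IPv6 default route with boto3 might look like this:

```python
# Minimal sketch (placeholder IDs): add an IPv6 default route (::/0) via the
# internet gateway so instances in the associated subnet get public IPv6 connectivity.
import boto3

ec2 = boto3.client("ec2")
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",   # hypothetical route table
    DestinationIpv6CidrBlock="::/0",
    GatewayId="igw-0123456789abcdef0",      # hypothetical internet gateway
)
```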
One service that we'll be talking about later on in the course when I cover more advanced features of VPC is a different type of gateway, known as an egress-only internet gateway.
This is a specific type of internet gateway that works only with IPv6, and you use it when you want to give an IPv6 instance outgoing-only access to the public internet and the AWS public zone.
So don't worry, we'll be covering that later in the course, but I want to get it really burned into your memory that you do not use NAT, and you do not use NAT gateways, with IPv6.
It will not work.
Now to get you some experience of using NAT gateways, it's time for a demo.
In the demo lesson, I'm going to be stepping you through what you need to do to provision a completely resilient NAT gateway architecture.
So that's using a NAT gateway in each availability zone, as well as configuring the routing required to make it work.
It's going to be one of the final pieces to our multi-tier VPC and it will allow private instances to have full outgoing internet access.
Now I can't wait for us to complete this together.
It's going to be a really interesting demo, one that will be really useful if you're doing this in the real world or if you have to answer exam questions related to NAT or NAT gateways.
So go ahead, complete the video and when you're ready, join me in the demo.
-
-
docdrop.org docdrop.orgview6
-
We should incorporate into our teaching the assets low-income students bring to school. If poor students' resilience, flexibility, and persistence toward a goal is affirmed and integrated into the school culture, students would not drop out at the rate they do
This recommendation emphasizes the importance of a more inclusive and asset-based approach to the educational process. By recognizing the unique strengths of low-income students as part of a school's culture, educational institutions can not only help these students overcome educational challenges, but also build a more supportive and diverse learning environment for all students. This approach not only helps to reduce dropout rates, but also fosters the holistic development of all students.
-
Teachers can play a major role in helping students feel engaged and con-nected to their learning communities. First, we need to make the invisible visi-ble-to unveil the hidden curriculum. And more important, we need to encourage students and colleagues to question the legitimacy of the hidden curriculum itself. I was a student who would have benefited from strong academic mentoring. I did not know what I did not know. I was subject to an establishment that did not value what I did know: my resiliency, my outspokenness, and my other strengths.
The hidden curriculum may include biases or assumptions that may disadvantage certain groups of students. By encouraging students and colleagues to question the legitimacy of these hidden curricula, a more open and inclusive learning environment can be fostered to ensure that all students' voices are heard and their needs are met.
-
Although I socialized with both Black and White students, I self-identified as "Black." After the name-calling, and after I realized the students who were not compliant and submissive were the ones who were ridiculed, I questioned my friendships with White students.
This passage emphasizes the importance of addressing race and social class in educational settings. Educational institutions need to recognize and address these systemic biases to ensure an equitable and inclusive learning environment for all students. Measures such as increasing diversity training for teachers and administrators and advocating for inclusive policies and practices can help break down such biases and promote a more equitable educational environment.
-
ules. In this way I was raised to be compliant, one element of the hidden curriculum in our schools. This insistence on compliance is also one aspect of schooling that keeps some students from feeling they can challenge the very structures that repress them. They often feel silenced and alienated from public education at an early age. In my household, we did not have many books. I believe my lack of books contributed to my below average reading test scores. In third grade I was read-ing at a second-grade level. Research indicates that social class can influence cognitive abilities because a lack of money results in fewer experiences at muse-ums and traveling, fewer books in the home, and less access to preschool educa-tion (Bowles & Gintis, 2002; Good & Brophy, 1987).
The hidden, conformist curriculum does, in fact, discourage students' freedom to question and resist repressive systems, creating alienation. Students may feel that their opinions or views aren't heard and quietly disengage from their education. Moreover, the absence of materials like books can also affect academic skills. Inadequate access to learning materials at home can seriously impact literacy and learning outcomes.
-
The same unease students feel with their more affluent peers can transfer over to their professors. They may not reach out to their professors when they are performing poorly in the class, fearing that they will be judged as lacking in the ability to succeed in schoo
Students of lower economic status may believe that their performance will be viewed as less than competent, and thus be reluctant to communicate with their professors. This communication barrier can prevent them from obtaining the necessary support and guidance, further exacerbating academic performance problems. Such psychological barriers can affect students' long-term educational and career paths
-
students rarely out themselves as being poor. You could not tell they struggle financially by the papers they turn in to me or by what they say when we discuss things in my sociology classes at the University of St. Thomas. During office hours, however, students reveal to me that they grew up poor, and often they tell me that they are the first person from their family to go to college. They talk about the social distance they feel from their peers who have money. They tell me t
Economic inequality in scholarly settings can create a social distance that's seldom perceived by those who don't feel it. Schools have a hidden curriculum that also operates outside the classroom, shaping social norms and reifying class differences. For low-income students, not being afforded materials or experiences comparable to their wealthier peers creates isolation and reinforces alienation. These students usually build social networks of peers from the same income bracket, which helps them cope with such problems but can reduce their opportunities to socialize with more diverse peer groups.
-
-
www.sjsu.edu www.sjsu.edu
-
most clocks were used for astronomical and astrological purposes rather than for telling the time of day
Was this in order to see the time of day rather than the actual time?
-
en seeking knowledge would travel to Spain to obtain Muslim science
Is this due to the fact that the highest form of religous activity is knowledge?
-
earning and gaining knowledge is the highest form of religious activity for Muslims
Why is this the highest form of religous activity compared to others?
-
-
docdrop.org docdrop.org
-
There’s nothing we can’t make into a story. There’s not anything that isn’t already one.
I like the final statement, and I agree that everything can be made into a story if done right and has the potential to be part of a meaningful narrative.
-
Digression, I’ve always thought, gets a bad rap. The word itself implies that there’s a proper gress from which one has strayed, that every life is a line. But surely linearity is something we impose only afterward, when it’s time to make a narrative, when it’s time to comb out our gresses and untangle them into something we can call progress or congress
This is in contrast to the idea that we should think linearly and focus on one thing. Instead, we should allow ourselves to wander in thought and action, as it can lead to unexpected insights, ideas, and understanding.
-
-
docdrop.org docdrop.orgview5
-
Without an adult to encourage her to take algebra, the gateway to college preparatory math and science courses, or to advise her on where she might seek academic support, Chantelle made a decision that is likely to affect her preparation for college and therefore will have bearing in the long term on her opportunities after high school. By taking prealgebra in the ninth grade, Chantelle is all but ensured that she will be unable to meet the admissions requirements to the UC or California State University (CSU) systems. Given that so much is at stake, it must be recognized that a system of course assignment that allows students to choose which classes to take will invariably work better for some than others. Jennifer's words are equally revealing. Like many of Berkeley High's more affluent, white ninth graders, she did not attend Berkeley's public school system. In fact, according to school records, some 12 percent of Berkeley High School's class of 2000 attended private
Chantelle's situation is representative of many students who face similar challenges, and it will take a concerted effort on the part of education policymakers, school administrators, and teachers to ensure that every student is able to make the most favorable decisions academically by providing additional resources and support. This includes strengthening career guidance services and implementing more comprehensive academic support systems in schools.
-
Research has shown that economic capital, that is, the wealth and income of parents, is one of the primary factors influencing student achievement (Coleman and others, 1966; Rothstein, 2004; Farkas, 2004). Student achievement is also influenced by more subtle resources such as social capital, the benefits derived from connections to networks and individuals with power and influence (Coleman, 1988; Stanton-Salazar, 1997, 2001; Noguera, 2003), and cultural capital (Bourdieu and Wacquant, 1992), the tastes, styles, habits, language, behaviors, appearance, and customs that serve as indicators of status and privilege.
This emphasizes the significant role of economic capital in student achievement. Economic capital provides students with a good education, opportunities for extracurricular activities, and security that can be applied toward improving their learning. Social capital comes into play too, because connections to influential people or networks can grant access to opportunities one might not otherwise have had. That involvement can translate into mentorship and knowledge, and these things can motivate students.
-
As the comments from these two students show, some students have more information and a clearer sense of how the school works (such as the classes they need to take) than others. In addition, more affluent students like Jennifer can rely on the resources of their parents (private tutors and counselors, the
The “wealthier students” mentioned in the article, such as Jennifer, were able to rely on additional resources provided by her parents, such as private tutors and counselors, which helped her better understand and navigate the school system. This reveals how wealth translates into an educational advantage, providing children from affluent families with additional support and opportunities that are not available to other, less well-off students.
-
BHS). Our examination of school structures also includes a focus on the organization of the school: the decentralized nature of decision making within departments, the distribution of authority and responsibility among administrators, the accountability (or lack thereof) and function of special programs (such as English as a Second Language, Advanced Placement, and Special Education). We examine how these structures shape and influence the acad
Teaching quality, course content, and the allocation of resources are among the most important factors affecting student learning. These structures shape the opportunities given to students and help or hinder their success. Discrepancies in decision-making and in the division of authority and responsibility have a direct impact on a school's capacity to serve its students. When decision-making is poorly coordinated or accountability is inadequate, program and policy implementation becomes uneven and may needlessly perpetuate inequities.
-
thers
This paragraph highlights the significant impact of economic, social, and cultural capital on students' educational experiences and opportunities. It emphasizes how students from low-income backgrounds, like Chantelle, may lack the support and resources needed to make informed choices about their education, leading to long-term consequences for their college readiness and future opportunities. In contrast, more affluent students often have better access to guidance and advanced courses, revealing systemic inequalities in the education system. Overall, it calls attention to the need for equitable support in schools to help all students succeed.
-
-
www.sjsu.edu www.sjsu.edu
-
Old Silk Road
The silk road was a way to trade different imported goods
-
There is not doubt that the Chinese invented gunpowder.
Gunpowder is an important military advancement that has allowed us to create better military strategies.
-
paper is one of the Chinese technologies that we can trace in its transfer to Western Europe.
Paper is an advancement that we use everyday and will continue for the future.
-
-
trailhead.salesforce.com trailhead.salesforce.com
-
The OmniStudio Tracking Service is an event-tracking service that captures details of actions that users perform
All actions? Does it require configuration to be useful?
-
-
docdrop.org docdrop.orgview5
-
Harold's mother is as passionate as Garrett's parents about providing what it takes for her children to be successful and happy, but she sees her role as providing food, "clothing and shelter, teaching the difference between right and wrong, and providing comfort."8
Harold's mother was passionate and committed to the care of her children and did her best to provide for their basic needs and education, but was more strapped for material resources than Garrett's family. This suggests that although parents share the same desire to care for and educate their children, differences in economic conditions make their roles and the support they may provide different.
-
The study first assessed the children shortly after they began kindergarten, providing a picture of their skills at the starting line of their formal schooling. It shows that children from families in the top 20 percent of the income distribution already outscore children from the bottom 20 percent by 106 points in early literacy
The 106-point gap mentioned in the article highlights the importance of early educational intervention. For children from low-income families, the kindergarten years may be a critical time to close the academic gap.
-
The Hart and Risley study is a sobering reminder that it takes more than money to promote young children's development.28 Parents from higher-income families appear to offer their children language advantages that would persist even if their annual incomes rose or fell by $10,000 or even $20,000. Research has shown that maternal education and IQ levels, not family income, are most closely associated with parental use of language.29 So while money matters, other family factors do too. Lareau's detailed look at the lives of the children in her study revealed other striking differences between high- and low-income families, including the degree to which middle-class parents "managed" their children's lives, while working-class and poor parents left children alone to play and otherwise organize their activities.
The Hart and Risley paper certainly contains an important clue to what goes on with children. There is something profound about a child’s upbringing in that money does not have to determine the child’s language growth. This focus on maternal education and IQ also emphasises the need for a linguistically enriching context in which a child can develop his or her mind.
-
study of children who entered kindergarten in the fall of 1998 allow for a more detailed look at income-based gaps as children progress through school (figure 3.1).1 As before, a 100-point difference in figure 3.1 corresponds to one standard deviation. Each bar shows the relative size of the gap between high- and low-income children
The math and reading gaps between children from high- and low-income families have been increasing sharply over the past 30 years. This widening inequality is alarming because it points to a systemic problem of educational inequity and unequal resources. It is valuable that the national study referenced here examines these differences in greater depth throughout the school years. This statistic makes clear that early intervention is needed to address socioeconomic inequalities and ensure equitable educational opportunity for all children. Policymakers and teachers both need to work to close these gaps and provide the support that low-income students need.
-
It is easy to imagine how the childhood circumstances of these two young men may have shaped their fates. Alexander lived in the suburbs while Anthony lived in the city center. Most of Alexander's suburban neighbors lived in families with incomes above the $125,000 that now separates the richest 20 percent of children from the rest. Anthony Mears's school served pupils from families whose incomes were near or below the $27,000 threshold separating the bottom 20 percent (see figure 2.4)
This passage emphasizes the significant impact that place of residence has on an individual's growth. Alexander and Anthony's places of residence and their families' economic status foreshadowed the vast differences in their educational and social opportunities. The passage challenges readers to think about how their socioeconomic status shapes their personal opportunities and to reflect on whether this status quo is equitable or sustainable.
-
-
www.sjsu.edu www.sjsu.edu
-
However, the most significant difference between the clock and other machines was in its effect on society.
Societal impacts are very important because society can either like or hate a technological invention.
-
The plow is considered to be one of the most important (and oldest) technologies developed
The plow was a gateway technological invention that allowed farmers to have clean farms.
-
-
web.p.ebscohost.com web.p.ebscohost.com
-
Once her students realize how easily images can be manipulated, it's harder for them to take anything they see on the internet at face value.
i feel like this was already known with social media. Fake news gets passed around easier
-
High school students and educators have very different perspectives on what AI
how can we get on the same page
-
If the computer makes it up, that must be the right answer."
i feel like students have become more educated in ai. some know that the computer can be wrong, so they have to recheck what the ai spit out
-
-
www.aljazeera.com www.aljazeera.com
-
First, it worked in the interest of those in Britain wishing to dismantle the Ottoman Empire and incorporate parts of it into the British Empire. Second, it resonated with those within the British aristocracy, both Jews and Christians, who became enchanted with the idea of Zionism as a panacea for the problem of anti-Semitism in Central and Eastern Europe, which had produced an unwelcome wave of Jewish immigration to Britain.
Ilan Pappé on why the [[Balfour Declaration]] was welcomed in Britain.
-
The wider historical context goes back to the mid-19th century, when evangelical Christianity in the West turned the idea of the “return of the Jews” into a religious millennial imperative and advocated the establishment of a Jewish state in Palestine as part of the steps that would lead to the resurrection of the dead, the return of the Messiah, and the end of time.
Ilan Pappé on the connection of evangelical Christians with the Zionists, to speed up Jesus's return.
-
-
hypothes.is hypothes.is
-
me (1 and 2), then activate the sidebar b
...
-
-
www.biorxiv.org www.biorxiv.org
-
Author response:
The following is the authors’ response to the original reviews.
Public Reviews:
Reviewer #1 (Public Review):
Summary:
The authors compared four types of hiPSCs and four types of hESCs at the proteome level to elucidate the differences between hiPSCs and hESCs. Semi-quantitative calculations of protein copy numbers revealed increased protein content in iPSCs. In particular, mitochondrial and cytoplasmic proteins in iPSCs were suggested to reflect the state of the original differentiated cells to some extent. However, the most important result of this study is the calculation of the protein copy numbers per cell, and the validity of this result is problematic. In addition, several experiments need to be improved, such as using cells of different genders (iPSC: female, ESC: male) in mitochondrial metabolism experiments.
Strengths:
The focus on the number of copies of proteins is exciting and appreciated if the estimated calculation result is correct and biologically reproducible.
Weaknesses:
The proteome results in this study were likely obtained by simply looking at differences between clones, and the proteome data need to be validated. First, there were only a few clones for comparison, and the gender and number of cells did not match between ESCs and iPSCs. Second, no data show the accuracy of the protein copy number per cell obtained by the proteome data.
We agree with the reviewer that it would be useful to have data from more independent stem cell clones, and ideally an equal gender balance of the donors would be preferable. As usual, practical cost-benefit considerations and the time available affect the scope of work that can be performed. We note that the impact of biological donor sex on proteome expression in iPSC lines has already been addressed in previous studies13. We will, however, revise the manuscript to include specific mention of these limitations and propose a larger-scale follow-up when resources are available.
Regarding the estimation of protein copy numbers in our study, we would like to highlight that the proteome ruler approach we have used has been employed extensively in the field previously, with direct validation of differences in copy numbers provided using orthogonal methods to MS, e.g., FACS2-4,7,10. Furthermore, the original manuscript14 directly compared the copy numbers estimated using the “proteomic ruler” to spike-in protein epitope signature tags and found remarkable concordance. This original study was performed with an older generation mass spectrometer and reduced peptide coverage, compared with the instrumentation used in our present study. Further, we noted that these authors predicted that higher peptide coverage, such as we report in our study, would further increase quantitative performance.
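As a rough illustration of the proteomic ruler principle referred to above, the minimal sketch below estimates copies per cell from MS intensities; the DNA mass per cell and the example intensities are illustrative assumptions, not values from this study.

```python
# Minimal sketch of the "proteomic ruler" copy-number estimate: total histone MS
# signal is used as a proxy for cellular DNA mass, so
# copies per cell ~ (protein intensity / histone intensity) * DNA mass * N_A / molar mass.
AVOGADRO = 6.022e23          # molecules per mol
DNA_MASS_PER_CELL = 6.5e-12  # grams; approximate value assumed for a diploid human genome

def copies_per_cell(protein_intensity, total_histone_intensity, molar_mass_g_per_mol):
    """Estimate protein copies per cell from MS intensities (illustrative only)."""
    protein_mass_per_cell = (protein_intensity / total_histone_intensity) * DNA_MASS_PER_CELL
    return protein_mass_per_cell * AVOGADRO / molar_mass_g_per_mol

# Illustrative example: a 50 kDa protein with intensity 2e9 against a summed
# histone intensity of 5e10 comes out at roughly 3e6 copies per cell.
print(f"{copies_per_cell(2e9, 5e10, 50_000):.2e}")
```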
Reviewer #2 (Public Review):
Summary:
Pluripotent stem cells are powerful tools for understanding development, differentiation, and disease modeling. The capacity of stem cells to differentiate into various cell types holds great promise for therapeutic applications. However, ethical concerns restrict the use of human embryonic stem cells (hESCs). Consequently, induced human pluripotent stem cells (ihPSCs) offer an attractive alternative for modeling rare diseases, drug screening, and regenerative medicine. A comprehensive understanding of ihPSCs is crucial to establish their similarities and differences compared to hESCs. This work demonstrates systematic differences in the reprogramming of nuclear and non-nuclear proteomes in ihPSCs.
We thank the reviewer for the positive assessment.
Strengths:
The authors employed quantitative mass spectrometry to compare protein expression differences between independently derived ihPSC and hESC cell lines. Qualitatively, protein expression profiles in ihPSC and hESC were found to be very similar. However, when comparing protein concentration at a cellular level, it became evident that ihPSCs express higher levels of proteins in the cytoplasm, mitochondria, and plasma membrane, while the expression of nuclear proteins is similar between ihPSCs and hESCs. A higher expression of proteins in ihPSCs was verified by an independent approach, and flow cytometry confirmed that ihPSCs had larger cell sizes than hESCs. The differences in protein expression were reflected in functional distinctions. For instance, the higher expression of mitochondrial metabolic enzymes, glutamine transporters, and lipid biosynthesis enzymes in ihPSCs was associated with enhanced mitochondrial potential, increased ability to uptake glutamine, and increased ability to form lipid droplets.
Weaknesses:
While this finding is intriguing and interesting, the study falls short of explaining the mechanistic reasons for the observed quantitative proteome differences. It remains unclear whether the increased expression of proteins in ihPSCs is due to enhanced transcription of the genes encoding this group of proteins or due to other reasons, for example, differences in mRNA translation efficiency. Another unresolved question pertains to how the cell type origin influences ihPSC proteomes. For instance, whether ihPSCs derived from fibroblasts, lymphocytes, and other cell types all exhibit differences in their cell size and increased expression of cytoplasmic and mitochondrial proteins. Analyzing ihPSCs derived from different cell types and by different investigators would be necessary to address these questions.
We agree with the reviewer that our study does not provide a detailed mechanistic explanation for the quantitative differences observed between the two stem cell types, and we did not claim to have done so. We have now included an expanded section in the discussion where we discuss potential causes. However, in our view, fully understanding the reasons for this difference will likely require extensive in-depth analysis in future studies and cannot be settled by one or two additional supplemental experiments.
We also agree that studying hiPSCs reprogrammed from different cell types, such as blood lymphocytes, would be of great interest. While this is a useful way forward, in practice it will require a very substantial additional commitment of time and resources. We have now included a section within the discussion highlighting this opportunity, to encourage further research in the area.
Recommendations for the authors:
Reviewer #1 (Recommendations For The Authors):
(1) The aizi1 and ueah1 clones, which were analyzed in Figure 1A, were excluded from the proteome analysis. In particular, the GAPDH expression level of the aizi1 clone is similar to that of ESCs and different from other iPSC clones. An explanation of how the clones were selected for proteome analysis is needed. Comparative analyses of iPSCs and ESCs reported in many studies from 2009-2017 (Ref #1-7) have already shown that studies using small numbers of clones claimed differences (Ref #1-3), whereas the differences became indistinguishable when the number of clones was increased (Ref #4-7). Certainly, few studies have been done at the proteome level, so it is important to examine what differences exist in the proteome, and the focus on the amount of protein per cell is interesting. However, if the authors want to describe biological differences, it would be better to acquire the proteome data in biological duplicate and state the reason for selecting the clones used.
(1) M. Chin, Cell Stem Cell, 2009, PMID: 19570518
(2) K. Kim, Nat Biotechnol., 2011, PMID: 22119740
(3) R. Lister, Nature, 2011, PMID: 21289626
(4) A.M. Newman, Cell Stem Cell, 2010, PMID: 20682451
(5) M.G. Guenther, Cell Stem Cell, 2010, PMID: 20682450
(6) C. Bock, Cell, 2010, PMID: 21295703
(7) S. Yamanaka, Cell Stem Cell, PMID: 22704507
We agree with the reviewer that analysing more clones would be beneficial and have included a section on this topic in the discussion. In our study, we only had access to the 4 hESC lines included; therefore, in the original proteomic study we also analysed 4 hiPSC lines that were routinely grown within our stem cell facility. While the stem cell facility expanded its culture of additional hiPSC lines as the study progressed, we unfortunately could not access additional hESC lines.
We agree that ideally combining each biological replicate with additional technical replicates would provide extra robustness. As usual, cost and practical considerations at the time the experiments were performed affected the experimental design chosen. Each experiment was contained within a single TMT batch to avoid the strong batch effects associated with TMT (Brenes et al., 2019).
(2) The iPSC samples used in the proteome analysis comprise two female and two male lines, while the ESC samples comprise three female lines and one male line. The sexes of the cells in the comparative analysis should be matched, because sex differences may bias the results.
While we agree with the reviewer in principle, we have previously performed detailed comparisons of proteome expression in many independent iPSC lines from both biological male and female donors (see Brenes et al., Cell Reports 2021), and it seems unlikely that biological sex differences alone could account for the proteome differences between iPSC and ESC lines uncovered in this study. However, as this is a relevant point, we have revised the manuscript to explicitly mention this caveat within the discussion section.
(3) In Figure 1h, I suspect that the variation shown in the PCA plot is very similar between ESCs and iPSCs. In particular, the authors wrote "copy numbers for all 8 replicates" in the legend, but if Figure 1b was done 8 times, there should be 8 types of cells x 8 measurements = 64 points. Even if iPSCs and ESCs are grouped together, there should be 8 points for each cell type. Is it possible that there is only one TMT measurement for this analysis? If so, at least technical duplicates or biological duplicates would be necessary. I also think each cell line should be plotted individually in the PCA analysis instead of combining the four types of ESCs and iPSCs into one.
We thank the reviewer for bringing this error to our attention. The legend has been corrected to state “for all 8 stem cell lines”. Each dot represents the proteome of one of the 4 hESC and 4 hiPSC lines that were analysed by proteomics.
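For readers who wish to reproduce this type of plot, a minimal sketch of how a PCA of per-cell copy numbers could be computed is given below. This is an illustration only, not the pipeline used in the manuscript; the function name and the choice of a log2 transform are our own assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_of_copy_numbers(copy_numbers, n_components=2):
    """PCA of estimated protein copy numbers per cell.

    copy_numbers : array of shape (n_cell_lines, n_proteins),
                   e.g. 8 x ~7,900 for the 4 hESC and 4 hiPSC lines.
    Copy numbers are log2-transformed (with a pseudocount) so that a
    handful of highly abundant proteins do not dominate the components.
    Returns one PC-score row per cell line and the explained variance ratios.
    """
    log_copies = np.log2(copy_numbers + 1.0)
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(log_copies)   # one point per cell line
    return scores, pca.explained_variance_ratio_
```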
(4) It is necessary to show which functions are enriched among the 4,408 proteins whose copies per cell were increased in iPSCs, as shown in Figure 2B.
The enrichment analysis requested has been performed and is now included as a new Supplemental Figure 2. We find it very interesting that, despite the large number of proteins involved (4,408), the enrichment analysis still shows clear enrichment for specific cellular processes. The summary plot, generated using affinity propagation within WebGestalt, is included here:
Author response image 1.
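For context, the statistic underlying this kind of over-representation analysis is typically a one-sided hypergeometric (Fisher's exact) test. The sketch below illustrates the principle only; it is not the WebGestalt implementation, and the function and argument names are illustrative.

```python
from scipy.stats import hypergeom

def overrepresentation_p(n_background, n_in_category, n_hits, n_hits_in_category):
    """One-sided hypergeometric p-value for category over-representation.

    n_background       : all quantified proteins (e.g. ~7,900)
    n_in_category      : background proteins annotated to the category
    n_hits             : proteins in the list of interest (e.g. the 4,408
                         proteins with increased copies per cell in hiPSCs)
    n_hits_in_category : proteins from that list annotated to the category
    """
    # P(X >= n_hits_in_category) when drawing n_hits proteins without replacement
    return hypergeom.sf(n_hits_in_category - 1, n_background, n_in_category, n_hits)
```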
(5) The Proteomic Ruler method used in this study is a semi-quantitative method to calculate protein copy numbers and is a concentration estimation method. Therefore, if the authors want to have a biological discussion based on the results, they need to show that the estimated concentrations are correct. For example, there are Western blotting (WB) results for genes with no change in protein levels between hESC and hiPSC in Fig. 6i,j, but WB results for the group of genes that are claimed to have changed are not shown anywhere in the paper. Also, there is no difference in total protein level between iPSCs and ESCs in the Ponceau staining in Fig. 6i,j. WB results for at least a few genes are needed to show whether the concentration estimates obtained from the proteome analysis are plausible. If the protein per cell is increased in these iPSC clones, performing WB analysis using an equal number of cells would be better.
Regarding the ‘proteomic ruler’ approach, we would like to highlight that this method has previously been used extensively in the field, with detailed validation, as already explained above. It is also not ‘semi-quantitative’: it can estimate absolute abundance as well as concentrations. Our work does not use the concentration formulas, but the estimation of protein copy numbers, which was shown to closely match the copy numbers observed when spike-ins are used14.
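To make the copy-number estimation concrete, a minimal sketch of the histone-based proteomic ruler calculation is given below. It follows the published logic of ref. 14, i.e. that the total histone mass per cell approximately equals the DNA mass per cell (~6.5 pg for a diploid human cell); the function and variable names are illustrative and this is not the authors' analysis code.

```python
import pandas as pd

AVOGADRO = 6.022e23               # molecules per mole
DNA_MASS_PER_CELL = 6.5e-12       # grams of DNA in a diploid human cell (~6.5 pg)

def proteomic_ruler_copies(intensities, molar_masses, is_histone):
    """Estimate protein copies per cell from MS intensities.

    intensities  : pd.Series of summed MS signal per protein (arbitrary units)
    molar_masses : pd.Series of protein molar masses in Da (g/mol), same index
    is_histone   : boolean pd.Series marking histone entries

    Core assumption of the proteomic ruler: the summed histone signal
    corresponds to a protein mass equal to the cell's DNA mass.
    """
    histone_signal = intensities[is_histone].sum()
    # protein mass per cell, scaled so that total histone mass equals DNA mass
    mass_per_cell = intensities / histone_signal * DNA_MASS_PER_CELL
    # convert grams per cell to molecule copies via molar mass and Avogadro's number
    return mass_per_cell / molar_masses * AVOGADRO
```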
For the additional validation by Western blotting (WB) provided here, we prioritised the pluripotency markers, which are vital for determining the pluripotency state of the hESCs and hiPSCs, as well as histone markers. We have included a section in the discussion concerning additional validation data and agree that, in general, further validation is always useful.
(6) Regarding the experiment shown in Figure 4l, the iPSC line used (wibj2) is female and WA01 (H1) is male. Certainly, there is a difference in the P/E control ratio, but isn't this just a sex difference? The sexes of the cells need to be matched.
We accept that ideally the sexes of the donors should have been matched and have mentioned this within the discussion. Nonetheless, as previously mentioned, our previous detailed proteomic analyses of multiple hiPSC lines13 derived from both biological male and female donors provide relevant evidence that the results shown in this study are not simply a reflection of the sex of the donors for the respective iPSC and ESC lines. When comparing eroded and non-eroded female hiPSCs to male hiPSCs, we found no significant differences in any electron transport chain or TCA cycle proteins between males and females.
Minor comments:
(1) Method: Information on the hiPSCs and hESCs used in this study should be described. In particular, the type of differentiated cells, gender, and protocols that were used in the reprogramming are needed.
We agree with the reviewer on this. The hiPSC lines were generated by the HipSci consortium, as described in the flagship HipSci paper15, which we cite and which specifies in great detail the reprogramming protocols and quality control measures, including analysis of copy number variations15. However, we accept that this information may not be easily accessible to readers and that it is better to include it explicitly in our present manuscript rather than expecting readers to consult the flagship paper. These details have therefore been added to the revised version.
(2) Method: For Figure 1a and Figure 6i, j, the antibody information for Nanog, Oct4, Sox2, and Gapdh is not provided in the methods and needs to be shown.
The antibody details relating to these have now been included within the methods section.
(3) Method: In Figure 1b and other figures, the authors should indicate which iPSC corresponds to which TMT label; the data in the Supplemental Table also needs to indicate which data is which clone.
We have now added this to the methods section.
(4) Method: The method of the FACS experiment used in Figure 2 should be described.
The methods related to the FACS analysis have now been included within the manuscript.
(5) Method: The cell name used in the mitochondria experiment shown in Figure 4 is listed as WA01, which is thought to be H1. Variations in notation should be corrected.
This has now been corrected.
(6) Method: The name of the cell clone shown in Figure 3l,m should be mentioned.
We have now added these details to the corresponding figure and legend.
Reviewer #2 (Recommendations For The Authors):
This study utilized quantitative mass spectrometry to compare protein expression in 4 independently derived ihPSC and 4 hESC cell lines. The investigation quantified approximately 7,900 proteins and, employing the "Proteome ruler" approach, estimated protein copy numbers per cell. Principal component analyses, based on protein copy number per cell, clearly separated hiPSC and hESC, while different hiPSCs and hESCs grouped together. The study revealed a global increase in the expression of cytoplasmic, mitochondrial, membrane transporter, and secreted proteins in hiPSCs compared to hESCs. Interestingly, standard median-based normalization approaches failed to capture these differences, and the disparities became apparent only when protein copy numbers were adjusted for cell numbers. Increased protein abundance in hiPSC was associated with augmented ribosome biogenesis. Total protein content was >50% higher in hiPSCs compared to hESCs, an observation independently verified by total protein content measurement via the EZQ assay and further supported by the larger cell size of hiPSCs in flow cytometry. However, the cell cycle distribution of hiPSC and hESC was similar, indicating that the difference in protein content was not due to variations in the cell cycle. At the phenotypic level, differences in protein expression also correlated with increased glutamine uptake, enhanced mitochondrial potential, and lipid droplet formation in hiPSCs. ihPSCs also expressed higher levels of extracellular matrix components and growth factors.
Overall, the presented conclusions are adequately supported by the data. Although the mechanistic basis of proteome differences in ihPSC and hESC is not investigated, the work presents interesting findings that are worthy of publication. Below, I have listed my specific questions and comments for the authors.
(1) Figure 1a displays immunoblots from 6 iPSC and 4 ESC cell lines, with 8 cell lines (4 hESC, 4 hiPSC) utilized in proteomic analyses (Fig. 1b). The figure legend should specify the 8 cell lines included in the proteomic analyses. The manuscript text describing these results should explicitly mention the number and names of cell lines used in these assays.
We agree with the reviewer and have now marked in figure 1 all the lines that were used for proteomics and have added a section in the methods specifying which cell lines were analysed in each TMT channel.
(2) In most figures, the quantitative differences in protein expression between hiPSC and hESC are evident, and protein expression is highly consistent among different hiPSCs and hESCs. However, the glutamine uptake capacity of different hiPSC cell lines, and to some extent hESC cell lines, appears highly variable (Figure 3e). While proteome changes were measured in 4 hiPSCs and 4 hESCs, the glutamine uptake assays were performed on a larger number of cell lines. The authors should clarify the number of cell lines used in the glutamine uptake assay, clearly indicating the cell lines used in the proteome measurements. Given the large variation in glutamine uptake among different cell lines, it would be useful to plot the correlation between the expression of glutamine transporters and glutamine uptake in individual cell lines. This may help understand whether differences in glutamine uptake are related to variations in the expression of glutamine transporters.
The “proteomic ruler” has the capacity to estimate protein copy numbers per cell; as such, changes in the absolute number of cells analysed do not cause major complications in quantification. Furthermore, TMT-based proteomics is the most precise proteomics method available, with the same peptides detected in all samples across the same data points and peaks, as long as the analysis is done within a single batch, as is the case here.
The glutamine uptake assay is much more sensitive to variation in the number of cells. Cell numbers were set by plating approximately 5 × 10^4 cells two days before the assay, which introduces variability. Furthermore, hESCs and hiPSCs are more adhesive than the cells used in the original protocol, hence the quench data were noisier for these lines, making the data from the assay more variable.
(3) In Figure 4j, it would be helpful to indicate whether the observed differences in the respiration parameters are statistically significant.
We have now modified the plot to show which proteins were significantly different.
(4) The iPSCs used here are generated from human primary skin fibroblasts. Different cells vary in size; for instance, fibroblast cells are generally larger than blood lymphocytes. This raises the question of whether the parent cell origin impacts differences in hiPSCs and hESC proteomes. For example, do the authors anticipate that hiPSCs derived from small somatic cells would also display higher expression of cytoplasmic, mitochondrial, and membrane transporters compared to ESC? The authors may consider discussing this point.
This is a very interesting point. We have now added an extension to the discussion focussed on this subject.
(5) One wonders if the "Proteome ruler" approach could be applied retrospectively to previously published ihPSC and hESC proteome data, confirming higher expression of cytoplasmic and mitochondrial proteins in ihPSCs, which may have been masked in previous analyses due to median-based normalization.
We agree with the reviewer and think this is a very good suggestion. Unfortunately, in the main proteomic papers comparing hESCs and hiPSCs16,17, the authors did not upload their raw files to a public repository (this was not mandatory at the time), and they used the International Protein Index (IPI), which is a discontinued database. The raw files therefore cannot be reprocessed, and the database does not match modern SwissProt entries, so reprocessing the previous data was impractical.
(6) The work raises a fundamental question: what is the mechanistic basis for the higher expression of cytoplasmic and mitochondrial proteins in ihPSCs? Conceivably, this could be due to two reasons: (a) Genes encoding cytoplasmic and mitochondrial proteins are expressed at a higher level in ihPSCs compared to hESC. (b) mRNAs encoding cytoplasmic and mitochondrial proteins are translated at a higher level in ihPSCs compared to hESC. The authors may check published transcriptome data from the same cell lines to shed light on this point.
This is a very interesting point. We believe that the reprogrammed cells contain mature mitochondria, which do not fully regress upon reprogramming, and that this can establish a growth advantage in the normoxic environment in which the cells are grown. Unfortunately, the available transcriptomic data lacked spike-ins and thus only enable comparison of concentrations, not copy numbers13. Therefore, we could not determine with the available data whether there was an increase in the copies of specific mRNAs. However, this would be very interesting to analyse in a future study that includes a transcriptomic dataset with spike-ins.
Reviewer #3 (Recommendations For The Authors):
It is unclear whether changes in protein levels relate to any phenotypic features of cell lines used. For example, the authors highlight that increased protein expression in hiPSC lines is consistent with the requirement to sustain high growth rates, but there is no data to demonstrate whether hiPSC lines used indeed have higher growth rates.
We respectfully disagree with the reviewer on this point. Our data show that hESCs and hiPSCs differ significantly in protein mass and cell size, with the MS data validated by the EZQ assay and FACS, while showing no significant differences in their cell cycle profiles. Thus, with an unchanged cell cycle, the increased size and protein content require higher rates of biosynthesis to sustain the increased mass, which is what we observe.
The authors claim that the cell cycle of the lines is unchanged. However, no details of the method for assessing the cell cycle were included so it is difficult to appreciate if this assessment was appropriately carried out and controlled for.
We apologise for this omission; the details have been included in the revised version of the manuscript.
Details and characterisation of the iPSC and ESC lines used in this study are overall lacking. The lines used are merely listed in the methods, with no references included for published lines, how the lines were obtained, what passage they were used at, their karyotype status, etc. For details of basic characterisation, the authors should refer to the ISSCR Standards for the use of human stem cells in research. In particular, the authors should consider whether any of the changes they see may be attributed to copy number variants in different lines.
We agree with the reviewer on this and refer to the reply above concerning this issue.
The expression data for markers of undifferentiated state in Figure 1a would ideally be shown by immunocytochemistry or flow cytometry as it is impossible to tell whether cultures are heterogeneous for marker expression.
We agree with the reviewer on this. FACS is indeed much more quantitative and a better method to study heterogeneity. However, we did not have protocols to study these markers using FACS.
TEM analysis should ideally be quantified.
We agree with the reviewer that it would be nice to have a quantitative measure.
All figure legends should explicitly state what the graphs represent (e.g. average/mean, how many replicates (biological or technical), and which lines). Some of this information is included in the Methods (e.g. glutamine uptake), but not for all of the data (e.g. TEM).
We agree with the reviewer. This has been corrected in the revised version of the manuscript, with additional details included.
Validation experiments were performed typically on one or two cell lines, but the lines used were not consistent (e.g. wibj_2 versus H1 for respirometry and wibj_2, oaqd_3 versus SA121 and SA181 for glutamine uptake). Can the authors explain how the lines were chosen?
The validation experiments were performed at different time points, and the selection of lines reflected the availability of hiPSC and hESC lines within our stem cell facility at a given point in time.
We chose to use a range of different lines for comparison, rather than always comparing only one set of lines, to try to avoid a possible bias in our conclusions and thus to make the results more general.
The authors should acknowledge the need for further functional validation of the results related to immunosuppressive proteins.
We agree with the reviewer and have added a sentence in the discussion making this point explicitly.
Differences in H1 histone abundance were highlighted. Can the authors speculate as to the meaning of these differences?
Regarding H1 histones, our survey of the literature, as well as discussions with chromatin and histone experts both within our institute and externally, has not shed light on what the differences could imply. We therefore think that this is a striking and interesting result that merits further study, but we have not yet been able to formulate a clear hypothesis about its consequences.
(1) Howden, A. J. M. et al. Quantitative analysis of T cell proteomes and environmental sensors during T cell differentiation. Nat Immunol, doi:10.1038/s41590-019-0495-x (2019).
(2) Marchingo, J. M., Sinclair, L. V., Howden, A. J. & Cantrell, D. A. Quantitative analysis of how Myc controls T cell proteomes and metabolic pathways during T cell activation. Elife 9, doi:10.7554/eLife.53725 (2020).
(3) Damasio, M. P. et al. Extracellular signal-regulated kinase (ERK) pathway control of CD8+ T cell differentiation. Biochem J 478, 79-98, doi:10.1042/BCJ20200661 (2021).
(4) Salerno, F. et al. An integrated proteome and transcriptome of B cell maturation defines poised activation states of transitional and mature B cells. Nat Commun 14, 5116, doi:10.1038/s41467-023-40621-2 (2023).
(5) Antico, O., Nirujogi, R. S. & Muqit, M. M. K. Whole proteome copy number dataset in primary mouse cortical neurons. Data Brief 49, 109336, doi:10.1016/j.dib.2023.109336 (2023).
(6) Edwards, W. et al. Quantitative proteomic profiling identifies global protein network dynamics in murine embryonic heart development. Dev Cell 58, 1087-1105 e1084, doi:10.1016/j.devcel.2023.04.011 (2023).
(7) Barton, P. R. et al. Super-killer CTLs are generated by single gene deletion of Bach2. Eur J Immunol 52, 1776-1788, doi:10.1002/eji.202249797 (2022).
(8) Phair, I. R., Sumoreeah, M. C., Scott, N., Spinelli, L. & Arthur, J. S. C. IL-33 induces granzyme C expression in murine mast cells via an MSK1/2-CREB-dependent pathway. Biosci Rep 42, doi:10.1042/BSR20221165 (2022).
(9) Niu, L. et al. Dynamic human liver proteome atlas reveals functional insights into disease pathways. Mol Syst Biol 18, e10947, doi:10.15252/msb.202210947 (2022).
(10) Murugesan, G., Davidson, L., Jannetti, L., Crocker, P. R. & Weigle, B. Quantitative Proteomics of Polarised Macrophages Derived from Induced Pluripotent Stem Cells. Biomedicines 10, doi:10.3390/biomedicines10020239 (2022).
(11) Ryan, D. G. et al. Nrf2 activation reprograms macrophage intermediary metabolism and suppresses the type I interferon response. iScience 25, 103827, doi:10.1016/j.isci.2022.103827 (2022).
(12) Nicolas, P. et al. Systems-level conservation of the proximal TCR signaling network of mice and humans. J Exp Med 219, doi:10.1084/jem.20211295 (2022).
(13) Brenes, A. J. et al. Erosion of human X chromosome inactivation causes major remodeling of the iPSC proteome. Cell Rep 35, 109032, doi:10.1016/j.celrep.2021.109032 (2021).
(14) Wisniewski, J. R., Hein, M. Y., Cox, J. & Mann, M. A "proteomic ruler" for protein copy number and concentration estimation without spike-in standards. Mol Cell Proteomics 13, 3497-3506, doi:10.1074/mcp.M113.037309 (2014).
(15) Kilpinen, H. et al. Common genetic variation drives molecular heterogeneity in human iPSCs. Nature 546, 370-375, doi:10.1038/nature22403 (2017).
(16) Phanstiel, D. H. et al. Proteomic and phosphoproteomic comparison of human ES and iPS cells. Nat Methods 8, 821-827, doi:10.1038/nmeth.1699 (2011).
(17) Munoz, J. et al. The quantitative proteomes of human-induced pluripotent stem cells and embryonic stem cells. Mol Syst Biol 7, 550, doi:10.1038/msb.2011.84 (2011).
-
eLife Assessment
This study reports differences in proteomic profiles of embryonic versus induced pluripotent stem cells. This important finding cautions against the interchangeable use of both types of cells in biomedical research, although the mechanisms responsible for these differences remain unknown. The proteomic evidence is convincing, even though there is limited validation with other methods.
-
Reviewer #1 (Public review):
Summary:
The authors compared four hiPSC lines and four hESC lines at the proteome level to determine their differences. Semiquantitative calculations of protein copy number revealed increased protein content in iPSCs. In particular, the results suggest that mitochondria- and cytoplasm-associated proteins in iPSCs reflect to some extent the state of the original differentiated cells. The revision contains responses to almost all comments and adds text mainly to the discussion. No additional experiments were performed in the revision, but I believe that future validation using methods other than proteomics would provide more support for the results.
Pros:
Mitochondrial function was verified by high-resolution respirometry, indicating increased ATP-producing capacity of the phosphorylation system in iPSCs.
Weaknesses:
The proteome differences reported in this study may simply reflect differences between the particular clones examined, and the proteome data should be verified using additional methods in the future.
-
Reviewer #2 (Public review):
Summary:
Pluripotent stem cells are powerful tools for understanding development, differentiation, and disease modeling. The capacity of stem cells to differentiate into various cell types holds great promise for therapeutic applications. However, ethical concerns restrict the use of human embryonic stem cells (hESCs). Consequently, induced human pluripotent stem cells (ihPSCs) offer an attractive alternative for modeling rare diseases, drug screening, and regenerative medicine. A comprehensive understanding of ihPSCs is crucial to establish their similarities and differences compared to hESCs. This work demonstrates systematic differences in the reprogramming of nuclear and non-nuclear proteomes in ihPSCs.
Strengths:
The authors employed quantitative mass spectrometry to compare protein expression differences between independently derived ihPSC and hESC cell lines. Qualitatively, protein expression profiles in ihPSC and hESC were found to be very similar. However, when comparing protein concentration at a cellular level, it became evident that ihPSCs express higher levels of proteins in the cytoplasm, mitochondria, and plasma membrane, while the expression of nuclear proteins is similar between ihPSCs and hESCs. A higher expression of proteins in ihPSCs was verified by an independent approach, and flow cytometry confirmed that ihPSCs had larger cell size than hESCs. The differences in protein expression were reflected in functional distinctions. For instance, the higher expression of mitochondrial metabolic enzymes, glutamine transporters, and lipid biosynthesis enzymes in ihPSCs was associated with enhanced mitochondrial potential, increased ability to uptake glutamine, and increased ability to form lipid droplets.
Weaknesses:
While this finding is intriguing and interesting, the study falls short of explaining the mechanistic reasons for the observed quantitative proteome differences. It remains unclear whether the increased expression of proteins in ihPSCs is due to enhanced transcription of the genes encoding this group of proteins or due to other reasons, for example, differences in mRNA translation efficiency. Another unresolved question pertains to how the cell type origin influences ihPSC proteomes. For instance, whether ihPSCs derived from fibroblasts, lymphocytes, and other cell types all exhibit differences in their cell size and increased expression of cytoplasmic and mitochondrial proteins. Analyzing ihPSCs derived from different cell types and by different investigators would be necessary to address these questions.
-
Reviewer #3 (Public review):
This study provides a useful insight into the proteomic analysis of several human induced pluripotent (hiPSC) and human embryonic stem cell (hESC) lines. Although the study is largely descriptive with limited validation of the differences found in the proteomic screen, the findings provide a solid platform for further mechanistic discovery.
-
-
www.biorxiv.org www.biorxiv.org
-
eLife Assessment
This important study advances our understanding of the temporal dynamics and cortical mechanisms of eye movements and the cognitive process of attention. The evidence supporting the conclusions is convincing and based on measuring the time course of the eye movement-attention interaction in a novel, carefully-controlled experimental task. This study will be of broad interest to psychologists and neuroscientists interested in the dynamics of cognitive processes.
-
-
www.biorxiv.org www.biorxiv.org
-
eLife Assessment
The main idea tested in this work is that host galectin-9 inhibits Mycobacterium tuberculosis (Mtb) growth by recognizing the Mtb cell wall component arabinogalactan (AG) and, as a result, disrupting mycobacterial cell wall structure. Moreover, a similar effect is achieved by anti-AG antibodies. While the hypothesis is intriguing and the work has the potential to make a valuable contribution to Mtb therapy, the evidence presented is incomplete and does not explain several critical points including the dose-independent effect of galectin-9 on Mtb growth and how anti-AG antibodies and galectin-9 access the AG layer of intact Mtb.
-
Reviewer #1 (Public review):
The molecular interactions which determine infection (and disease) trajectory following human exposure to Mycobacterium tuberculosis (Mtb) are critical to understanding mycobacterial pathogenicity and tuberculosis (TB), a global public health threat which disproportionately impacts a number of high-burden countries and, owing to the emergence of multidrug-resistant Mtb strains, is a major contributor to antimicrobial resistance (AMR). In this submission, Qin and colleagues extend their own previous work which identified a potential role for host galectin-9 in recognizing the major Mtb cell wall component, arabinogalactan (AG). First, the authors present data indicating that galectin-9 inhibits mycobacterial growth during in vitro culture in liquid and on solid media, and that the inhibition depends on carbohydrate recognition by galectin-9. Next, the authors identify anti-AG antibodies in sera of TB patients and use this observation to inform isolation of monoclonal anti-AG antibodies (mAbs) via an in vitro screen. Finally, they apply the identified anti-AG mAbs to inhibit Mtb growth in vitro via a mechanism which proteomic and microscopic analyses suggest is dependent on disruption of cell wall structure. In summary, the dual observation of (i) the apparent role of naturally arising host anti-AG antibodies to control infection and (ii) the potential utility of anti-AG monoclonal antibodies as novel anti-Mtb therapeutics is compelling; however, as noted in the comments below, the evidence presented to support these insights is not adequate and the authors should address the following:
(1) The experiment which utilizes lactose or glucose supplementation to infer the importance of carbohydrate recognition by galectin-9 cannot be interpreted unequivocally owing to the growth-enhancing effect of lactose supplementation on Mtb during liquid culture in vitro.
(2) Similar to the comment above, the apparent dose-independent effect of galectin-9 on Mtb growth in vitro is difficult to reconcile with the interpretation that galectin is functioning as claimed.
(3) The claimed differences in galectin-9 concentration in sera from tuberculin skin test (TST)-negative or TST-positive non-TB cases versus active TB patients are not immediately apparent from the data presented.
(4) Neither fluorescence microscopy nor electron microscopy analyses are supported by high-quality, interpretable images which, in the absence of supporting quantitative data, renders any claims of anti-AG mAb specificity (fluorescence microscopy) or putative mAb-mediated cell wall swelling (electron microscopy) highly speculative.
(5) Finally, the absence of any discussion of how anti-AG antibodies (similarly, galectin-9) gain access to the AG layer in the outer membrane of intact Mtb bacilli (which may additionally possess an extracellular capsule/coat) is a critical omission - situating these results in the context of current knowledge about Mtb cellular structure (especially the mycobacterial outer membrane) is essential for plausibility of the inferred galectin-9 and anti-AG mAb activities.
-
Reviewer #2 (Public review):
Summary:
In this manuscript, the authors work to extend their previous observation, from their 2021 EMBO Reports manuscript, that galectin-9 interacts with arabinogalactan of Mtb. Here they provide evidence that the CARD2 domain of galectin-9 can inhibit the growth of Mtb in culture. In addition, antibodies that bind AG also appear to inhibit Mtb growth in culture. These data indicate that, independent of the common cell-associated responses to galectin-9 and antibodies, interaction of these proteins with the AG of mycobacteria may have consequences for bacterial growth.
Strengths:
The authors provided several lines of evidence that, in culture media, the introduction of galectin-9 proteins and antibodies inhibits the growth of Mtb.
Weaknesses:
The methodology for generating and screening the anti-AG antibodies lacks pertinent details for recapitulating and interpreting the results.
The figure legends and methods associated with the microscopy assays lack sufficient details to appropriately interpret the experiments conducted.
The galectin-9 measured in the sera of TB patients does not approach the concentrations required for Mtb growth restriction in the in vitro assays performed by the authors. It remains difficult to envision how greater levels of galectin-9 release might contribute to Mtb control in severe forms of TB, since higher levels of serum Gal9 have been observed in other human studies and correlate with poorly controlled infection. The authors over-interpret the role of Gal9 in bacterial control during disease/infection without any evidence of impact on in vivo (animal model) control.
-
Author response:
The following is the authors’ response to the original reviews.
Reviewer #1 (Public Review):
Question 1: The experiment that utilizes lactose or glucose supplementation to infer the importance of carbohydrate recognition by galectin-9 cannot be interpreted unequivocally owing to the growth-enhancing effect of lactose supplementation on Mtb during liquid culture in vitro.
Thank you for this very constructive comment. We repeated the experiments, lowering the concentration of lactose or AG from 10 μg/mL to 1 μg/mL. We found that low concentrations of lactose or AG had a negligible effect on Mtb growth on their own, yet they still reversed the inhibitory effect of galectin-9 on mycobacterial growth (revised Fig. 2A, C). Therefore, we consider that supplementation with lactose or AG reverses galectin-9-mediated inhibition of Mtb growth largely through carbohydrate recognition rather than through a growth-enhancing effect.
Question 2: Similar to the comment above, the apparent dose-independent effect of galectin-9 on Mtb growth in vitro is difficult to reconcile with the interpretation that galectin is functioning as claimed.
We thank the reviewer for the correction. Indeed, as the reviewer pointed out, galectin-9 inhibits Mtb growth in a dose-independent manner. We have corrected the claim in the revised manuscript (line 114).
Question 3: The claimed differences in galectin-9 concentration in sera from tuberculin skin test (TST)-negative or TST-positive non-TB cases versus active TB patients are not immediately apparent from the data presented.
We appreciate your concern. The previous samples came from a cohort established at the Max Planck Institute for Infection Biology. We have now measured galectin-9 in sera from another independent cohort of active TB patients and healthy donors in China, and found a higher abundance of galectin-9 in serum from TB patients than from healthy donors (revised Fig. 1E).
Question 4: Neither fluorescence microscopy nor electron microscopy analyses are supported by high-quality, interpretable images which, in the absence of supporting quantitative data, renders any claims of anti-AG mAb specificity (fluorescence microscopy) or putative mAb-mediated cell wall swelling (electron microscopy) highly speculative.
We appreciate your concern. We have improved the procedure of the immunofluorescence assay and obtained high-quality, interpretable images with quantitative data (revised Fig. 4F). As for the electron microscopy analyses, we have added clearer labels indicating the cell wall in the revised manuscript (revised Fig. 7C).
Question 5: Finally, the absence of any discussion of how anti-AG antibodies (similarly, galectin-9) gain access to the AG layer in the outer membrane of intact Mtb bacilli (which may additionally possess an extracellular capsule/coat) is a critical omission - situating these results in the context of current knowledge about Mtb cellular structure (especially the mycobacterial outer membrane) is essential for plausibility of the inferred galectin-9 and anti-AG mAb activities.
Indeed, AG is hidden by mycolic acids in the outer layer of the Mtb cell wall. As discussed in the Discussion section of the previous manuscript (line 285), we speculate that during Mtb replication cell wall synthesis is active and AG becomes exposed, thereby facilitating its binding by galectin-9 or anti-AG antibody and leading to Mtb growth arrest. It is therefore quite possible that galectin-9 and anti-AG antibodies preferentially target replicating Mtb.
Reviewer #2 (Public Review):
Question 1: In light of other observations that cleaved galectin-9 levels in the plasma are a biomarker for severe infection (Padilla A et al., Biomolecules 2021, and Iwasaki-Hozumi H et al., Biomolecules 2021), it is difficult to reconcile the authors' interpretation that the elevated gal-9 in active TB patients (Figure 1E) contributes to the maintenance of latent infection in humans. The authors should consider incorporating these observations in the interpretation of their own results.
Thank you for these very insightful comments. We observed elevated levels of galectin-9 in the serum of active TB patients, consistent with reports indicating that cleaved galectin-9 levels in serum serve as a biomarker for severe infection (Iwasaki-Hozumi et al., 2021; Padilla et al., 2020). We consider that the elevated galectin-9 in the serum of active TB patients may be an indicator of the host immune response to Mtb infection; however, the magnitude of this elevation is not sufficient to control Mtb infection and maintain latent infection. This is highly similar to other protective immune factors, such as interferon gamma, which is also elevated in active TB (El-Masry et al., 2007; Hasan et al., 2009). We have included this discussion in the revised manuscript (line 298).
Question 2: The anti-AG titers were measured only in individuals with active TB (Figure 3C), generally thought to be a less protective immunological state. The speculation that individuals with anti-AG titers have some protection is not founded. Further only 2 mAbs were tested to demonstrate restriction of Mtb in culture. It is possible that clones of different affinities for AG present within a patient's polyclonal AG-antibody responses may or may not display a direct growth restriction pressure on Mtb in culture. The authors should soften the claims about the presence of AG-titers in TB patients being indicative of protection.
We appreciate your concern. As per your suggestion, we have softened the claim to: “We speculate that during Mtb infection, anti-AG IgG antibodies are induced, which potentially contribute to protection against TB by directly inhibiting Mtb replication albeit seemingly in vain.”
References
El-Masry, S., Lotfy, M., Nasif, W.A., El-Kady, I.M., and Al-Badrawy, M. (2007). Elevated serum level of interleukin (IL)-18, interferon (IFN)-gamma and soluble Fas in patients with pulmonary complications in tuberculosis. Acta microbiologica et immunologica Hungarica 54, 65-77.
Hasan, Z., Jamil, B., Khan, J., Ali, R., Khan, M.A., Nasir, N., Yusuf, M.S., Jamil, S., Irfan, M., and Hussain, R. (2009). Relationship between circulating levels of IFN-gamma, IL-10, CXCL9 and CCL2 in pulmonary and extrapulmonary tuberculosis is dependent on disease severity. Scandinavian journal of immunology 69, 259-267.
Iwasaki-Hozumi, H., Chagan-Yasutan, H., Ashino, Y., and Hattori, T. (2021). Blood Levels of Galectin-9, an Immuno-Regulating Molecule, Reflect the Severity for the Acute and Chronic Infectious Diseases. Biomolecules 11.
Padilla, S.T., Niki, T., Furushima, D., Bai, G., Chagan-Yasutan, H., Telan, E.F., Tactacan-Abrenica, R.J., Maeda, Y., Solante, R., and Hattori, T. (2020). Plasma Levels of a Cleaved Form of Galectin-9 Are the Most Sensitive Biomarkers of Acquired Immune Deficiency Syndrome and Tuberculosis Coinfection. Biomolecules 10.
-
-
www.biorxiv.org www.biorxiv.org
-
eLife Assessment
This important work uses in vivo foveal cone-resolved imaging and simultaneous microscopic photostimulation to investigate the relationship between ocular drift - eye movements long thought to be random - and visual acuity. The surprising result is that ocular drift is systematic - causing the object to move to the center of the cone mosaic over the course of each perceptual trial. The tools used to reach this conclusion are state-of-the-art and the evidence presented is convincing. This work advances our understanding of the visuomotor system and the interplay of anatomy, oculomotor behavior, and visual acuity.
-
Reviewer #1 (Public review):
Summary:
This paper investigates the relationship between ocular drift - eye movements long thought to be random - and visual acuity. This is a fundamental issue for how vision works. The work uses adaptive optics retinal imaging to monitor eye movements and where a target object is in the cone photoreceptor array. The surprising result is that ocular drift is systematic - causing the object to move to the center of the cone mosaic over the course of each perceptual trial. The tools used to reach this conclusion are state-of-the-art and the evidence presented is convincing.
Strengths
The central question of the paper is interesting, as far as I know, it has not been answered in past work, and the approaches employed in this work are appropriate and provide clear answers.
The central finding - that ocular drift is not a completely random process - is important and has a broad impact on how we think about the relationship between eye movements and visual perception.
The presentation is quite nice: the figures clearly illustrate key points and have a nice mix of primary and analyzed data, and the writing (with one important exception) is generally clear.
Weaknesses
The primary concern I had about the previous version of the manuscript was how the Nyquist limit was described. The changes the authors made have improved this substantially in the current version.
-
Reviewer #2 (Public review):
Summary:
In this work, Witten et al. assess visual acuity, cone density, and fixational behavior in the central foveal region in a large number of subjects.

This work elegantly presents a number of important findings, and I can see this becoming a landmark work in the field. First, it shows that acuity is determined by the cone mosaic, hence, subjects characterized by higher cone densities show higher acuity in diffraction limited settings. Second, it shows that humans can achieve higher visual resolution than what is dictated by cone sampling, suggesting that this is likely the result of fixational drift, which constantly moves the stimuli over the cone mosaic. Third, the study reports a correlation between the amplitude of fixational motion and acuity, namely, subjects with smaller drifts have higher acuities and higher cone density. Fourth, it is shown that humans tend to move the fixated object toward the region of higher cone density in the retina, lending further support to the idea that drift is not a random process, but is likely controlled. This is a beautiful and unique work that furthers our understanding of the visuomotor system and the interplay of anatomy, oculomotor behavior, and visual acuity.
Strengths:
The work is rigorously conducted: it uses state-of-the-art technology to record fixational eye movements while imaging the central fovea at high resolution, and examines exactly where the viewed stimulus falls on individuals' foveal cone mosaic with respect to different anatomical landmarks in this region. Figures are clear and nicely packaged. It is important to emphasize that this study is a real tour-de-force in which the authors collected a massive amount of data on 20 subjects. This is particularly remarkable considering how challenging it is to run psychophysics experiments using this sophisticated technology. Most of the studies using psychophysics with AO are, indeed, limited to a few subjects. Therefore, this work shows a unique set of data, filling a gap in the literature.
Weaknesses:
Data analysis has been improved after the first round of review. The revised version of the manuscript is solid, and there are no weaknesses that should be addressed. The authors added more statistical tests and analyses, reported comparable effects even when different metrics are used (e.g., diffusion constant), and removed the confusing text on myopia. I think this work represents a significant scientific contribution to vision science.
-
Reviewer #3 (Public review):
Summary:
The manuscript by Witten et al. aims to investigate the link between acuity thresholds (and hyperacuity) and retinal sampling. Specifically, using in vivo foveal cone-resolved imaging and simultaneous microscopic photostimulation, the researchers examined visual acuity thresholds in 16 volunteers and correlated them with each individual's retinal sampling capacity and the characteristics of ocular drift.
First, the authors found that although visual acuity was highly correlated with the individual spatial arrangement of cones, visual resolution exceeded the Nyquist sampling limit for all participants.
Thus, the researchers hypothesized that this increase in acuity, which could not be explained in terms of spatial encoding mechanisms, might result from exploiting the spatiotemporal characteristics of the visual input associated with the dynamics of the fixational eye movements (and ocular drift in particular).
The authors reported a correlation between acuity threshold and drift amplitude, suggesting that the visual system benefits from transforming spatial input into a spatiotemporal flow. Finally, they showed that drift, contrary to the traditional view of it as random involuntary movement, appears to exhibit directionality: drift tends to move stimuli to higher cone density areas, therefore enhancing visual resolution.
I find the work of broad interest, its methods are clear, and the results solid.
-
Author response:
The following is the authors’ response to the original reviews.
Reviewer #1 (Public Review):
Summary:
This paper investigates the relationship between ocular drift - eye movements long thought to be random - and visual acuity. This is a fundamental issue for how vision works. The work uses adaptive optics retinal imaging to monitor eye movements and where a target object is in the cone photoreceptor array. The surprising result is that ocular drift is systematic - causing the object to move to the center of the cone mosaic over the course of each perceptual trial. The tools used to reach this conclusion are state-of-the-art and the evidence presented is convincing.
Strengths
P1.1. The central question of the paper is interesting, as far as I know, it has not been answered in past work, and the approaches employed in this work are appropriate and provide clear answers.
P1.2. The central finding - that ocular drift is not a completely random process - is important and has a broad impact on how we think about the relationship between eye movements and visual perception.
P1.3. The presentation is quite nice: the figures clearly illustrate key points and have a nice mix of primary and analyzed data, and the writing (with one important exception) is generally clear.
Thank you for your positive feedback.
Weaknesses
P1.4. The handling of the Nyquist limit is confusing throughout the paper and could be improved. It is not clear (at least to me) how the Nyquist limit applies to the specific task considered. I think of the Nyquist limit as saying that spatial frequencies above a certain cutoff set by the cone spacing are being aliased and cannot be disambiguated from the structure at a lower spatial frequency. In other words, there is a limit to the spatial frequency content that can be uniquely represented by discrete cone sampling locations. Acuity beyond that limit is certainly possible with a stationary image - e.g. a line will set up a distribution of responses in the cones that it covers, and without noise, an arbitrarily small displacement of the line would change the distribution of cone responses in a way that could be resolved. This is an important point because it relates to whether some kind of active sampling or movement of the detectors is needed to explain the spatial resolution results in the paper. This issue comes up in the introduction, results, and discussion. It arises in particular in the two Discussion paragraphs starting on line 343.
We thank you for pointing out a possible confusion for readers. Overall, we contrast our results to the static Nyquist limit because it is generally regarded as the upper limit of resolution acuity. We updated our text in a few places, especially the Discussion, and added a reference to make our use of the Nyquist limit clearer.
We agree with the reviewer about how the Nyquist limit is interpreted within the context of visual structure. If visual structure is under-sampled, it is not lost, but creates new, aliased visual structure at lower spatial frequencies. For regular patterns like gratings, interference patterns akin to Moiré patterns may emerge, which have been shown to occur in the human eye and whose form depends on the arrangement and regularity of the photoreceptor mosaic (Williams, 1985). We note, however, that successful resolution of the lower-frequency pattern does not necessarily carry the same structural information, specifically orientation, and the aliased structure might indeed mask the original stimulus. Please compare Figure 1f, where we show individual static snapshots of such aliased patterns, especially visible when the optotypes are small (towards the lower right of the figure). We note that theoretical work predicts that, with prior knowledge about the stimulus, it might be possible to de-alias even such static images (Ruderman & Bialek, 1992). We added this to our manuscript.
We think the reviewer’s subsequent point about the resolution of a line's position is, however, only partially connected to the first. In our manuscript, we note in the Introduction that resolution of the relative position of visual objects is a so-called hyperacuity phenomenon. The fact that it occurs in humans and other animals demonstrates that visual brains have evolved neuronal mechanisms to determine relative stimulus position with sub-Nyquist resolution. The exact mechanism is, however, not fully clear. One solution is that relative cone signal intensities could be harnessed, similar to what is employed technically, e.g., in a quadrant-cell detector. Its positional precision is much higher than the individual cell’s size (or Nyquist limit), being determined predominantly by the detector’s sensitivity and, to a lesser degree, its size. On the other hand, such a detector, while hyperacute for object location, would not achieve the same resolution in, for instance, letter-E orientation discrimination.
Note that in all the above cases, a static image-sensor relationship is assumed. In our paper we were aiming to convey, as others have before, that a moving stimulus may give rise to sub-Nyquist structural resolution, beyond what is already known for positional acuity and hence classical hyperacuity.
Based on the data shown in this manuscript and other experimental data currently being collected in the lab, it seems to us that eye movements are indeed crucial for achieving sub-Nyquist resolution. For example, ultra-short presentation durations, which allow virtually no retinal slip, push thresholds towards and above the Nyquist limit. Furthermore, with AOSLO stimulation it is possible to stabilize a stimulus on the retina, which would be a useful tool for studying this hypothesis. Our current level of stabilization is, however, not accurate enough to completely eliminate retinal image motion in the foveola, where cells are smallest and transients could occur. From our observations and from other studies that examined resolution thresholds at more peripheral retinal locations, we would predict that foveolar resolution of a perfectly stabilized stimulus would indeed be limited by the Nyquist limit of the receptor mosaic.
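For reference, the relation we have in mind when speaking of the static Nyquist limit of a hexagonally packed cone mosaic is the standard sampling-theory result (not a formula from the manuscript itself):

$$ N_c = \frac{1}{\sqrt{3}\, s_c}, \qquad s_c = \sqrt{\frac{2}{\sqrt{3}\, D}}, $$

where \(s_c\) is the center-to-center cone spacing (in degrees of visual angle), \(D\) is the local cone density (cones/deg\(^2\)), and \(N_c\) is the Nyquist frequency in cycles/deg. Resolving structure finer than \(N_c\) from a single static retinal image would require de-aliasing with prior knowledge, whereas retinal image motion provides additional, temporally offset samples of the same stimulus.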
P1.5. One question that came up as I read the paper was whether the eye movement parameters depend on the size of the E. In other words, to what extent is ocular drift tuned to specific behavioral tasks?
This is an interesting question. Unfortunately, the experimental data collected for the current manuscript do not contain enough dispersion in target size to give a definitive answer; a larger range of stimulus sizes and, especially, a similar number of trials per size would be required. Nonetheless, when individual trials were re-grouped into percentiles of all stimulus sizes (scaled for each eye individually), we found that drift length and directionality were not significantly different between any percentile groups of stimulus sizes (Wilcoxon signed-rank test, p > 0.12; see also Author response image 1). Our experimental trials started with a stimulus demanding a visual acuity of 20/16 (logMAR = -0.1); therefore, all presented stimulus sizes were rather close to threshold. The high visual demand in this AO resolution task might bring the oculomotor system to a limit where ocular drift length cannot be decreased further. However, given the limited range of stimulus sizes, further investigation would be needed. Given this, and because this topic is part of ongoing research in our lab in which more complex dynamics of fixational eye movement patterns are also considered, we refrain from showing this analysis in the current manuscript.
Author response image 1.
Drift length does not depend on stimulus size close to threshold. All experimental trials were sorted by stimulus size and then grouped into percentiles for each participant (left). Additionally, the 10% of trials with stimulus sizes just above or below threshold are shown for comparison (right). For each group, median drift lengths (z-scored) are shown as box-and-whisker plots. Drift length was not significantly different across groups.
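As an aside for readers, the grouping and paired comparison described above could be carried out along the following lines. This is a minimal sketch under our own assumptions about data layout; the function and variable names are illustrative and this is not the analysis code used for the figure.

```python
import numpy as np
from scipy.stats import wilcoxon

def drift_vs_stimulus_size(drift_lengths, stimulus_sizes, n_groups=10):
    """Compare z-scored drift length between stimulus-size percentile groups.

    drift_lengths  : list of 1-D arrays, drift length per trial for each participant
    stimulus_sizes : list of 1-D arrays, stimulus size per trial (matching shapes)
    Returns the Wilcoxon signed-rank p-value comparing the smallest-size and
    largest-size percentile groups, paired across participants.
    """
    medians = np.zeros((len(drift_lengths), n_groups))
    for i, (drift, size) in enumerate(zip(drift_lengths, stimulus_sizes)):
        z = (drift - drift.mean()) / drift.std()              # z-score within participant
        ranks = np.argsort(np.argsort(size))                  # rank trials by stimulus size
        group = ranks * n_groups // len(size)                 # percentile group index 0..n_groups-1
        medians[i] = [np.median(z[group == g]) for g in range(n_groups)]
    # paired comparison across participants between extreme size groups
    return wilcoxon(medians[:, 0], medians[:, -1]).pvalue
```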
Reviewer #2 (Public Review):
Summary:
In this work, Witten et al. assess visual acuity, cone density, and fixational behavior in the central foveal region in a large number of subjects.
This work elegantly presents a number of important findings, and I can see this becoming a landmark work in the field. First, it shows that acuity is determined by the cone mosaic, hence, subjects characterized by higher cone densities show higher acuity in diffraction-limited settings. Second, it shows that humans can achieve higher visual resolution than what is dictated by cone sampling, suggesting that this is likely the result of fixational drift, which constantly moves the stimuli over the cone mosaic. Third, the study reports a correlation between the amplitude of fixational motion and acuity, namely, subjects with smaller drifts have higher acuities and higher cone density. Fourth, it is shown that humans tend to move the fixated object toward the region of higher cone density in the retina, lending further support to the idea that drift is not a random process, but is likely controlled. This is a beautiful and unique work that furthers our understanding of the visuomotor system and the interplay of anatomy, oculomotor behavior, and visual acuity.
Strengths:
P2.1. The work is rigorously conducted, it uses state-of-the-art technology to record fixational eye movements while imaging the central fovea at high resolution and examines exactly where the viewed stimulus falls on individuals' foveal cone mosaic with respect to different anatomical landmarks in this region. The figures are clear and nicely packaged. It is important to emphasize that this study is a real tour-de-force in which the authors collected a massive amount of data on 20 subjects. This is particularly remarkable considering how challenging it is to run psychophysics experiments using this sophisticated technology. Most of the studies using psychophysics with AO are, indeed, limited to a few subjects. Therefore, this work shows a unique set of data, filling a gap in the literature.
Thank you, we are very grateful for your positive feedback.
Weaknesses:
P2.2. No major weakness was noted, but data analysis could be further improved by examining drift instantaneous direction rather than start-point-end-point direction, and by adding a statistical quantification of the difference in direction tuning between the three anatomical landmarks considered.
Thank you for these two suggestions. We now show the development of directionality with time (after the first frame, i.e., 33 ms, as well as after 165 ms, 330 ms, and 462 ms), and have performed a Rayleigh test for non-uniformity of circular data. Please also see our response to comment R2.4.
Briefly, directional tuning was already visible at 33 ms after stimulus onset and continuously increased with longer analysis durations; directionality is thus less pronounced at shorter analysis windows. These results have been added to the text and figures (Figure 4 – figure supplement 1).
The statistical tests showed that circular sample directionality was not uniformly distributed at any of the three retinal locations. The circular average was between -10 ° and 10 ° in all cases, and the variance decreased with increasing time (from 48.5 ° to 34.3 ° for the CDC, 49.6 ° to 38.6 ° for the PRL, and 53.9 ° to 43.4 ° for the PCD location, between frames 2 and 15). As we have discussed in the paper, we would expect all three locations to come out as significant, given their vicinity to the CDC (which is systematic in the case of the PRL, and random in the case of the PCD, see also comment R2.2).
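For reference, a minimal, self-contained sketch of such a Rayleigh test (a textbook implementation with the standard p-value approximation; the drift directions here are synthetic von Mises samples expressed relative to the CDC direction, not our data):

```python
import numpy as np
from scipy.stats import circmean, circstd

def rayleigh_test(angles_rad):
    """Rayleigh test for circular uniformity; returns mean resultant length R and p-value."""
    n = len(angles_rad)
    C, S = np.cos(angles_rad).sum(), np.sin(angles_rad).sum()
    R = np.hypot(C, S) / n
    # Standard approximation of the p-value (Zar, Biostatistical Analysis)
    p = np.exp(np.sqrt(1 + 4 * n + 4 * (n**2 - (n * R) ** 2)) - (1 + 2 * n))
    return R, min(p, 1.0)

rng = np.random.default_rng(1)
# Simulated drift directions clustered around the CDC direction (0 rad) with a wide spread
angles = rng.vonmises(mu=0.0, kappa=1.5, size=300)

R, p = rayleigh_test(angles)
print(f"circular mean = {np.degrees(circmean(angles, high=np.pi, low=-np.pi)):.1f} deg, "
      f"circular std = {np.degrees(circstd(angles, high=np.pi, low=-np.pi)):.1f} deg, "
      f"R = {R:.2f}, p = {p:.2g}")
```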
Reviewer #3 (Public Review):
Summary:
The manuscript by Witten et al., titled "Sub-cone visual resolution by active, adaptive sampling in the human foveola," aims to investigate the link between acuity thresholds (and hyperacuity) and retinal sampling. Specifically, using in vivo foveal cone-resolved imaging and simultaneous microscopic photostimulation, the researchers examined visual acuity thresholds in 16 volunteers and correlated them with each individual's retinal sampling capacity and the characteristics of ocular drift.
First, the authors found that although visual acuity was highly correlated with the individual spatial arrangement of cones, for all participants, visual resolution exceeded the Nyquist sampling limit - a well-known phenomenon in the literature called hyperacuity.
Thus, the researchers hypothesized that this increase in acuity, which could not be explained in terms of spatial encoding mechanisms, might result from exploiting the spatiotemporal characteristics of visual input, which is continuously modulated over time by eye movements even during so-called fixations (e.g., ocular drift).
The authors reported a correlation across subjects between acuity threshold and drift amplitude, suggesting that the visual system benefits from transforming spatial input into a spatiotemporal flow. Finally, they showed that drift, contrary to the traditional view of it as random involuntary movement, appears to exhibit directionality: drift tends to move stimuli toward higher cone density areas, therefore enhancing visual resolution.
Strengths:
P3.1. The work is of broad interest, the methods are clear, and the results are solid.
Thank you.
Weaknesses:
P3.2. Literature (1/2): The authors do not appear to be aware of an important paper published in 2023 by Lin et al. (https://doi.org/10.1016/j.cub.2023.03.026), which nicely demonstrates that (i) ocular drifts are under cognitive influence, and (ii) specific task knowledge influences the dominant orientation of these ocular drifts even in the absence of visual information. The results of this article are particularly relevant and should be discussed in light of the findings of the current experiment.
Thank you for pointing to this important work, which we were aware of; it simply slipped through during writing. It is now discussed in lines 390-393.
P3.3. Literature (2/2): The hypothesis that hyperacuity is attributable to ocular movements has been proposed by other authors and should be cited and discussed (e.g., https://doi.org/10.3389/fncom.2012.00089, https://doi.org/10.10
Thank you for pointing us towards these works, which we have now added to the Discussion section. We would like to stress, however, that we see a distinction between classical hyperacuity phenomena (Vernier, stereo, centering, etc.) as a form of positional acuity, and orientation discrimination.
P3.4. Drift Dynamic Characterization: The drift is primarily characterized as the "concatenated vector sum of all frame-wise motion vectors within the 500 ms stimulus duration.". To better compare with other studies investigating the link between drift dynamics and visual acuity (e.g., Clark et al., 2022), it would be interesting to analyze the drift-diffusion constant, which might be the parameter most capable of describing the dynamic characteristics of drift.
During our analysis, we computed the diffusion coefficient (D), and it showed qualitatively similar results to the drift length (see Author response image 2 below). We decided not to show these results, because we are convinced that D is indeed not the most capable parameter to describe the typical drift characteristics seen here. The diffusion coefficient is computed as the slope of the mean square displacement (MSD). In our view, there are two main issues with applying this metric to our data, one conceptual, one factual:
(1) Computation of a diffusion coefficient is based upon the assumption that the underlying movement is similar to a random walk process. From a historical perspective, where drift has been regarded as more random, this makes sense. We also agree that D can serve as a valuable metric, depending on the individual research question. In our data, however, we clearly show that drift is not random, and a metric quantifying randomness is thus ill-defined.
(2) We often observed out-and-back motion traces, i.e., traces in which the eye backtracks toward where it started. Such traces are just as long (and fast) as motion with a single outward direction, but D would be much smaller in this case, as the MSD first increases and then decreases. In reality, the same number of cones would have been traversed as with the straight outward movement and its larger D, albeit not unique cones. For our current analyses, the drift length captures this relationship better (see also the sketch after Author response image 2).
Author response image 2.
Diffusion coefficient (D) and the relation to visual acuity (see Figure 3 e-g for comparison to drift length). a, D was strongly correlated between fellow eyes. b, Cone density and D were not significantly correlated. c, The median D had a moderate correlation with visual acuity thresholds in dominant as well as non-dominant eyes. Dominant eyes are indicated by filled, nondominant eyes by open markers.
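To make point (2) above concrete, the sketch below (illustrative only; the frame rate, trace shapes, and linear MSD fit are simplifying assumptions, not our analysis pipeline) computes the drift (path) length, the start-to-end amplitude, and D for a straight versus an out-and-back trace of identical path length:

```python
import numpy as np

def drift_metrics(xy, dt):
    """xy: (n_samples, 2) eye position in arcmin; dt: sample interval in seconds."""
    steps = np.diff(xy, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()   # concatenated piecewise length
    amplitude = np.linalg.norm(xy[-1] - xy[0])          # start-to-end distance
    # MSD over time lags, and D from a linear fit MSD(t) ~ 4 * D * t (2-D diffusion)
    lags = np.arange(1, len(xy) // 2)
    msd = np.array([np.mean(np.sum((xy[lag:] - xy[:-lag]) ** 2, axis=1)) for lag in lags])
    D = np.polyfit(lags * dt, msd, 1)[0] / 4
    return path_length, amplitude, D

dt = 1 / 30                                             # ~30 Hz frame rate, assumed
t = np.arange(15) * dt
straight = np.column_stack([2.0 * t, 0.5 * t])                       # outward drift
out_and_back = np.column_stack([2.0 * np.minimum(t, t[-1] - t),
                                0.5 * np.minimum(t, t[-1] - t)])     # backtracking drift

for name, trace in [("straight", straight), ("out-and-back", out_and_back)]:
    L, A, D = drift_metrics(trace, dt)
    print(f"{name:12s} length = {L:.2f} arcmin, amplitude = {A:.2f} arcmin, "
          f"D = {D:.3f} arcmin^2/s")
```

Both traces traverse the same path length, but the out-and-back trace ends near its starting point and yields a much smaller D, which is why we prefer the drift length here.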
We would like to put forward that, in general, better metrics are needed, especially with respect to the visual signals arising from the moving eye. We are actively looking into this in follow-up work, and we hope that the current manuscript might also spark others to come up with new ways of characterizing the fine movements of the eye during fixation.
P3.5. Possible inconsistencies: Binocular differences are not expected based on the hypothesis; the authors may speculate a bit more about this. Additionally, the fact that hyperacuity does not occur with longer infrared wavelengths but the drift dynamics do not vary between the two conditions is interesting and should be discussed more thoroughly.
Binocularity: the differences in performance between fellow eyes are rather subtle, and we do not have a firm grip on differences between the two eyes other than the cone mosaic and fixational motor behavior. We would rather not speculate beyond what we already do, namely that some factor related to the development of ocular dominance is at play. What we do show with our data is that cone density and drift patterns seem to have no part in it.
Effect of wavelength: even with the longer 840 nm wavelength, most eyes resolve below the Nyquist limit, with a general increase in thresholds (getting worse) compared to 788 nm. As we wrote in the manuscript, we assume that the increased image blur and reduced cone contrast introduced by the longer wavelength are key to why there is an overall reduction in acuity. No changes were made to the manuscript. As a more general remark, we would not consider the sub-Nyquist performance seen in our data to be a hyperacuity, although technically it is. The reason is that hyperacuity is usually associated with stimuli that require resolving positional shifts, not orientation. There is a log unit of difference between thresholds in these tasks.
P3.6. As a Suggestion: can the authors predict the accuracy of individual participants in single trials just by looking at the drift dynamics?
That’s a very interesting point that we are indeed currently looking at in another project. As a comment, we can add that by purely looking at the drift dynamics in the current data, we could not predict the accuracy (percent correct) of the participant. When comparing drift lengths or diffusion coefficients between trials with correct and incorrect responses, we do not observe a significant difference. Also, when adding an anatomical correlate and comparing trials in which sampling density increases or decreases, there is no significant trend. We think that it is a more complex interplay between all the influencing factors, which can perhaps be captured by a model considering drift dynamics, photoreceptor geometry, and stimulus characteristics.
No changes were made to the manuscript.
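As an illustration of the comparison described above, a minimal sketch (simulated drift lengths drawn from one common distribution to mirror the null result; neither our data nor our analysis code):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
# Simulated drift lengths (arcmin) from the same distribution for both trial outcomes
drift_correct = rng.gamma(4.0, 1.5, 250)
drift_incorrect = rng.gamma(4.0, 1.5, 120)

U, p = mannwhitneyu(drift_correct, drift_incorrect)
print(f"correct vs. incorrect trials, drift length: U = {U:.0f}, p = {p:.3f}")
```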
Recommendations for the authors:
Reviewing Editor (Recommendations For The Authors):
As you will see, the reviewers were quite enthusiastic about your work, but have a few issues for your consideration. We hope that this is helpful. We'll consider any revisions in composing a final eLife assessment.
Reviewer #1 (Recommendations For The Authors):
R1.1: Discussion of myopia. Myopia takes a fair bit of space in the Discussion, but the paper does not include any subjects that are sufficiently myopic to test the predictions. I would suggest reducing the amount of space devoted to this issue, and instead making the prediction that myopia may help with resolution quickly. The introduction (lines 54-56) left me expecting a test of this hypothesis, and I think similarly that issue could be left out of the introduction.
We have removed this part from the Introduction and shortened the Discussion.
R1.2: Line 118: define CDC here.
Thank you for pointing this out, it is now defined at this location.
R1.3: Line 159-162: suggest breaking this sentence into two. This sentence also serves as a transition to the next section, but the wording suggests it is a result that is shown in the prior section. Suggest rewording to make the transition part clear. Maybe something like "Hence the spatial arrangement of cones only partially ... . Next we show that ocular motion and the associated ... are another important factor."
Text was changed as suggested.
R1.4.: Figure 3: The retina images are a bit hard to see - suggest making them larger to take an entire row. As a reader, I also was wondering about the temporal progression of the drift trajectories and the relation to the CDC. Since you get to that in Figure 4, you could clarify in the text that you are starting by analyzing distance traveled and will return to the issue of directed trajectories.
Visibility was probably an issue during the initial submission and review process, where images were produced at lower resolution. The original figures are of sufficient resolution to fully appreciate the underlying cone mosaic, and readers will be able to zoom in on them in the online publication.
We added a mention of the order of analysis in the Results section (LL 163-165).
R1.5: Line 176: define "sum of piecewise drift amplitude" (e.g. refer to Figure where it is defined).
We now refer to this metric as the drift length (as rightfully pointed out by reviewer #2), and added its definition at this location.
R1.6: Lines 205-208: suggest clarifying this sentence is a transition to the next section. As for the earlier sentence mentioned above, this sounds like a result rather than a transition to an issue you will consider next.
This sentence was changed to make the transition clearer.
R1.7: Line 225: suggest starting a new paragraph here.
Done as suggested.
Reviewer #2 (Recommendations For The Authors):
I don't have any major concerns, mostly suggestions and minor comments.
R2.1: (1) The authors use piecewise amplitude as a measure of the amount of retinal motion introduced by ocular drift. However, to me, this sounds like what is normally referred to as the path length of a trace rather than its amplitude. I would suggest using the term length rather than amplitude, as amplitude is normally considered the distance between the starting and the ending point of a trace.
This was changed as suggested throughout the manuscript.
R2.2: (2) It would be useful to elaborate more on the difference between CDC and PCD, I know the authors do this in other publications, but to the naïve reader, it comes a bit as a surprise that drift directionality is toward the CDC but less so toward the PCD. Is the difference between these metrics simply related to the fact that defining the PCD location is more susceptible to errors, especially if image quality is not optimal? If indeed the PCD is the point of peak cone density, assuming no errors or variability in the estimation of this point, shouldn't we expect drift moving stimuli toward this point, as the CDC will be characterized by a slightly lower density? I.e., is the absence of a PCD directionality trend as strong as the trend seen for the CDC simply the result of variability and error in the estimate of the PCD or it is primarily due to the distribution of cone density not being symmetrical around the PCD?
Thank you for this comment. We already refer in the Methods section to the respective papers in which this difference is analyzed in more detail, and briefly discuss it here.
To briefly answer the reviewer’s final question: the PCD location is too variable and ought to be avoided as a retinal landmark. While we believe there is value in reporting the PCD as a metric of maximum density, it has been shown recently (Reiniger et al., 2021; Warr et al., 2024; Wynne et al., 2022), and is visible in our own (partly unpublished) data, that its location will change when one or more of these factors change: cone density metric, window size or cone quantity selected, cone annotation quality, image quality (e.g., across days), individual grader, annotation software, and likely more. Each of these factors alone can change the PCD location quite drastically, all while, of course, the retina does not change. The CDC, on the other hand, given its low-pass filtering nature, is immune to these changes within a much wider range and will thus better reflect the anatomical and, as shown here, functional center of vision. However, there will always be individual eyes where the PCD location and the CDC are close, and researchers might thus be inclined to also use the PCD as a landmark. We strongly advise against this. In a way, the PCD location itself carries little meaning, while the quantity it reports, peak density, can be a valuable metric, as density does not vary that much (see, e.g., the data on CDC density and PCD density reported in this manuscript).
Below we append a direct comparison of PCD vs. CDC location stability when only one of the mentioned factors is changed. Sixteen retinas imaged on two different days were annotated and analyzed by the same grader with the same approach, and the differences in both locations are shown.
Author response image 3.
Reproducibility of CDC and PCD location in comparison. For 16 eyes, two retinal mosaics recorded at two different timepoints, at most 1 year apart, were compared. The retinal mosaics were carefully aligned. The CDC and PCD locations computed for the first timepoint were used as the spatial anchor (coordinate center); the locations plotted here as red circles (CDC) and gray diamonds (PCD) represent the deviations measured at the second timepoint for both metrics.
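As a toy illustration of this location instability (simplified, assumed definitions: the PCD as the argmax of a noisy density map, the CDC as the density-weighted centroid of the region above 80 % of the peak; see Reiniger et al., 2021 for the actual procedure and parameters):

```python
import numpy as np

rng = np.random.default_rng(3)
x = y = np.linspace(-0.25, 0.25, 101)                  # deg, illustrative foveal window
X, Y = np.meshgrid(x, y)
true_density = 200_000 * np.exp(-(X**2 + Y**2) / (2 * 0.08**2))  # cones/deg^2, smooth peak

for run in range(3):
    density = true_density + rng.normal(0, 4000, X.shape)        # measurement noise
    # PCD: location of the single highest value (jumps around within the flat peak)
    iy, ix = np.unravel_index(np.argmax(density), density.shape)
    # CDC: density-weighted centroid of the region above 80 % of the peak (stable)
    mask = density > 0.8 * density.max()
    w = density[mask]
    cdc_x, cdc_y = np.average(X[mask], weights=w), np.average(Y[mask], weights=w)
    print(f"run {run}: PCD = ({x[ix]:+.3f}, {y[iy]:+.3f}) deg,  "
          f"CDC = ({cdc_x:+.3f}, {cdc_y:+.3f}) deg")
```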
R2.3.: I don't see a statistical comparison between the drift angle tuning for CDC, PRL, and PCD. The distributions in Figure 4F look very similar and all with a relatively wide std. It would be useful to mark the mean of the distributions and report statistical tests. What are the data shown in this figure, single subjects, all subjects pooled together, average across subjects? Please specify in the caption.
We added a Rayleigh test to test each distribution for non-uniformity, and Kolmogorov-Smirnov tests to compare the distributions of directions toward the different landmarks. We added the missing specifications to the figure caption of Figure 4 – figure supplement 1.
R2.4: I would suggest also calculating drift direction based on the average instantaneous drift velocity, similarly to what is done with amplitude. From Figure 3B it is clear that some drifts are more curved than others. For curved drifts with small amplitudes the start-point- end-point (SE) direction is not very meaningful and it is not a good representation of the overall directionality of the segment. Some drifts also seem to be monotonic and then change direction (eg. the last three examples from participant 10). In this case, the SE direction is likely quite different from the average instantaneous direction. I suspect that if direction is calculated this way it may show the trend of drifting toward the CDC more clearly.
In response to this and a comment of reviewer #1, we added a calculation of the initial drift direction (and for increasing durations) and show it in Figure 4 – figure supplement 1. By doing so, we hope to capture initial directionality, irrespective of whether later parts of the path change direction. We find that directionality increases with increasing presentation duration.
R2.5: I find the discussion point on myopia a bit confusing. Considering that this is a rather tangential point and there are only two myopic participants, I would suggest either removing it from the discussion or explaining it more clearly.
We changed this section, also in response to comment R1.1.
R2.6: I would suggest adding to the discussion more elaboration on how these results may relate to acuity in normal conditions (in the presence of optical aberrations). For example, will this relationship between sampling cone density and visual acuity also hold under natural viewing conditions?
We added only a half sentence to the first paragraph of the Discussion. We are hesitant to extend this because there is very likely a non-straightforward relationship between acuity in normal and fully corrected conditions. We would predict that, if each eye were given the same type and magnitude of aberrations (similar to what we achieved by removing them), cone density would be the most prominent factor in acuity differences. Given that individual aberrations can vary substantially between eyes, this effect will be diluted, up to the point where aberrations become the most important factor for acuity. As an example, under natural viewing conditions, pupil size will dominantly modulate the magnitude of aberrations.
R2.7: Line 398 - the point on the superdiffusive nature of drift comes out of the blue and it is unclear. What is it meant by "superdiffusive"?
We simply wanted to express that some drift properties seem to be adaptable while others aren’t. The text was changed at this location to remove this seemingly unmotivated term.
R2.8: Although it is true that drift has been assumed to be a random motion, there has been mounting evidence, especially in recent years, showing a degree of control and knowledge about ocular drift (eg. Poletti et al, 2015, JN; Lin et al, 2023, Current Biology).
We agree, of course. We mention this fact several times in the paper and adjusted some sentences to prevent misunderstandings. The mentioned papers are now cited in the Discussion.
R2.9: Reference 23 is out of context and should be removed as it deals with the control of fine spatial attention in the foveola rather than microsaccades or drift.
We removed this reference.
R2.10: Minor point: Figures appear to be low resolution in the pdf.
This seems to have been an issue with the submission process. All figures will be available in high resolution in the final online version.
R2.11: Figure S3, it would be useful to mark the CDC at the center with a different color maybe shaded so it can be visible also on the plot on the left.
We changed the color and added a small amount of transparency to the PRL markers to make the CDC marker more visible.
R2.12: Figure S2, it would be useful to show the same graphs with respect to the PCD and PRL and maybe highlight the subjects who showed the largest (or smallest) distance between PRL and CDC).
Please find the new Figure 4 – figure supplement 1, which contains this information in the group histograms. Also, Figure 4 – figure supplement 2 is now ordered by the PRL-CDC distance (while the participant naming is kept ordered by maximum acuity exhibited). In this way, it should be possible to infer whether the PRL-CDC distance plays a role. To us, it does not seem to be crucial. Rather, stimulus onset and drift length were related, which is captured in Figure 4g.
R2.13: There is a typo in Line 410.
We could not find a typo in this line, nor in the ones above and below. “Interindividual” was written on purpose, maybe “intraindividual” was expected? No changes were made to the text.
References
Reiniger, J. L., Domdei, N., Holz, F. G., & Harmening, W. M. (2021). Human gaze is systematically offset from the center of cone topography. Current Biology, 31(18), 4188–4193. https://doi.org/10.1016/j.cub.2021.07.005
Ruderman, D. L., & Bialek, W. (1992). Seeing Beyond the Nyquist Limit. Neural Computation, 4(5), 682–690. https://doi.org/10.1162/neco.1992.4.5.682
Warr, E., Grieshop, J., Cooper, R. F., & Carroll, J. (2024). The effect of sampling window size on topographical maps of foveal cone density. Frontiers in Ophthalmology, 4, 1348950. https://doi.org/10.3389/fopht.2024.1348950
Williams, D. R. (1985). Aliasing in human foveal vision. Vision Research, 25(2), 195–205. https://doi.org/10.1016/0042-6989(85)90113-0
Wynne, N., Cava, J. A., Gaffney, M., Heitkotter, H., Scheidt, A., Reiniger, J. L., Grieshop, J., Yang, K., Harmening, W. M., Cooper, R. F., & Carroll, J. (2022). Intergrader agreement of foveal cone topography measured using adaptive optics scanning light ophthalmoscopy. Biomedical Optics Express, 13(8), 4445–4454. https://doi.org/10.1364/boe.460821
-
-
kit.riveractionuk.com kit.riveractionuk.com
-
Dan to create the video for under the title.
-
Back to home
I think it's strange to have the BACK TO HOME sign here. Could it instead be resources?
-
-
static1.squarespace.com static1.squarespace.com
-
it’s not the best place to go just to figure out life. Those decisions need to be made before ever a student steps foot on campus
Very very valid point
-
Nor should we as a society continue to tout college as the better choice, because that depends on the student’s career and lifestyle goals
Individualized experience
-