- Nov 2024
learn.cantrill.io
Welcome back and in this lesson I'm going to be covering EC2 auto scaling groups, which are how we can configure EC2 to scale automatically based on the demand placed on a system.
Auto scaling groups are generally used together with elastic load balancers and launch templates to deliver elastic architectures.
Now we've got a lot to cover so let's jump in and get started.
Auto scaling groups do one thing.
They provide auto scaling for EC2.
Strictly speaking they can also be used to implement a self healing architecture as part of that scaling or in isolation.
Now auto scaling groups make use of configuration defined within launch templates or launch configurations and that's how they know what to provision.
An auto scaling group uses one launch configuration or one specific version of a launch template which is linked to it.
You can change which of those is associated, but it's only one of them at a time, and so all instances launched by the auto scaling group are based on this single configuration definition, either defined inside a specific version of a launch template or within a launch configuration.
Now an auto scaling group has three super important values associated with it.
We've got the minimum size, the desired capacity and the maximum size. These are often referred to as min, desired and max, and can be expressed as x, y, z.
For example 1, 2, 4 means a minimum of 1, a desired capacity of 2 and a maximum of 4.
Now an auto scaling group has one foundational job which it performs.
It keeps the number of running EC2 instances the same as the desired capacity and it does this by provisioning or terminating instances.
So the desired capacity always has to be equal to or between the minimum and maximum sizes.
If you have a desired capacity of 2 but only one running EC2 instance then the auto scaling group provisions a new instance.
If you have a desired capacity of 2 but have three running EC2 instances then the auto scaling group will terminate an instance to make these two values match.
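This reconciliation behaviour can be sketched as a tiny model. This is purely illustrative, not how AWS actually implements it:

```python
def reconcile(desired: int, running: int) -> dict:
    """Model of the auto scaling group's one foundational job: keep the
    number of running instances equal to the desired capacity."""
    if running < desired:
        return {"action": "provision", "count": desired - running}
    if running > desired:
        return {"action": "terminate", "count": running - desired}
    return {"action": "none", "count": 0}

# Desired 2 with 1 running: provision one more.
# Desired 2 with 3 running: terminate one.
```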
Now you can keep an auto scaling group entirely manual so there's no automation and no intelligence.
You just update values and the auto scaling group performs the necessary scaling actions.
Normally though scaling policies are used together with auto scaling groups.
Scaling policies can update the desired capacity based on certain criteria, for example CPU load, and if the desired capacity is updated then, as I've just mentioned, the group will provision or terminate instances. Visually, this is how it looks.
We have an auto scaling group and these run within a VPC across one or more subnets.
The configuration for EC2 instances is provided either using launch templates or launch configurations and then on the auto scaling group we specify a minimum value.
In this case 1 and this means there will always be at least one running EC2 instance.
In this case the cat pictures blog.
We can also set a desired capacity, in this example 2.
If the desired capacity is higher than the current number of instances, then instances are added.
Finally we could set the maximum size in this case to 4 which means that two additional instances could be provisioned but they won't immediately be because the desired capacity is only set to 2 and there are currently two running instances.
We could manually adjust the desired capacity up or down to add or remove instances which would automatically be built based on the launch template or launch configuration.
Alternatively we could use scaling policies to automate that process and scale in or out based on sets of criteria.
Architecturally auto scaling groups define where instances are launched.
They're linked to a VPC and subnets within that VPC are configured on the auto scaling group.
Whatever subnets are configured will be used to provision instances into.
When instances are provisioned there's an attempt to keep the number of instances within each availability zone even.
So in this case if the auto scaling group was configured with three subnets and the desired capacity was also set to three then it's probable each subnet would have one EC2 instance running within it but this isn't always the case.
The auto scaling group will try and level capacity where available.
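That levelling behaviour can be modelled as simple round-robin placement across the configured subnets. Subnet names here are made up, and the real service's placement logic is more involved than this sketch:

```python
def distribute(instance_count, subnets):
    """Model of even placement: round-robin new instances across the
    configured subnets to keep each AZ's instance count level."""
    placement = {subnet: 0 for subnet in subnets}
    for i in range(instance_count):
        placement[subnets[i % len(subnets)]] += 1
    return placement

# Three subnets, desired capacity of three: one instance per subnet.
print(distribute(3, ["subnet-a", "subnet-b", "subnet-c"]))
```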
Scaling policies are essentially rules.
Rules which you define which can adjust the values of an auto scaling group and there are three ways that you can scale auto scaling groups.
The first is not really a policy at all; it's just manual scaling, which I just talked about.
This is where you manually adjust the values at any time and the auto scaling group handles any provisioning or termination that's required.
Next there's scheduled scaling, which is great for sales periods where you can scale out the group when you know there's going to be additional demand, or scale in outside of business hours when you know a system won't be used.
Scheduled scaling adjusts the desired capacity based on schedules and this is useful for any known periods of high or low usage.
For the exam if you have known periods of usage then scheduled scaling is going to be a great potential answer.
Then we have dynamic scaling and there are three subtypes.
What they all have in common is they are rules which react to something and change the values on an auto scaling group.
The first is simple scaling and this well it's simple.
This is most commonly a pair of rules one to provision instances and one to terminate instances.
You define a rule based on a metric and an example of this is CPU utilization.
If the metric for example CPU utilization is above 50% then adjust the desired capacity by adding one and if the metric is below 50% then remove one from the desired capacity.
Using this method you can scale out meaning adding instances or scale in meaning terminating instances based on the value of a metric.
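A simple scaling policy pair like the one just described can be modelled in a few lines. The 50% threshold matches the example above; everything else is illustrative:

```python
def simple_scale(cpu_percent, desired, minimum, maximum):
    """Simple scaling: one rule adds an instance above the threshold,
    a paired rule removes one below it; desired stays within min/max."""
    if cpu_percent > 50:
        desired += 1
    elif cpu_percent < 50:
        desired -= 1
    # The desired capacity can never leave the min/max bounds.
    return max(minimum, min(maximum, desired))
```

So with a 1, 2, 4 group, CPU at 75% scales out to a desired capacity of 3, and CPU at 20% scales in to the minimum of 1.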
Now this metric isn't limited to CPU it can be many other metrics including memory or disk input output.
Some metrics need the CloudWatch agent to be installed.
You can also use some metrics not on the EC2 instances.
For example, maybe the length of an SQS queue, which we'll cover elsewhere in the course, or a custom performance metric within your application, such as response time.
We also have stepped scaling which is similar but you define more detailed rules and this allows you to act depending on how out of normal the metric value is.
So maybe add one instance if the CPU usage is above 50% but if you have a sudden spike of load maybe add three if it's above 80% and the same could happen in reverse.
Step scaling allows you to react quicker the more extreme the change in conditions.
Step scaling is almost always preferable to simple except when your only priority is simplicity.
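Step scaling extends the previous sketch with multiple bands. The thresholds and step sizes below are illustrative, matching the 50%/80% example from above:

```python
def step_scale(cpu_percent, desired, minimum, maximum):
    """Step scaling: bigger adjustments the further the metric is from
    normal. Thresholds and step sizes here are illustrative."""
    if cpu_percent >= 80:
        step = 3            # sudden spike of load: react harder
    elif cpu_percent >= 50:
        step = 1
    elif cpu_percent <= 20:
        step = -3
    elif cpu_percent <= 40:
        step = -1
    else:
        step = 0            # within the normal band: no change
    return max(minimum, min(maximum, desired + step))
```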
And then lastly we have target tracking and this takes a slightly different approach.
It lets you define an ideal amount of something say 40% aggregate CPU and then the group will scale as required to stay at that level provisioning or terminating instances to maintain that desired amount or that target amount.
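One way to picture target tracking is as proportional scaling: capacity is adjusted so the metric should land back on the target. This is a simplification of what the service actually does, for intuition only:

```python
import math

def target_track(current_capacity, metric_value, target):
    """Target tracking modelled as proportional scaling: if the metric
    is double the target, roughly double the capacity."""
    return max(1, math.ceil(current_capacity * metric_value / target))

# Aggregate CPU at 80% against a 40% target: roughly double the capacity.
print(target_track(2, 80, 40))
```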
Now not all metrics work for target tracking, but some examples of ones that are supported are average CPU utilization, average network in, average network out and the one that's relevant to application load balancers, request count per target.
Now lastly there's a configuration on an auto scaling group called a cooldown period and this is a value in seconds.
It controls how long to wait at the end of a scaling action before doing another.
It allows auto scaling groups to wait and review chaotic changes to a metric and can avoid costs associated with constantly adding or removing instances.
Because remember, there is a minimum billable period, so you'll be billed for at least that minimum time every time an instance is provisioned, regardless of how long you use it for.
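The cooldown itself is just a time gate on scaling actions, which can be modelled like this (an illustrative sketch, not the AWS implementation):

```python
def scaling_allowed(now_seconds, last_action_seconds, cooldown_seconds):
    """Cooldown gate: a new scaling action only proceeds once the
    cooldown period has fully elapsed since the previous action."""
    return (now_seconds - last_action_seconds) >= cooldown_seconds

# With a 300-second cooldown, an action 100 seconds after the last one
# is suppressed, avoiding churn (and cost) from a chaotic metric.
```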
Now auto scaling groups also monitor the health of instances that they provision.
By default this uses the EC2 status checks.
So if an EC2 instance fails, EC2 detects this and passes it on to the auto scaling group, which terminates the failed EC2 instance and then provisions a new one in its place.
This is known as self healing and it will fix most problems isolated to a single instance.
The same would happen if we terminated an instance manually.
The auto scaling group would simply replace it.
Now there's a trick with EC2 and auto scaling groups.
If you create a launch template which can automatically build an instance then create an auto scaling group using that template.
Set the auto scaling group to use multiple subnets in different availability zones.
Then set the auto scaling group to use a minimum of one, a maximum of one and a desired of one.
Then you have simple instance recovery.
The instance will recover if it's terminated or if it fails.
And because auto scaling groups work across availability zones the instance can be reprovisioned in another availability zone if the original one fails.
It's cheap, simple and effective high availability.
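That recovery trick boils down to a configuration shaped like this. The template name and subnet IDs are made up for illustration:

```python
# Hypothetical representation of the recovery group described above.
recovery_asg = {
    "LaunchTemplate": "cat-blog-template",                      # knows how to build the instance
    "Subnets": ["subnet-az-a", "subnet-az-b", "subnet-az-c"],   # multiple AZs
    "MinSize": 1,
    "DesiredCapacity": 1,
    "MaxSize": 1,
}

# With min == desired == max == 1 the group can never scale; all it can
# do is replace its single instance, in any configured AZ, if it fails.
```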
Now auto scaling groups are really cool on their own but their real power comes from their ability to integrate with load balancers.
Take this example that Bob is browsing to the cat blog that we've been using so far and he's now connecting through a load balancer.
And the load balancer has a listener configured for the blog and points at a target group.
Instead of statically adding instances or other resources to the target group then you can use an auto scaling group configured to integrate with the target group.
As instances are provisioned within the auto scaling group then they're automatically added to the target group of that load balancer.
And then as instances are terminated by the auto scaling group then they're removed from that target group.
This is an example of elasticity because metrics which measure load on a system can be used to adjust the number of instances.
These instances are effectively added as load balancer targets and any users of the application because they access via the load balancer are abstracted away from the individual instances and they can use the capacity added in a very fluid way.
And what's even more cool is that the auto scaling group can be configured to use the load balancer health checks rather than EC2 status checks.
Application load balancer checks can be much richer.
They can monitor the state of HTTP or HTTPS requests.
And because of this they're application aware, which the simple status checks that EC2 provides are not.
Be careful though you need to use an appropriate load balancer health check.
If your application has some complex logic within it and you're only testing a static HTML page then the health check could respond as okay even though the application might be in a failed state.
And the inverse of this: if your application uses databases and your health check tests a page with database access requirements, then if the database fails, all of your health checks could fail, meaning all of your EC2 instances would be terminated and reprovisioned, when the problem is with the database, not the instances.
And so you have to be really careful when it comes to setting up health checks.
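Both failure modes just described can be seen in a toy model, where instance state is represented as plain flags for illustration:

```python
def shallow_check(instance):
    """Tests only a static page, so it misses broken application logic."""
    return instance["static_page_ok"]

def deep_check(instance):
    """Tests a page needing database access, so a database outage fails
    the check on every instance even though the instances are healthy."""
    return instance["static_page_ok"] and instance["database_ok"]

# Broken app behind a working static page: the shallow check still passes.
broken_logic = {"static_page_ok": True, "database_ok": True}

# Healthy instances, failed database: the deep check fails for the whole
# fleet, so the ASG would replace instances that were never the problem.
fleet = [{"static_page_ok": True, "database_ok": False} for _ in range(3)]
```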
Now the next thing I want to talk about is scaling processes within an auto scaling group.
So you have a number of different processes or functions performed by the auto scaling group.
And these can be set to either be suspended or they can be resumed.
So first we've got launch and terminate, and if launch is set to suspend, then the auto scaling group won't scale out in response to any alarms or scheduled actions.
And the inverse is if terminate is set to suspend then the auto scaling group will not terminate any instances.
We've also got add to load balancer and this controls whether any instances provisioned are added to the load balancer.
Next we've got alarm notification, and this controls whether the auto scaling group will react to any CloudWatch alarms.
We've also got AZ rebalance, and this controls whether the auto scaling group attempts to redistribute instances across availability zones.
We've got health check and this controls whether instance health checks across the entire group are on or off.
We've also got replace unhealthy which controls whether the auto scaling group will replace any instances marked as unhealthy.
We've got scheduled actions which controls whether the auto scaling group will perform any scheduled actions or not.
And then in addition to those you can set a specific instance to either be standby or in service.
And this allows you to suspend any activities of the auto scaling group on a specific instance.
So this is really useful if you need to perform maintenance on one or more EC2 instances you can set them to standby and that means they won't be affected by anything that the auto scaling group does.
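The effect of suspending a process, or of placing an instance on standby, can be sketched as two small gates (process and state names below follow the lesson; this is a model, not the service's API):

```python
def process_enabled(process, suspended):
    """A suspended process blocks that activity across the whole group."""
    return process not in suspended

def instance_managed(instance_state):
    """An instance set to standby is left alone by the auto scaling
    group; only in-service instances are subject to its actions."""
    return instance_state == "InService"

# Suspending Terminate: the group can still launch, but never terminates.
suspended = {"Terminate"}
```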
Now before we finish I just want to talk about a few final points and these are really useful for the exam.
Auto scaling groups are free.
The only costs are for the resources created by the auto scaling group and to avoid excessive costs use cooldowns within the auto scaling group to avoid rapid scaling.
To be cost effective you should also think about using more smaller instances because this means you have more granular control over the amount of compute and therefore costs that are incurred by your auto scaling group.
So if you have two larger instances and you need to add one that's going to cost you a lot more than if you have 20 smaller instances and only need to add one.
Smaller instances mean more granularity which means you can adjust the amount of compute in smaller steps and that makes it a more cost effective solution.
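The granularity argument is just arithmetic, which a one-line helper makes concrete:

```python
def added_step_fraction(current_instances):
    """Adding one instance to a fleet of N equal-sized instances grows
    capacity, and cost, by 1/N; more smaller instances mean finer steps."""
    return 1 / current_instances

# Two large instances: adding one is a 50% jump in compute and cost.
# Twenty small instances: adding one is only a 5% step.
```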
Now auto scaling groups are used together with application load balancers for elasticity. The load balancer provides a level of abstraction away from the instances provisioned by the auto scaling group, so together they're used to provision elastic architectures.
And lastly an auto scaling group controls the when and the where so when instances are launched and which subnets they're launched into.
Launch templates or launch configurations define the what so what instances are launched and what configuration those instances have.
Now at this point that's everything I wanted to cover in this lesson it's been a huge amount of theory for one lesson but these are really essential concepts that you need to understand for the exam.
So go ahead and complete this lesson and when you're ready I look forward to you joining me in the next.
Welcome back and in this lesson I want to cover two features of EC2, launch configurations and launch templates.
Now they both perform a similar thing, but launch templates came after launch configurations and include extra features and capabilities.
Now I want this lesson to be fairly brief, launch configurations and launch templates are actually relatively easy to understand.
What we're going to be covering in the next lesson is auto scaling groups which utilize either launch configurations or launch templates.
So I'll try to keep this lesson as focused as possible, but let's jump in and get started.
Launch configurations and launch templates at a high level perform the same task.
They allow the configuration of EC2 instances to be defined in advance.
They're documents which let you configure things like the AMI to use, the instance type and size, the configuration of the storage which instances use, and the key pair which is used to connect to the instance.
They also let you define the networking configuration and security groups that an instance uses.
They let you configure the user data which is provided to the instance and the IAM role which is attached to the instance used to provide the instance with permissions.
Everything which you usually define at the point of launching an instance, you can define in launch configurations and launch templates.
Now neither of these is editable.
You define them once and that configuration is locked.
Launch templates as the newer of the two allow you to have versions, but for launch configurations versions aren't available.
Launch templates also have additional features or allow you to control features of the newer types of instances.
Things like T2 or T3 unlimited CPU options, placement groups, capacity reservations, and things like elastic graphics.
AWS recommends using launch templates at this point in time because they're a superset of launch configurations.
They provide all of the features that launch configuration provides and more.
Architecturally, launch templates also offer more utility.
Launch configurations have one use.
They're used as part of auto scaling groups which we'll be talking about later in this section.
Auto scaling groups offer automatic scaling for EC2 instances and launch configurations provide the configuration of those EC2 instances which will be launched by auto scaling groups.
And as a reminder, they're not editable nor do they have any versioning capability.
If you need to adjust the configuration inside a launch configuration, you need to create a new one and use that new launch configuration.
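That immutability, plus the versioning which launch templates add on top, can be modelled like this (AMI IDs and field names are illustrative):

```python
class LaunchTemplate:
    """Model of launch template versioning: each version is immutable,
    so a configuration change means appending a new version, never
    editing an old one. Launch configurations lack versions entirely,
    so a change there means a whole new launch configuration."""

    def __init__(self, initial_config):
        self.versions = [initial_config]

    def add_version(self, config):
        self.versions.append(config)
        return len(self.versions)  # the new version number

lt = LaunchTemplate({"ami": "ami-1111", "instance_type": "t3.micro"})
new_version = lt.add_version({"ami": "ami-2222", "instance_type": "t3.micro"})
# Version 1 is untouched; version 2 carries the new AMI.
```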
Now launch templates, they can also be used for the same thing.
So providing EC2 configuration which is used within auto scaling groups.
But in addition, they can also be used to launch EC2 instances directly from the console or the CLI.
So good old Bob can define his instance configuration in advance and use that when launching EC2 instances.
Now you'll get the opportunity to create and use launch templates in the series of demo lessons later in this section.
For now, I just wanted to cover all of the theory back to back so you can appreciate how it all fits together.
That's everything though that I wanted to cover in this lesson about launch configurations and launch templates.
In the next lesson, I'll be talking about auto scaling groups which are closely related.
Both of them work together to allow EC2 instances to scale in response to the incoming load on a system.
But for now, go ahead and finish this video and when you're ready, I look forward to speaking to you in the next.
Welcome back and in this lesson, I want to cover application and network load balancers in a little bit more detail.
It's critical for the exam that you understand when to pick application load balancers and when to pick network load balancers.
They're each suited to very different situations.
Now we do have a lot to cover, so let's jump in and get started.
I want to start by talking about consolidation of load balancers.
Historically, when using classic load balances, you connected instances directly to the load balancer or you integrated an auto scaling group directly with that load balancer, an architecture which looked something like this.
So a single domain name, catagram.io, using a single classic load balancer, which has a single SSL certificate attached for that domain. An auto scaling group is attached to it, and the classic load balancer distributes connections over those instances.
The problem is that this doesn't scale, because classic load balancers don't support SNI and you can't have multiple SSL certificates per load balancer. So every single unique HTTPS application that you have requires its own classic load balancer, and this is one of the many reasons that classic load balancers should be avoided.
With this example, we have Catagram and Dogogram, and both of these are HTTPS applications, so the only way to deliver them is to have two different classic load balancers.
Compare this to the same application architecture, so both of these applications, Catagram and Dogogram, only this time using a single application load balancer.
So this is handling both applications, Catagram and Dogogram.
This time we can use listener based rules and I'll talk about what these do later in the lesson but each of these listener based rules can have an SSL certificate handling HTTPS for both domains.
Then we can have host based rules which direct incoming connections at multiple target groups which forward these on to multiple auto scaling groups.
This is a two to one consolidation so we've halved the number of load balancers required to deliver these two different applications.
But imagine how this would look if we had a hundred legacy applications and each of these used a classic load balancer.
Moving from version one to version two offers significant advantages and one of those is consolidation.
So now I just want to focus on some of the key points about application load balancers.
So these are things which are specific to the version two or application load balancer.
First, it's a true layer seven load balancer and it's configured to listen on either HTTP or HTTPS protocols.
So these are layer seven application protocols and an application load balancer understands both of these and can interpret information carried using both of those protocols.
Now the flip side to this is that the application load balancer can't understand any other layer seven protocols.
So things such as SMTP, SSH or any custom gaming protocols are not supported by a layer seven load balancer such as the application load balancer and that's important to understand.
Now additionally, the application load balancer has to listen using HTTP or HTTPS listeners.
It cannot be configured to directly listen using TCP, UDP or TLS.
And that does have some important limitations and considerations that you need to be aware of.
And I'll talk about that later on in this lesson.
Now because it's a layer seven load balancer, it can understand layer seven content.
So things like the type of the content, any cookies which are used by your application, custom headers, user location and application behavior.
The layer seven load balancer, so the application load balancer is able to inspect all of the layer seven application protocol information and make decisions based on that information.
And that's something that the network load balancer cannot do.
It has to be a layer seven load balancer so the application load balancer to understand all of these individual components.
Now an important consideration about the application load balancer concerns incoming connections, so HTTP or HTTPS, and remember, HTTPS is just HTTP transiting using SSL or TLS.
In all of these cases, whichever type of connection is used, it's terminated on the application load balancer.
And this means that you can't have an unbroken SSL connection from your customer through to your application instances.
It's always terminated on the load balancer and then a new connection is made from the load balancer through to the application.
This is important because this is the type of thing that matters to security teams.
And if your business operates in a fairly strict security environment, then this might well be very important.
And in some cases, it can exclude using an application load balancer.
So it can't do end-to-end unbroken SSL encryption between a client and your application instances.
And it also means that all application load balancers which use HTTPS must have SSL certificates installed on that load balancer.
Because the connection has to be terminated on the load balancer and then a new connection made to the instances.
Now application load balancers are also slower than network load balancers because there are additional levels of the networking stack which need to be processed.
So the more levels of the networking stack which are involved, the more complexity there is, and the slower the processing.
So if you're facing any exam questions which are really strict on performance, then you might want to look at network load balancers rather than application load balancers.
A benefit though that application load balancers offer is because they're layer seven, then they can evaluate the application health at layer seven.
So in addition to just testing for a successful network connection, they can actually make an application layer request to the application to ensure that it's functioning correctly.
Now application load balancers also have the concept of rules and rules direct connections which arrive at a listener.
So if you make a connection to a load balancer, what the load balancer does with that connection is determined by any rules and rules are processed in priority order.
You can have many rules which might affect a given set of traffic and they're processed in priority order.
And the last one to be processed is the default rule which is a catch all.
But you can add additional rules and each of these can have conditions.
Now things that you can have inside the conditions of a rule include checking for things like host headers, HTTP headers, HTTP request methods, path patterns, query strings and even source IP.
So these rules could take different actions depending on which domain name you're asking for, catagram.io or dogogram.io.
They can perform different actions based on which path you're looking for.
So images or API, they can even take different decisions based on query string and even make different decisions based on the source IP address of any customers connecting to that application load balancer.
Now rules can also have actions.
These are the things that the rules do with the traffic.
So they can forward that traffic through to a target group.
They can redirect traffic at something else.
So maybe another domain name.
They can provide a fixed HTTP response, a certain error code or a certain success code and they can even perform certain types of authentication.
So using open ID or using Cognito.
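The rule machinery just described, priority ordering, conditions, actions and a catch-all default, can be sketched as follows. The rule set, target group names and priorities are made up for illustration:

```python
def evaluate(rules, request, default_action="forward:default-tg"):
    """Rules are processed in priority order; the first rule whose
    condition matches wins, and the catch-all default runs last."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["condition"](request):
            return rule["action"]
    return default_action

rules = [
    {"priority": 10,                                    # host-header condition
     "condition": lambda r: r.get("host") == "catagram.io",
     "action": "forward:catagram-tg"},
    {"priority": 20,                                    # path-pattern condition
     "condition": lambda r: r.get("path", "").startswith("/api"),
     "action": "forward:api-tg"},
]
```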
Now this is how it looks visually.
This is a simple application load balancer deployment, a single domain, catagram.io.
We've got one host-based rule with an attached SSL certificate, and the rule is using a host header as a condition and forward as an action.
So it's forwarding any connections for catagram.io to the target group for the Catagram application.
But what if you want additional functionality?
Well, let's take a look.
First, let's imagine that we want to use the same application load balancer for a corporate client who's trying to access catagram.io.
Maybe users of Bowtie Incorporated who use the 1.3.3.7 IP address are attempting to access our load balancer and we want to present them with an alternative version of the application.
Well, we can easily handle that by defining a listener rule but this time the condition will be the source IP address of 1.3.3.7.
Now this rule would have an action to forward traffic at a separate target group, an auto scaling group which handles a second set of instances dedicated for this corporate client because the application load balancer is a layer seven device.
It can see inside the HTTP protocol and make decisions based on anything within that protocol, or anything up to layer seven.
Now it's worth pointing out in addition that because this is a layer seven load balancer, the connection from the load balancer to the instances for target group two will be a separate set of connections.
And that's why it's in a slightly different color of purple.
The HTTP connection from our enterprise users are terminated on the load balancer and there's a separate set of connections through to our application instances.
There's no option to pass through the encrypted connection to the instances.
It has to be terminated.
Now this might not matter but it's something that you need to know for the exam.
If you have to forward encrypted connections through to the instances without terminating them on the load balancer, then you need to use a network load balancer.
Now because it's a layer seven load balancer, you can also use rules which work on layer seven elements of the protocol.
You could route based on paths or anything else in the HTTP protocol such as headers.
And you can also redirect traffic from a HTTP level.
An example: let's say that this ALB was also handling traffic for Dogogram.
Well, you could define a rule which matched the dogogram.io domain name, and as an action, instead of forwarding, you could configure a redirect towards catagram.io, the obviously superior website.
And these are just a small subset of the features which are available within the application load balancer because it's layer seven, you can pretty much perform routing decisions based on anything which you can observe at layer seven and that makes it a really flexible product.
Before we finish this lesson, let's take a quick look at network load balancers.
Network load balancers function at layer four.
So they are layer four devices, which means that they can interpret TCP, TLS and UDP protocols.
But the flip side of this is that they have no visibility or understanding of HTTP or HTTPS.
And this means that they can't interpret headers, they can't see or interpret cookies and they've got no concept of session stickiness from a HTTP perspective because that uses cookies which the network load balancer cannot interpret because that's a layer seven entity.
Now network load balancers are really, really, really fast.
They can handle millions of requests per second and have around 25% of the latency of application load balancers.
And again, this is because they don't have to deal with any of the computationally heavy upper layers of the networking stack.
They only have to deal with layer four.
This also means that they're ideal to deal with any non-HTTP or HTTPS protocols.
So examples might be SMTP email, SSH, game servers which don't use either of the web protocols and any financial applications which are not HTTP or HTTPS.
So if you see any exam questions which talk about things which aren't web or secure web and don't use HTTP or HTTPS, then you should probably default to network load balancers.
One of the downsides of not being aware of layer seven is that health checks which are performed by network load balancers only check ICMP and basic TCP handshaking.
So they're not application aware.
You can't do detailed health checking with network load balancers.
A benefit of network load balancers is that they can be allocated static IP addresses, which is really useful for whitelisting if you have any corporate clients.
So corporate clients can decide to whitelist the IPs of network load balancers and allow them straight through their firewall.
And this is great for any strict security environments that you need to operate in.
Another benefit is that they can forward TCP straight through to instances.
Now, if you're familiar with the networking stack, how this works is that upper layers build on layers below them.
So because the network load balancer doesn't understand HTTP or HTTPS, then you can configure a listener to accept TCP only traffic and then forward that through to instances.
And what that means is that any of the layers that are built on top of TCP are not terminated on the load balancer.
And so they're not interrupted.
And this means that you can forward unbroken channels of encryption directly from your clients through to your application instances.
And this is a really important thing to remember for the exam.
So network load balancers and TCP listeners is how you can do unbroken end-to-end encryption.
Network load balancers are also used for private link to provide services to other VPCs.
And this is another really important thing to remember for the exam.
Now, just to finish up this lesson, I want to do a quick comparison of a number of facts that you can use to decide between network load balancing and application load balancing.
And I find it easier to remember the things which you should be using a network load balancer for.
And then if the scenario is none of those, then you can default to using an application load balancer.
So let's step through the reasons why you might choose to use a network load balancer.
Well, the first one is the one we've just discussed.
If you want to perform unbroken encryption between a client and your instances, then use network load balancers.
If you need to use static IPs for whitelisting, then again, network load balancers.
If you want the absolute best performance, so millions of requests per second and low latency, then again, network load balancers.
If you need to operate on protocols which are not HTTP or not HTTPS, then you need to use network load balancers.
And then finally, if you have any requirement which involves private link, then you need to use network load balancers.
And for anything else, default to using application load balancers because the additional functionality provided by these devices is often really valuable to most scenarios.
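The decision rules just listed can be sketched as a small helper function. This is only an illustrative encoding of the lesson's checklist; the function and parameter names are made up for this example.

```python
def choose_load_balancer(*, unbroken_encryption=False, static_ip=False,
                         highest_performance=False, non_http_protocol=False,
                         private_link=False):
    """Pick an ELB type using the lesson's decision rules.

    If any NLB-specific requirement is present, choose a Network Load
    Balancer; otherwise default to an Application Load Balancer.
    """
    nlb_required = (unbroken_encryption or static_ip or highest_performance
                    or non_http_protocol or private_link)
    return "NLB" if nlb_required else "ALB"

print(choose_load_balancer(static_ip=True))   # NLB: whitelisting needs static IPs
print(choose_load_balancer())                 # ALB: the sensible default
```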
Now, with that being said, that's everything I wanted to cover about application load balancers and network load balancers for the exam.
Go ahead and complete this video.
And when you're ready, I'll look forward to you joining me in the next.
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
This time, we have a typical multi-tiered application.
We start with a VPC and inside that two availability zones.
On the left, we also have an internet-facing Load Balancer.
Then we have a Web Instance Auto Scaling Group providing the front-end capability of the application.
Then we have another Load Balancer, this time an internal Load Balancer, with only private IP addresses allocated to the nodes.
Next, we have an Auto Scaling Group for the application instances.
These are used by the web servers for the application.
Then on the right, we have a pair of database instances.
In this case, let's assume they're both Aurora database instances.
So we have three tiers, Web, Application and Database.
Now, without Load Balancers, everything would be tied to everything else.
Our user, Bob, would have to communicate with a specific instance in the web tier; if this failed or scaled, then Bob's experience would be disrupted.
The instance that Bob is connected to would itself connect to a specific instance in the application tier, and if that instance failed or scaled, then again, Bob's experience would be disrupted.
What we can do to improve this architecture is to put Load Balancers between the application tiers to abstract one tier from another.
And how this changes things is that Bob actually communicates with an ELB node, and this ELB node sends this connection through to a particular web server.
But Bob has no knowledge of which web server he's actually connected to because he's communicating via a Load Balancer.
If instances are added or removed, then he would be unaware of this fact because he's abstracted away from the physical infrastructure by the Load Balancer.
Now, the web instance that Bob is using, it would need to communicate with an instance of the application tier, and it would do this via an internal Load Balancer.
And again, this represents an abstraction of communication.
So in this case, the web instance that Bob is connected to isn't aware of the physical deployment of the application tier.
It's not aware of how many instances exist, nor which one it's actually communicating with.
And then at this point, to complete this architecture, the application server that's being used would use the database tier for any persistent data storage needs.
Now, without using Load Balancers with this architecture, all the tiers are tightly coupled together.
They need an awareness of each other.
Bob would be connecting to a specific instance in the web tier.
This would be connecting to a specific instance in the application tier.
And all of these tiers would need to have an awareness of each other.
Load Balancers remove some of this coupling.
They loosen the coupling.
And this allows the tiers to operate independently of each other because of this abstraction.
Crucially, it allows the tiers to scale independently of each other.
In this case, for example, it means that if the load on the application tier increased beyond the ability of two instances to service that load, then the application tier could grow independently of anything else, in this case scaling from two to four instances.
The web tier could continue using it with no disruption or reconfiguration because it's abstracted away from the physical layout of this tier, because it's communicating via a Load Balancer.
It has no awareness of what's happening within the application tier.
Now, we're going to talk about these architectural implications in depth later on in this section of the course.
But for now, I want you to be aware of the architectural fundamentals.
And one other fundamental that I want you to be completely comfortable with is cross zone load balancing.
And this is a really essential feature to understand.
So let's look at an example visually.
Bob accessing a WordPress blog, in this case, The Best Cats.
And we can assume because this is a really popular and well-architected application that it's going to be using a load balancer.
So Bob uses his device and browsers to the DNS name for the application, which is actually the DNS name of the load balancer.
We know now that a load balancer by default has at least one node per availability zone that it's configured for.
So in this example, we have a cut down version of the Animals for Life VPC, which is using two availability zones.
So in this case, an application load balancer will have a minimum of two nodes, one in each availability zone.
And the DNS name for the load balancer will direct any incoming requests equally across all of the nodes of the load balancer.
So in this example, we have two nodes, one in each availability zone.
Each of these nodes will receive a portion of incoming requests based on how many nodes there are.
For two nodes, it means that each node gets 100% divided by two, which represents 50% of the load that's directed at each of the load balancer nodes.
Now, this is a simple example.
In production situations, you might have more availability zones being used, and at higher volume, so higher throughput, you might have more nodes in each availability zone.
But this example keeps things simple.
So however much incoming load is directed at the load balancer DNS name, each of the load balancer nodes will receive 50% of that load.
Now, originally load balancers were restricted in terms of how they could distribute the connections that they received.
Initially, the way that it worked is that each load balancer node could only distribute connections to instances within the same availability zone.
Now, this might sound logical, but consider this architecture where we have four instances in availability zone A and one instance in availability zone B.
This would mean that the load balancer node in availability zone A would split its incoming connections across all instances in that availability zone, which is four ways.
And the node in availability zone B would also split its connections up between all the instances in the same availability zone.
But because there's only one, that would mean 100% of its connections to the single EC2 instance.
Now, with this historic limitation, it means that node A would get 50% of the overall connections and would further split this down four ways, which means each instance would be allocated 12.5% of the overall load.
Node B would also receive 50% of the overall load.
And normally it would split that down across all instances also in that same availability zone.
But because there's only one, that one instance would get 100% of that 50%.
So all of the instances in availability zone A would receive 12.5% of the overall load and the instance in availability zone B would receive 50% of the overall load.
So this represents a substantially uneven distribution of the incoming load because of this historic limitation of how load balancer nodes could distribute traffic.
And the fix for that was a feature known as cross zone load balancing.
Now, the name gives away what this does.
It simply allows every load balancer node to distribute any connections that it receives equally across all registered instances in all availability zones.
So in this case, it would mean that the node in availability zone A could distribute connections to the instance in AZB and the node in AZB could distribute connections to instances in AZA.
And this represents a much more even distribution of incoming load.
And this is known as cross zone load balancing, the ability to distribute or load balance across availability zones.
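The arithmetic from this example can be reproduced with a short Python sketch. As in the lesson, it assumes the DNS record splits incoming load evenly across one node per availability zone; the function name is just for illustration.

```python
def per_instance_load(instances_per_az, cross_zone):
    """Return the percentage of total load each instance receives.

    instances_per_az: list of instance counts, one entry per AZ.
    One load balancer node per AZ is assumed, each receiving an
    equal share of the total incoming load via DNS.
    """
    node_share = 100 / len(instances_per_az)
    if cross_zone:
        # Every node spreads its share across ALL registered instances.
        total = sum(instances_per_az)
        each = node_share / total * len(instances_per_az)
        return {az: [each] * n for az, n in enumerate(instances_per_az)}
    # Historic behaviour: a node only targets instances in its own AZ.
    return {az: [node_share / n] * n for az, n in enumerate(instances_per_az)}

# Four instances in AZ-A, one in AZ-B (the example from the lesson).
print(per_instance_load([4, 1], cross_zone=False))
# {0: [12.5, 12.5, 12.5, 12.5], 1: [50.0]}  -> substantially uneven
print(per_instance_load([4, 1], cross_zone=True))
# {0: [20.0, 20.0, 20.0, 20.0], 1: [20.0]}  -> perfectly even
```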
Now, this is a feature which originally was not enabled by default.
But if you're deploying an application load balancer, this comes enabled as standard.
But you still need to be aware of it for the exam because it's often posed as a question where you have a problem, an uneven distribution of load, and you need to fix it by knowing that this feature exists.
So it's really important that you understand it for the exam.
So before we finish up with this lesson, I just want to reconfirm the most important architectural points about elastic load balancers.
If there are only a few things that you take away from this lesson, these are the really important points.
Firstly, when you provision an elastic load balancer, you see it as one device which runs in two or more availability zones, specifically one subnet in each of those availability zones.
But what you're actually creating is one elastic load balancer node in one subnet in each availability zone that that load balancer is configured in.
You're also creating a DNS record for that load balancer which spreads the incoming requests over all of the active nodes for that load balancer.
Now you start with a certain number of nodes, let's say one node per availability zone, but it will scale automatically if additional load is placed on that load balancer.
Remember cross-zone load balancing, which means that nodes can distribute requests across to other availability zones; historically this was disabled by default, meaning connections could be relatively imbalanced.
But for application load balancers, cross-zone load balancing is now enabled by default.
Now load balancers come in two types.
Internet facing, which just means that the nodes are allocated with public IP version 4 addresses.
That's it.
It doesn't change where the load balancer is placed, it just influences the IP addressing for the nodes of that load balancer.
Internal load balancers are the same, only their nodes are only allocated private IP addresses.
Now one of the most important things to remember about load balancers is that an internet facing load balancer can communicate with public instances or private instances.
EC2 instances don't need public IP addressing to work with an internet facing load balancer.
An internet facing load balancer has public IP addresses on its nodes, it can accept connections from the public internet and balance these across both public and private EC2 instances.
That's really important to understand for the exam, so you don't actually need public instances to utilize an internet facing load balancer.
Now load balancers are configured via listener configuration, which as the name suggests controls what those load balancers listen to.
And again, I'll be covering this in much more detail later on in this section of the course.
And then lastly, remember the confusing part about load balancers.
They require eight or more free IP addresses per subnet that they get deployed into.
Strictly speaking, this means that a /28 subnet would be enough, but the AWS documentation suggests a /27 in order to allow scaling.
For now, that's everything that I wanted to cover, so go ahead and complete this lesson.
And then when you're ready, I'll look forward to you joining me in the next.
Welcome back and in this lesson, I want to talk about the architecture of elastic load balancers.
Now I'm going to be covering load balancers extensively in this part of the course.
So I want to use this lesson as a sort of foundation.
I'm going to cover the high level logical and physical architecture of the product and either refresh your memory on some things or introduce some of the finer points of load balancing for the first time.
And both of these are fine.
Now, before we start, it's the job of a load balancer to accept connections from customers and then to distribute those connections across any registered backend compute.
It means that the user is abstracted away from the physical infrastructure.
It means that the amount of infrastructure can change.
So increase or decrease in number without affecting customers.
And because the physical infrastructure is abstracted, it means that infrastructure can fail and be repaired, all of which is hidden from customers.
So with that quick refresher done, let's jump in and get started covering the architecture of elastic load balancers.
Now I'm going to be stepping through some of the key architectural points visually.
So let's start off with a VPC, which uses two availability zones, AZA and AZB.
And then in those availability zones, we've got a few subnets, two public and some private.
Now let's add a user, Bob, together with a pair of load balancers.
Now, as I just mentioned, it's the job of a load balancer to accept connections from a user base and then distribute those connections to backend services.
For this example, we're going to assume that those services are long running compute or EC2, but as you'll see later in this section, that doesn't have to be the case.
Elastic load balancers, specifically application load balancers, support many different types of compute services.
It's not only EC2.
Now, when you provision a load balancer, you have to decide on a few important configuration items.
The first, you need to pick whether you want to use IP version four only or dual stack.
And dual stack just means using IP version four and the newer IP version six.
You also need to pick the availability zones which the load balancer will use, specifically you're picking one subnet in two or more availability zones.
Now, this is really important because this leads in to the architecture of elastic load balancers, so how they actually work.
Based on the subnets that you pick inside availability zones, when you provision a load balancer, the product places into these subnets one or more load balancer nodes.
So what you see as a single load balancer object is actually made up of multiple nodes and these nodes live within the subnets that you pick.
So when you're provisioning a load balancer, you need to select which availability zones it goes into.
And the way you do this is by picking one, and only one, subnet in each of those availability zones.
So in the example that's on screen now, I've picked to use the public subnet in availability zone A and availability zone B and so the product has deployed one or more load balancer nodes into each of those subnets.
Now when a load balancer is created, it actually gets created with a single DNS record.
It's an A record and this A record actually points at all of the elastic load balancer nodes that get created with the product.
So any connections that are made using the DNS name of the load balancer are actually made to the nodes of that load balancer.
The DNS name resolves to all of the individual nodes.
It means that any incoming requests are distributed equally across all of the nodes of the load balancer and these nodes are located in multiple availability zones and they scale within that availability zone.
And so they're highly available.
If one node fails, it's replaced.
If the incoming load to the load balancer increases, then additional nodes are provisioned inside each of the subnets that the load balancer is configured to use.
Now another choice that you need to make when creating a load balancer, and this is really important for the exam, is to decide whether that load balancer should be internet facing or whether it should be internal.
This choice, so whether to use internet facing or internal, controls the IP addressing for the load balancer nodes.
If you pick internet facing, then the nodes of that load balancer are given public addresses and private addresses.
If you pick internal, then the nodes only have private IP addresses.
So that's the only difference.
Otherwise, they're the same architecturally, they have the same nodes and the same load balancer features.
The only difference between internet facing and internal is whether the nodes are allocated public IP addresses.
Now the connections from our customers which arrive at the load balancer nodes, the configuration of how that's handled is done using a listener configuration.
As the name suggests, this configuration controls what the load balancer is listening to.
So what protocols and ports will be accepted at the listener or front side of the load balancer?
Now there's a dedicated lesson coming up later in this section which focuses specifically on the listener configuration.
At this point, I just wanted to introduce it.
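As a rough illustration, a listener configuration boils down to a protocol, a port, and what to do with matching traffic. The dict below loosely mirrors the shape of the elbv2 API, but it's purely illustrative, not a real API call, and the target group ARN is a made-up placeholder.

```python
# Illustrative sketch of what a listener configuration captures:
# the protocol and port accepted at the front side of the load
# balancer, and where matching traffic is forwarded.
listener = {
    "Protocol": "HTTPS",  # what the listener accepts from clients
    "Port": 443,
    "DefaultActions": [
        # Hypothetical target group ARN, for illustration only.
        {"Type": "forward", "TargetGroupArn": "arn:aws:example:targetgroup/web"}
    ],
}

print(listener["Protocol"], listener["Port"])  # HTTPS 443
```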
So at this point, Bob has initiated connections to the DNS name associated with the load balancer.
And that means that he's made connections to load balancer nodes within our architecture.
Now at this point, the load balancer nodes can then make connections to instances that are registered with this load balancer.
And the load balancer doesn't care whether those instances are public EC2 instances, so allocated with a public IP address, or whether they're private EC2 instances.
So instances which reside in a private subnet and only have private addressing.
I want to keep reiterating this because it's often a point of confusion for students who are new to load balancers.
An internet-facing load balancer, and remember this means that it has nodes that have public addresses so it can be connected to from the public internet, it can connect both to public and private EC2 instances.
Instances that are used do not have to be public.
Now this matters because in the exam, when you face certain questions which talk about how many subnets or how many tiers are required for an application, it does test your knowledge that an internet-facing load balancer does not require public instances.
It can work with both public and private instances.
The only requirement is that load balancer nodes can communicate with the back-end instances.
And this can happen whether the instances have allocated public addressing or whether they're private only instances.
The important thing is that if you want a load balancer to be reachable from the public internet, it has to be an internet-facing load balancer because logically it needs to be allocated with public addressing.
Now load balancers in order to function need eight or more free IP addresses in the subnets that they're deployed into.
Now strictly speaking, this means a /28 subnet, which provides a total of 16 IP addresses; minus the five reserved by AWS, this leaves 11 free per subnet.
But AWS suggests that you use a /27 or larger subnet to deploy an elastic load balancer in order that it can scale.
Keep in mind that, strictly speaking, both a /28 and a /27 can be seen as correct in their own ways as the minimum subnet size for a load balancer.
AWS do suggest in their documentation that you need a /27, but they also say you need a minimum of eight free IP addresses.
Now logically, a /28, which leaves 11 free, won't give you the room to deploy a load balancer and back end instances.
So in most cases, I try to remember /27 as the correct value for the minimum for a load balancer.
But if you do see any questions which show a /28 and don't show a /27, then /28 is probably the right answer.
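The subnet arithmetic here is easy to check. The sketch below assumes AWS's standard five reserved addresses per subnet (network, VPC router, DNS, future use, and broadcast).

```python
AWS_RESERVED = 5  # network, router, DNS, future use, broadcast

def free_ips(prefix_length):
    """Usable IPv4 addresses in a subnet after AWS's five reserved ones."""
    return 2 ** (32 - prefix_length) - AWS_RESERVED

print(free_ips(28))  # 11 -> clears the 8-free-IP minimum, but barely
print(free_ips(27))  # 27 -> AWS's documented suggestion, room to scale
```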
Now internal load balancers are architecturally just like internet facing load balancers, except they only have private IPs allocated to their nodes.
And so internal load balancers are generally used to separate different tiers of applications.
So in this example, our user Bob connects via the internet facing load balancer to the web server.
And then this web server can connect to an application server via an internal load balancer.
And this allows us to separate application tiers and allow for independent scaling.
So let's look at this visually.
Okay, so this is the end of part one of this lesson.
It was getting a little bit on the long side and I wanted to give you the opportunity to take a small break, maybe stretch your legs or make a coffee.
Now part two will continue immediately from this point.
So go ahead, complete this video.
And when you're ready, I'll look forward to you joining me in part two.
Welcome back and in this lesson I want to spend a few minutes covering the evolution of the Elastic Load Balancer product.
It's important for the exam and real world usage that you understand its heritage and its current state.
Now this is going to be a super quick lesson because most of the detail I'm going to be covering in dedicated lessons which are coming up next in this section of the course.
So let's jump in and take a look.
Now there are currently three different types of Elastic Load Balancers available within AWS.
If you see the term ELB or Elastic Load Balancers then it refers to the whole family, all three of them.
Now the load balancers are split between version 1 and version 2.
You should avoid using the version 1 load balancer at this point and aim to migrate off it and onto the version 2 products, which should be preferred for any new deployments.
There are no scenarios at this point where you would choose to use a version 1 load balancer versus one of the version 2 types.
Now the load balancer product started with the classic load balancer known as CLB which is the only version 1 load balancer and this was introduced in 2009.
So it's one of the older AWS products.
Now classic load balancers can load balance HTTP and HTTPS as well as lower level protocols but they aren't really layer 7 devices.
They don't really understand HTTP and they can't make decisions based on HTTP protocol features.
They lack much of the advanced functionality of the version 2 load balancers and they can be significantly more expensive to use.
One common limitation is that classic load balancers only support one SSL certificate per load balancer which means for larger deployments you might need hundreds or thousands of classic load balancers and these could be consolidated down to a single version 2 load balancer.
So I can't stress this enough for any questions or any real world situations you should default to not using classic load balancers.
Now this brings me on to the new version 2 load balancers.
The first is the application load balancer or ALB and these are truly layer 7 devices so application layer devices.
They support HTTP, HTTPS and the web socket protocols.
They're generally the type of load balancer that you'd pick for any scenarios which use any of these protocols.
There's also network load balancers or NLBs which are also version 2 devices but these support TCP, TLS which is a secure form of TCP and UDP protocols.
So network load balancers are the type of load balancer that you would pick for any applications which don't use HTTP or HTTPS.
For example if you wanted to load balance email servers or SSH servers or a game which used a custom protocol so didn't use HTTP or HTTPS then you would use a network load balancer.
In general, version 2 load balancers are faster and support target groups and rules, which allow you to use a single load balancer for multiple things, or handle the load balancing differently based on which customers are using it.
Now I'm going to be covering the capabilities of each of the version 2 load balancers separately as well as talking about rules but I wanted to introduce them now as a feature.
Now for the exam you really need to be able to pick between network load balancers or application load balancers for a specific situation so that's what I want to work on over the coming lessons.
For now though, this has just been an introduction lesson that talks about the evolution of these products, and that's everything that I wanted to cover in this lesson. So go ahead and complete the lesson, and when you're ready, I'll look forward to you joining me in the next.
Welcome back.
In this lesson, I want to talk about the regional and global AWS architecture.
So let's jump in and get started.
Now throughout this lesson, I want you to think about an application that you're familiar with, which is global.
And for this example, I'll be talking about Netflix, because this is an application that most people have at least heard of.
Now, Netflix can be thought of as a global application, but it's also a collection of smaller regional applications which make up the Netflix global platform.
So these are discrete blocks of infrastructure which operate independently and duplicated across different regions around the world.
As a solutions architect, when we're designing solutions, I find that there are three main types of architectures.
Small scale architectures which will only ever exist in one region or one country.
Then we have systems which also exist in one region or country, but where there's a DR requirement, so if that region fails for some reason, then it fails over to a second region.
And then lastly, we have systems that operate within multiple regions and need to operate through failure in one or more of those regions.
Now, depending on how you architect systems, there are a few major architectural components which will map on to AWS products and services.
So at a global level, first we have global service location and discovery.
So when you type Netflix.com into your browser, what happens?
How does your machine discover where to point at?
Next, we've got content delivery.
So how does the content or data for an application get to users globally?
Are their pockets of storage distributed globally or is it pulled from a central location?
Lastly, we've got global health checks and failover.
So detecting if infrastructure in one location is healthy or not and moving customers to another country as required.
So these are the global components.
Next, we have regional components starting with the regional entry point.
And then we have regional scaling and regional resilience and then the various application services and components.
So as we go through the rest of the course, we're going to be looking at specific architectures.
And as we do, I want you to think about them in terms of global and regional components, which parts can be used for global resilience and which parts are local only.
So let's take a look at this visually starting with the global elements.
So let's keep using Netflix as an example.
And let's say that we have a group of users who are starting to settle down for the evening and want to watch the latest episode of Ozarks.
So the Netflix client will use DNS for the initial service discovery.
Netflix will have configured the DNS to point at one or more service endpoints.
Let's keep things simple at this point and assume that there is a primary location for Netflix in a US region of AWS, maybe US East One.
And this will be used as the primary location.
And if this fails, then Australia will be used as a secondary.
Now, another valid configuration would be to send customers to their nearest location, in this case, sending our TV fans to Australia.
But in this case, let's just assume we have a primary and a secondary region.
So this is the DNS component of this architecture and Route 53 is the implementation within AWS.
Now, because of its flexibility, it can be configured to work in any number of ways.
The key thing for this global architecture, though, is that it has health checks.
So it can determine if the US region is healthy and direct all sessions to the US while this is the case, or direct sessions to Australia if there are problems with the primary region.
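That failover behaviour can be sketched in a few lines: serve the primary region while its health check passes, otherwise direct sessions to the secondary. The region names below are placeholders for this example, not a real Route 53 configuration.

```python
def resolve_endpoint(primary_healthy,
                     primary="us-east-1", secondary="ap-southeast-2"):
    """Failover routing in miniature: return the primary region while
    its health check passes, otherwise fail over to the secondary.
    """
    return primary if primary_healthy else secondary

print(resolve_endpoint(True))   # us-east-1 (primary healthy)
print(resolve_endpoint(False))  # ap-southeast-2 (failover)
```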
Now, regardless of where infrastructure is located, a content delivery network can be used at the global level.
This ensures that content is cached locally as close to customers as possible, and these cache locations are located globally, and they all pull content from the origin location as required.
So just to pause here briefly, this is a global perspective.
The function of the architecture at this level is to get customers through to a suitable infrastructure location, making sure any regional failures are isolated and sessions moved to alternative regions.
It attempts to direct customers at a local region, at least if the business has multiple locations, and lastly, it attempts to improve caching using a content delivery network such as CloudFront.
If this part of our architecture works well, customers will be directed towards a region that has infrastructure for our application, and let's assume this region is one of the US ones.
At this point, the traffic is entering one specific region of the AWS infrastructure.
Depending on the architecture, this might be entering into a VPC or using public space AWS services, but in either case, now we're architecturally zoomed in, and so we have to think about this architecture now in a regional sense.
The most effective way to think about systems architecture is a collection of regions making up a whole.
If you think about AWS products and services, very few of them are actually global.
Most of them run in a region, and many of those regions make up AWS.
Now it's efficient to think in this way, and it makes designing a large platform much easier.
For the remainder of this course, we're going to be covering architecture in depth, so how things work, how things integrate, and what features products provide.
Now the environments that you will design will generally have different tiers, and tiers in this context are high level groupings of functionality or different zones of your application.
Initially, communications from your customers will generally enter at the web tier.
Generally, this will be a regionally based AWS service such as an application load balancer or API gateway, depending on the architecture that the application uses.
The purpose of the web tier is to act as an entry point for your regional based applications or application components.
It abstracts your customers away from the underlying infrastructure.
It means that the infrastructure behind it can scale or fail or change without impacting customers.
Now the functionality provided to the customer via the web tier is provided by the compute tier, using services such as EC2, Lambda, or containers which use the elastic container service.
So in this example, the load balancer will use EC2 to provide compute services through to our customers.
Now we'll talk throughout the course about the various different types of compute services which you can and should use for a given situation.
The compute tier though will consume storage services, another part of all AWS architectures, and this tier will use services such as EBS, which is the elastic block store, EFS, which is the elastic file system, and even S3 for things like media storage.
You'll also find that many global architectures utilize CloudFront, the global content delivery network within AWS, and CloudFront is capable of using S3 as an origin for media.
So Netflix might store movies and TV shows on S3 and these will be cached by CloudFront.
Now all of these tiers are separate components of an application and can consume services from each other and so CloudFront can directly access S3 in this case to fetch content for delivery to a global audience.
Now in addition to file storage, most environments require data storage and within AWS this is delivered using products like RDS, Aurora, DynamoDB and Redshift for data warehousing.
But in order to improve performance, most applications don't directly access the database.
Instead, they go via a caching layer, so products like ElastiCache for general caching or DynamoDB Accelerator known as DAX when using DynamoDB.
This way, reads to the database can be minimized.
Applications will instead consult the cache first and only if the data isn't present in the cache will the database be consulted and the contents of the cache updated.
Now caching is generally in memory, so it's cheap and fast.
Databases tend to be expensive per unit of data when compared with caching and general-purpose storage.
So where possible, you need to offload reads from the database into the caching layer to improve performance and reduce costs.
Now lastly, AWS have a suite of products designed specifically to provide application services.
So things like Kinesis, Step Functions, SQS and SNS, all of which provide some type of functionality to applications, either simple functionality like email or notifications, or functionality which can change an application's architecture such as when you decouple components using queues.
Now as I mentioned at the start of this lesson, you're going to be learning about all of these components and how you can use them together to build platforms.
For now, just think of this as an introduction lesson.
I want you to get used to thinking of architectures from a global and regional perspective as well as understanding that application architecture is generally built using components from all of these different tiers.
So the web tier, the compute tier, caching, storage, the database tier and application services.
Now at this point, that's all of the theory that I wanted to go through.
Remember, this is just an introduction lesson.
So go ahead, finish this lesson and when you're ready, I'll look forward to you joining me in the next.
-
-
learn.cantrill.io
-
Welcome back and in this video, I want to talk at a high level about the AWS Backup product.
Now, this is something that you need to have an awareness of for most of the AWS exams and to get started in the real world.
But it's not something that you need to understand in depth for all of the AWS certifications.
So let's jump in and cover the important points of the product.
So AWS Backup is a fully managed data protection service.
At this level of study, you can think about it as a backup and restore product, but it also includes much more in the way of auditing and management oversight.
The product allows you to consolidate the management and storage of your backups in one place, across multiple accounts and multiple regions if you configure it that way.
So that's important to understand the product is capable of being configured to operate across multiple accounts.
It utilizes services like Control Tower and AWS Organizations to enable this.
And it's also capable of copying data between regions to provide extra data protection.
But the main day-to-day benefit that the product provides is this consolidation of management and storage within one place.
So instead of having to configure backups of RDS in one place, DynamoDB in another, and organize some kind of script to take regular EBS snapshots, AWS Backup can do all this on your behalf.
Now, AWS Backup is capable of interacting with a wide range of AWS products.
So many AWS services are fully supported, various compute services, so EC2 and VMware running within AWS, block storage such as EBS, file storage products such as EFS and the various different types of FSX, and then most of the AWS database products are supported such as Aurora, RDS, Neptune, DynamoDB and DocumentDB.
And then even object storage is supported using S3.
Now, all of these products can be centrally managed by AWS Backup, which means both the storage and the configuration of how the backup and retention operates.
Now, let's step through some of the key concepts and components of the AWS Backup product.
First, we have one of the central components, and that's backup plans.
It's on these where you configure the frequency of backups, so how often backups are going to occur: every hour, every 12 hours, daily, weekly or monthly.
You can also use a cron expression, which allows snapshots to be created as frequently as hourly.
Now, if you have any business backup experience, you might recognize this.
If you select weekly, you can specify which days of the week you want backups to be taken, and if you specify monthly, you can choose a specific day of the month.
Now, you can also enable continuous backups for certain supported products, and this allows you to use a point-in-time restore feature.
So if you've enabled continuous backups, then you can restore a supported service to a particular point in time within a window.
Now, you can configure the backup window as well within backup plans, so this controls the time that backups begin and the duration of that backup window.
You can configure life cycles, which define when a backup is transitioned to cold storage and when it expires.
When you transition a backup into cold storage, it needs to be stored there for a minimum of 90 days.
Backup plans also set the vault to use, and more on this in a second, and they allow you to configure region copy, so you can copy backups from one region to another.
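To make backup plans concrete, a plan is ultimately just a JSON document. This is a hedged sketch of roughly what one looks like; the names and values here are invented for illustration. It defines a daily rule targeting a vault, with a lifecycle that transitions backups to cold storage after 30 days and expires them after 365.

```
{
  "BackupPlanName": "DailyBackups",
  "Rules": [{
    "RuleName": "DailyRule",
    "TargetBackupVaultName": "ExampleVault",
    "ScheduleExpression": "cron(0 5 * * ? *)",
    "StartWindowMinutes": 60,
    "Lifecycle": {
      "MoveToColdStorageAfterDays": 30,
      "DeleteAfterDays": 365
    }
  }]
}
```

Note the lifecycle respects the 90-day minimum mentioned above: expiry at 365 days is more than 90 days after the cold storage transition at day 30.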
Next, we have backup resources, and these are logically what is being backed up.
So whether you want to back up an S3 bucket or an RDS database, that's what a resource is, what resources you want to back up.
Next, we have vaults, and you can think of vaults as the destination for backups.
It's here where all the backup data is stored, and you need to configure at least one of these.
Now, vaults by default are read and write, meaning that backups can be deleted, but you can also enable AWS Backup Vault Lock, and this is not to be confused with S3 Glacier Vault Lock or S3 Object Lock.
AWS Backup Vault Lock enables a write-once-read-many mode, known as WORM, for the vault.
Once enabled, you get a 72-hour cool-off period, but once fully active, nobody, including AWS, can delete anything from the vault, and this is designed for compliance-style situations.
Now, any data retention periods that you set still apply, so backups can age out, but enabling this means it's not possible to bypass retention or delete anything early.
The product is also capable of performing on-demand backups as required, so you're not limited to only using backup plans.
Some services also support a point-in-time recovery method, and examples of this include S3 and RDS, and this means that you can restore to the state of that resource at a specific date and time within the retention window.
Now, with all of these features, the product is constantly evolving, and rather than have this video be out of date the second something changes, I've attached a few links which detail the current state of many of these features, and I'd encourage you to take a look when you want to understand the product's up-to-date capabilities at the time you're watching this video.
Now, this is all you need to understand as a base foundation for AWS Backup for all of the AWS exams.
If you need additional knowledge, so more theory detail in general, perhaps more specialized deep-dive knowledge on the security elements of the product, or maybe some practical knowledge, then there will be additional videos.
These will only be present if you need this additional knowledge for the particular course that you're studying.
If you only see this video, don't worry, it just means that this is all you need to know.
At this point, though, that is everything I wanted to cover, so go ahead and complete this video, and when you're ready, I'll look forward to you joining me in the next.
-
-
learn.cantrill.io
-
Welcome back and in this demo lesson you're going to experience the difference that EFS can make to our WordPress application architecture.
Now this demo lesson has three main components.
First we're going to deploy some infrastructure automatically using the one-click deployments.
Then I'm going to step through the CloudFormation template and explain exactly how this architecture is built.
And then right at the end you're going to have the opportunity to see exactly what benefits EFS provides.
So to get started make sure that you're currently logged in to the general AWS account, so the management account of the organization, and as always you need to have the Northern Virginia region selected.
Now this lesson actually has two one-click deployments.
The first deploys the base infrastructure and the second deploys a WordPress EC2 instance, which has been enhanced to utilize EFS.
So you need to apply both of these templates in order and wait for the first one to finish before applying the second.
So we're going to start with the base VPC RDS EFS template first.
So this deploys the base VPC, the Elastic File System and an RDS instance.
Now everything should be pre-populated.
The stack should be called EFSDEMO-VPC-RDS-EFS.
Just scroll all the way down to the bottom, check the capabilities box and click on create stack.
While that's going let's switch over to the CloudFormation template and just step through exactly what it does.
So this is the template that you're deploying using the one-click deployment.
It's deploying the Base Animals for Life VPC, an EFS file system as well as mount targets and an Aurora database cluster.
So if we just scroll down we can see all of the VPC and networking resources used by the Base Animals for Life VPC.
Continue scrolling down we'll see the subnets that this VPC contains IP version 6 information.
We'll see an RDS security group, a database subnet group.
We've got the database instance.
Then we've got an instance security group which controls access to all the resources in the VPC that we use that security group on.
Then we have a rule which allows anything with that security group attached to it to communicate with anything else.
We have a role that the WordPress instance will use, and note that this includes permissions on the Elastic File System.
Then we have the instance profile that that instance uses.
Then we have the CloudWatch agent configuration and this is all automated.
And if we just continue scrolling down here we can see the Elastic File System.
So we create an EFS file system and then we create a file system mount target in each application subnet.
So we've got mount target zero, which is in application subnet A, which is in us-east-1a.
We've got mount target one, which is in application subnet B, which logically is in us-east-1b.
And then finally mount target two, which is in application subnet C, which is in us-east-1c.
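In CloudFormation, each of those mount targets is a small resource. This is a trimmed, hypothetical sketch of the kind of definition the template contains; the logical names and references here are illustrative, not copied from the actual template.

```
EFSMountTargetA:
  Type: AWS::EFS::MountTarget
  Properties:
    FileSystemId: !Ref EFSFileSystem      # the file system created above
    SubnetId: !Ref SubnetAppA             # one mount target per app subnet / AZ
    SecurityGroups:
      - !Ref InstanceSecurityGroup        # controls which instances can connect
```

The template repeats this once per application subnet, which is why there are three mount targets, one per availability zone.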
So we create the VPC, the database and the Elastic File System in this first one click deployment.
Now we need this to be in a create complete state before we continue with the demo lessons.
So go ahead and pause the video, wait for this to move into a create complete status and then we can use the second one click deployment.
Okay, that stack's now finished creating, which means we can move on to the second one-click deployment.
Now there are actually two WordPress one-click deployments attached to this lesson.
We're going to use them both, but for now I want you to use the first one.
So go ahead and click on that link; this will create a stack called EFSDEMO-WORDPRESS1.
Everything should be pre-populated just go ahead and click on create stack.
Now this is going to use the infrastructure provided by that first one click deployment.
So it's going to use EFSDEMO-VPC-RDS-EFS, and let's quickly step through exactly what this is doing while it's provisioning.
So this is the CloudFormation template that is being used, and we can skip past most of this.
What I want to focus on is the resource that's being created, so that's WordPress EC2.
So this is using cross-stack references to import a lot of the resources created in that first CloudFormation stack.
So it's importing the instance profile to use, and it's importing the web-A subnet so it knows where to place this instance.
And it's importing the instance security group that was created in that previous CloudFormation stack.
Now in addition to this, if we look through the user data for this WordPress instance, one major difference is that it's mounting the EFS file system into this folder: /var/www/html/wp-content.
Now if you remember from earlier demo lessons, this is the folder which WordPress uses to store its media.
So now instead of this folder being on the local EC2 file system this is now the EFS file system.
The EFS file system is mapped into this folder on this WordPress instance.
Other than that, everything else is the same: WordPress is installed.
It's configured to use the RDS instance, and the cowsay custom login banner is displayed.
It automatically configures the CloudWatch agent, and then it signals CloudFormation that it's finished provisioning this instance.
Now what we'll end up with when this stack has finished creating is an EC2 instance which will use the services provided by this original stack.
So let's just refresh this.
It's still in progress, so go ahead and pause the video, wait for this stack to move into a create complete state, and then we're good to continue.
So this stack's now finished creating. Move across to the EC2 console: click on Services, locate EC2, then right-click and open it in a new tab.
Then click on instances running and you'll see that we have this A4L WordPress instance.
Now if we select that, copy the IP address into your clipboard and then open it in a new tab, we need to perform the WordPress installation.
So go ahead and enter the site title, 'the best cats', and add some exclamation points.
For username we need to use admin; then for the password, go back to the CloudFormation stack, click on Parameters, and we're going to use the DB password.
So copy that into your clipboard, go back, paste it into the password box, put test@test.com for the email address, and click Install WordPress.
Then as before we need to log in: click on Log In, enter admin for the username, re-enter that password, and click on Log In.
Then we need to go to Posts, click on Trash below the 'Hello world!' post to delete it, then click on Add New and close down this dialog.
For the title, put 'the best cats ever' and some exclamation points, then click on the plus, click Gallery, and click Upload.
There's a link attached to this lesson with four cat images, so go ahead and download and extract it, locate those four images, select them, and click on Open.
And then once you've done that, click on Publish, and Publish again, and then click on View Post.
Now what that's doing in the background is adding these images to the wp-content folder on the EC2 instance, but since that folder is now mounted using EFS, the images are being stored on the Elastic File System rather than the local instance file system.
The cat pictures are there, but to validate this, go back to Instances, right-click on the A4L-WordPress instance, click on Connect, and connect to it using EC2 Instance Connect.
Now once we've connected to the instance, type cd /var/www/html and then ls -la to do a full listing, and you'll see that we have this wp-content folder.
So type cd wp-content and press Enter, then clear the screen and do an ls -la; inside this folder we have plugins, themes and uploads. Go into the uploads folder and do an ls -la, and depending on when you do this demo lesson, you should see a folder representing the year. Move into that folder and then into a folder representing the month; again, this will vary depending on when you do the demo lesson.
Move into that folder and you should see all four of my cat images, and if you do a df -k you'll be able to see that this folder, /var/www/html/wp-content, is actually mounted using EFS, so this is an EFS file system.
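Pulled out of the narration, the verification steps above amount to this short command sequence, run on the instance via Instance Connect. The year and month folders vary depending on when you do the demo.

```
cd /var/www/html
ls -la            # shows the wp-content folder
cd wp-content
ls -la            # plugins, themes, uploads
cd uploads
ls -la            # a folder per year; move into year, then month
df -k             # /var/www/html/wp-content is mounted from EFS
```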
Now this means the local instance file system is no longer critical; it no longer stores the actual media that we upload to these posts. So what we can do is go back to CloudFormation, go to Stacks, select the EFSDEMO-WORDPRESS1 stack, click on Delete and delete that stack, and that's going to terminate the EC2 instance that we've just used to upload that media.
We need to wait for that stack to fully delete before continuing, so go ahead and pause the video and wait for this stack to disappear. Now that the stack's disappeared, remember there's a second WordPress one-click deployment link attached to this lesson. Go ahead and click on that second one; it should create a stack called EFSDEMO-WORDPRESS2. Scroll to the bottom and click on Create Stack, and that's going to create a new stack and a new EC2 instance.
So while we're doing this just close down all of these additional tabs at the top of the screen close them all down apart from the cloud formation one.
We're going to need to wait for this to finish provisioning and move into the create complete state, so again pause the video, wait for this to change into create complete, and then we're good to continue.
After a few minutes, the WordPress 2 stack moves into a create complete state. Click on Services, open the EC2 console in a new tab, and click on Instances Running. You'll see a new A4L-WordPress instance; this is a brand-new instance which has been provisioned using the second one-click deployment link that you've just used.
If we select this, copy the public IP address into your clipboard and open it in a new tab, it again loads our WordPress blog, and we can open the blog post.
Now we can see these images because they're being loaded from the file system that EFS provides. So no longer are we limited to operating from a single EC2 instance for our WordPress application, because nothing gets stored specifically on that EC2 instance.
Instead, everything is stored on EFS and accessible from any EC2 instance that we decide to give permissions to. Now, to demonstrate this, go back to CloudFormation.
Remember, attached to this lesson are two WordPress one-click deployments. We initially applied number one, then deleted it and applied number two, so now I want you to reapply number one.
So again, click on the first WordPress one-click deployment. This will create a new stack, this time called EFSDEMO-WORDPRESS1. Click on Create Stack, then wait for this to move into a create complete state; pause the video and resume once the stack changes to create complete. After a few minutes, this stack also moves into create complete.
Let's click on Resources; we can see it's provisioned a single EC2 instance, so let's click on this to move directly to the new instance. Select it, copy the instance's IP address into your clipboard and open it in a new tab, and again we have our WordPress blog; if we click on the post, it loads those images. So now we have two EC2 instances, both with WordPress installed, both using the same RDS database.
And both using the shared file system provided by EFS. It means that if any posts are edited or any images uploaded on either of these two EC2 instances, those updates will be reflected on all other EC2 instances. This means we've now implemented the architecture that's on screen now, and this is what's going to support us when we evolve this architecture further and add scalability in an upcoming section of the course.
For now though, we've just been focused on the shared file system. All that remains at this point is to tidy up the infrastructure that we've used in this demo lesson, so close down all of these tabs; we need to be at the CloudFormation console. We need to start by deleting EFSDEMO-WORDPRESS1 and EFSDEMO-WORDPRESS2, so pick either of those, click Delete and then Delete Stack, then select the other, Delete and then Delete Stack.
Now we need both of these to finish deleting before we can delete the last stack, so go ahead and pause the video and wait for both of these to disappear. Once both have deleted, select the final stack, EFSDEMO-VPC-RDS-EFS, click Delete and then Delete Stack, and that's everything you need to do in this demo lesson. Once that stack's finished deleting, the account will be in the same state as it was at the start.
Now I hope you've enjoyed this demo lesson and that it's been useful. What you've implemented is one more supportive step towards moving this architecture from being a monolith through to being fully elastic.
Now the application is in a state where we have a single shared RDS database for all of our application instances, and we're also using a shared file system provided by EFS. This means we could have one EC2 instance, two, or even 200, all of them sharing the same database and the same shared file system.
Now in an upcoming section of this course we're going to extend this further by creating a launch template which automatically builds EC2 instances as part of this application architecture.
We're going to utilize auto scaling groups together with application load balancers to implement an architecture which is fully elastic and resilient and this has been one more supportive step towards that objective.
At this point though that's everything that you needed to do in this demo lesson so go ahead complete this video and when you're ready I look forward to you joining me in the next.
-
-
learn.cantrill.io
-
Welcome back.
In this lesson, I'm going to be covering a really useful product within AWS, the Elastic File System, or EFS.
It's a product which can prove useful for most AWS projects because it provides network-based file systems which can be mounted within Linux EC2 instances and used by multiple instances at once.
For the Animals for Life WordPress example that we've been using throughout the course so far, it will allow us to store the media for posts outside of the individual EC2 instances, which means that the media isn't lost when instances are added and removed, and that provides significant benefits in terms of scaling as well as self-healing architecture.
In summary, we're moving the EC2 instances to a point where they're closer to being stateless.
So let's jump in and step through the EFS architecture.
The EFS service is an AWS implementation of a fairly common shared storage standard called NFS, the Network File System, specifically version 4 of the Network File System.
With EFS, you create file systems which are the base entity of the product, and these file systems can be mounted within EC2 Linux instances.
Linux uses a tree structure for its file system.
Devices can be mounted into folders in that hierarchy, and an EFS file system, for example, could be mounted into a folder called /nfs/media.
What's more impressive is that EFS file systems can be mounted on many EC2 instances, so the data on those file systems can be shared between lots of EC2 instances.
Now keep this in mind as we talk about evolving the architecture of the Animals for Life WordPress platform.
Remember, it has a limitation that the media for posts, so images, movies, audio, they're all stored on the local instance itself.
If the instance is lost, the media is also lost.
EFS storage exists separately from an EC2 instance, just like EBS exists separately from EC2.
Now EBS is block storage, whereas EFS is file storage, but Linux instances can mount EFS file systems as though they are connected directly to the instance.
EFS is a private service by default.
It's isolated to the VPC that it's provisioned into.
Architecturally, access to EFS file systems is via mount targets, which sit inside subnets within a VPC, but more on this next when we step through the architecture visually.
Now even though EFS is a private service, you can access EFS file systems via hybrid networking methods that we haven't covered yet, so if your VPC is connected to other networks, then EFS can be accessed over those.
So using VPC peering, VPN connections, or AWS Direct Connect, which is a physical private networking connection between a VPC and your existing on-premises networks.
Now don't worry about those hybrid products, I'll be covering all of them in detail later in the course.
For now though, just understand that EFS is accessible outside of a VPC using these hybrid networking products as long as you configure this access.
So let's look at the architecture of EFS visually.
Architecturally, this is how it looks.
EFS runs inside a VPC, in this case the Animals for Life VPC.
Inside EFS, you create file systems and these use POSIX permissions.
If you don't know what this is, I've included a link attached to the lesson which provides more information.
Super summarized though, it's a standard for interoperability which is used in Linux.
So a POSIX permissions file system is something that all Linux distributions will understand.
Now the EFS file system is made available inside a VPC via mount targets and these run from subnets inside the VPC.
The mount targets have IP addresses taken from the IP address range of the subnet that they're inside, and to ensure high availability, you need to put mount targets in multiple availability zones. Just like NAT gateways, for a fully highly available system you need a mount target in every availability zone that the VPC uses.
Now it's these mount targets that instances use to connect to the EFS file systems.
Now it's also possible, as I touched on on the previous screen, that you might have an on-premises network, generally connected to a VPC using hybrid networking products such as VPNs or Direct Connect, and any Linux-based server running in that on-premises environment can use this hybrid networking to connect through to the same mount targets and access EFS file systems.
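In practice, once the amazon-efs-utils package is installed on an instance, mounting a file system via its mount targets is a one-liner using the EFS mount helper. The file system ID and folder below are placeholders for illustration, not values from this course.

```
# Mount an EFS file system (via its mount target in this AZ) with TLS enabled.
# fs-12345678 and /mnt/efs are illustrative placeholders.
sudo mkdir -p /mnt/efs
sudo mount -t efs -o tls fs-12345678:/ /mnt/efs
```

The mount helper resolves the correct mount target for the instance's availability zone, which is why the command only needs the file system ID.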
Now before we move on to a demo where you'll get the practical experience of creating a file system and accessing it from multiple EC2 instances, there are a few things about EFS which you should know for the exam.
First, EFS is for Linux-only instances.
From an official AWS perspective, it's only officially supported using Linux instances.
EFS offers two performance modes, general purpose and max IO.
General purpose is ideal for latency-sensitive use cases: web servers, content management systems, home directories, or even general file serving, as long as you're using Linux instances.
Now general purpose is the default and that's what we'll be using in this section of the course within the demos.
Max I/O can scale to higher levels of aggregate throughput and operations per second, but it has a trade-off of increased latencies.
So Max I/O mode suits applications that are highly parallel.
If you've got highly parallel applications or generic workloads such as big data, media processing or scientific analysis, they can benefit from Max I/O, but for most use cases just go with general purpose.
There are also two different throughput modes, bursting and provisioned.
Bursting mode works like GP2 volumes inside EBS, so it has a burst pool, but the throughput of this mode scales with the size of the file system.
So the more data you store in the file system, the better the performance you get.
With provisioned you can specify throughput requirements separately from size.
So this is like the comparison between GP2 and IO1.
With provisioned you can specify throughput requirements separate from the amount of data you store so that's more flexible but it's not the thing that's used by default.
Generally you should pick bursting.
Now for the exam you don't need to remember the raw numbers but I have linked some in the lesson description if you want additional information.
So you can see the different performance characteristics of all of these different options.
Now Amazon EFS file systems have two storage classes available.
We've got Infrequent Access, or IA, a lower-cost storage class designed for storing things that are infrequently accessed.
So if you need to store data in a cost effective way but you don't intend to access it often then you can use infrequent access.
Next we've got standard and the standard storage class is used to store frequently accessed files.
It's also the default and you should consider it the default when picking between the different storage classes.
Conceptually these mirror the trade-offs of the S3 object storage classes.
Use standard for data which is used day to day and infrequent access for anything which isn't used on a consistent basis.
And just like S3 you have the ability to use life cycle policies which can be used to move data between classes.
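Lifecycle policies are configured per file system. As a hedged example using the AWS CLI (the file system ID here is a placeholder), this moves files that haven't been accessed for 30 days into the Infrequent Access class:

```
aws efs put-lifecycle-configuration \
    --file-system-id fs-12345678 \
    --lifecycle-policies "TransitionToIA=AFTER_30_DAYS"
```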
Okay so that's the theory of EFS.
It's not all that difficult a product to understand but you do need to understand it architecturally for the exam and so to help with that it's now time for a demo.
I want you to really understand how EFS works.
It's something that you probably will use if you use AWS for any real world projects.
Now the best way to understand it is to use it and so that's what we're going to do in the next lesson which is a demo.
You're going to have the opportunity to create an EFS file system, provision some EC2 instances and then mount that file system within both EC2 instances, create a test file and see that that's accessible from both of those instances.
Proving that EFS is a shared network file system.
But at this point that's all of the theory that I wanted to cover so go ahead finish up this video and when you're ready I look forward to you joining me in the demo lesson.
-
-
learn.cantrill.io
-
Welcome back, this is part two of this lesson.
We're going to continue immediately from the end of part one, so let's get started.
Okay, so all three of these mount targets are now in an available state and that means we can connect into this EFS file system from any of the availability zones within the Animals for Life VPC.
So what we need to do is test out this process and we're going to interact with this file system from our EC2 instances.
So move back to the tab where we have the EC2 console open.
And at this point, depending on your browser, I want you to either right-click and duplicate this tab to open another identical copy, or, if your browser can't do that, open a new tab and copy and paste this URL into it.
You'll end up with two separate tabs open to the same EC2 screen.
So on the first tab we're going to connect to A4L-EFS instance A.
So right click and then select connect.
We're going to use instance connect.
So make sure the username is ec2-user and then click on Connect.
Now right now this instance is not connected to this EFS file system, and we can verify that by running df -k and pressing Enter.
You'll see that nowhere here is listed this EFS file system.
These are all volumes directly attached to the EC2 instance and of course the boot volume is provided by EBS.
Now within Linux all devices or all file systems are mounted into a folder.
So the first thing that we need to do to interact with EFS is to create a folder for the EFS file system to be mounted into.
And we can do that using this command: sudo mkdir -p /efs/wp-content.
Now the -p option just means that every folder in this path will be created if it doesn't already exist.
So this will create /efs if it doesn't already exist, and then wp-content inside it.
So press enter to create that folder.
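If you want to see what -p does without touching the instance, here's a quick local sketch; it uses a scratch temp directory rather than the demo's real /efs path, so no sudo is needed:

```shell
# A local illustration of mkdir -p behaviour (scratch paths, not the demo's).
tmp=$(mktemp -d)

# Without -p, creating a nested path fails if the parent is missing;
# with -p, every missing component is created in one go.
mkdir -p "$tmp/efs/wp-content"

# Running it again is harmless: -p also suppresses the
# "File exists" error for paths that already exist.
mkdir -p "$tmp/efs/wp-content"

ls -la "$tmp/efs"
```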
So I'm going to clear the screen to keep this easy to see.
And the next thing I need to do is to install a package of tools which allows this instance or specifically the operating system to interact with the EFS product.
Now the command I'm going to use to install these tools starts with sudo to give us admin permissions, and then dnf, which is the package manager for this operating system.
Then -y to automatically acknowledge any prompts, and then install, because I want to install a package.
And the name of the package that I want to install is amazon-efs-utils, so the full command is sudo dnf -y install amazon-efs-utils.
So this is a set of tools which allows this operating system to interact with EFS.
So go ahead and press enter and that will install these tools and then we can configure the interaction between this operating system and EFS.
Again I'm going to clear the screen to keep this easy to see and I want to mount this EFS file system in that folder that we've just created.
But specifically I want it to mount every time the instance is restarted.
So of course that means we need to add it to the fstab file.
Now if you remember this file from elsewhere in the course, it's contained within the /etc folder.
So we need to move into that folder with cd /etc, and then the file is called fstab.
So we need to run sudo to give us admin permissions, then nano, which is a text editor, and then the name of the file: sudo nano fstab.
So press enter and the file will likely have only one or two lines which is the root and/or boot volume of this instance.
So let's just move to the end, because we're going to add a new line; this is contained within the lesson commands document, but we're going to paste in this line.
So this line tells us that we want to mount this file system, so file-system-id:/.
We want to mount that into this folder, so /efs/wp-content.
We tell it that the file system type is EFS.
Remember EFS is actually based on NFS which is the network file system but this is one provided by AWS as a service and so we use a specific AWS file system which is EFS.
And the support for this has been installed by that tools package which we just installed.
Now the exact functionality of this is beyond the scope of this course but if you do want to research further then go ahead and investigate exactly what these options do.
What we need to do though is to point it at our specific EFS file system.
So this is this component of the line all the way from the start here to this forward/.
So to get the file system ID we need to go back to the EFS console and we need to copy down this full file system ID and yours will be different so make sure you copy your own file system ID into the clipboard.
Then go back here and select the colon and then delete all the way through to the start of this line.
And once you've done that, paste in your file system ID; what it should look like is the file system ID, then a colon, and then a forward slash.
So at this point we need to save this file, so Ctrl+O to save and then enter, and Ctrl+X to exit.
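For reference, the finished line follows the standard fstab layout of device, mount point, filesystem type, options, dump and pass. The file system ID below is a placeholder, and the option list shown is a common amazon-efs-utils example rather than necessarily the lesson's exact set, which comes from the lesson commands document:

```
fs-12345678:/ /efs/wp-content efs _netdev,tls 0 0
```

With a line like this in place, a plain sudo mount /efs/wp-content can resolve everything it needs from fstab.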
Again I'm going to clear the screen to make it easier to see.
Then I'll run df -k, and this is what the file systems currently attached to this instance look like.
Then we're going to mount the EFS file system into the folder that we've created and the way that we do this is with this command.
So sudo mount, and then we specify the name of the folder that we want to mount: sudo mount /efs/wp-content.
Now the way that this works is that this uses what we've just defined in the FSTAB file.
So we're going to mount into this folder whatever file system is defined in that file.
So that's the EFS file system and if we press enter after a few moments it should return back to the prompt and that's mounted that file system.
There we go, we're back at the prompt, and if we run df -k again we'll see that we now have this extra line at the bottom.
So this is the EFS file system mounted into this folder.
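If you ever want to check for the mount in a script rather than by eye, the last column of df -k output can be matched against the mount point. Here's a sketch using captured sample output; the file system ID and the sizes are made up for illustration:

```shell
# is_mounted: succeed if the df output on stdin lists the given mount point.
is_mounted() {
  awk -v mp="$1" '$NF == mp { found = 1 } END { exit !found }'
}

# Sample df -k style output, roughly what the demo instance shows
# after mounting (IDs and numbers are illustrative only).
df_sample='Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 8376300 1567276 6809024 19% /
fs-12345678.efs.us-east-1.amazonaws.com:/ 9007199254739968 0 9007199254739968 0% /efs/wp-content'

# prints: EFS is mounted
printf '%s\n' "$df_sample" | is_mounted /efs/wp-content && echo "EFS is mounted"
```

On a live instance you'd pipe real output instead: df -k | is_mounted /efs/wp-content.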
Now to show you that this is in fact a network file system let's go ahead and move into that folder using this command.
And now that we're in that folder we're going to create a file.
So we're going to use sudo so that we have admin privileges, and then we're going to use the command touch, which if you remember from earlier in the course just creates an empty file.
And we're going to call this file amazingtestfile.txt.
Go ahead and press enter, and then run ls -la and you'll see that we now have this file created within this folder.
And while we're creating it on this EC2 instance it's actually put this file on a network file system.
Now to verify that let's move back to the other tab that we have open to the EC2 console the one that's still on this running instances screen.
And now let's go ahead and connect to instance B.
So right click on instance B select connect again instance connect verify the username is as it should be and click on connect.
So now we're on instance B.
Let's run df -k to verify that we don't currently have any EFS file system mounted.
Next we need to install the EFS tools package so that we can mount this file system.
So let's go ahead and install that package, clear the screen to make it easier to see, and then we need to create the folder that we're going to be mounting this file system into.
We'll use the same command as on instance A.
Then we need to edit the FSTAB file to add this file system configuration.
So we'll do that using this command, sudo nano /etc/fstab, and press enter.
Remember this is instance B so it won't have the line that we added on instance A.
So we need to go down to the bottom, paste in this placeholder, and then we need to replace the file system ID at the start with the actual file system ID.
So delete this, leaving the colon and forward slash, then go back to the EFS console and copy the file system ID into your clipboard.
Move back to this instance and paste that in; everything looks good.
Save that file with Ctrl+O and press enter, exit with Ctrl+X, and then we're back at the prompt; clear the screen.
We'll use the sudo mount /efs/wp-content command again to mount the EFS file system on this instance, and again it uses the configuration that we've just defined in the fstab file, so press enter.
After a few moments you'll be placed back at the prompt, and we can verify whether this is mounted with df -k.
By the looks of things it has mounted; it's there at the bottom.
So now let's move into that folder, so cd /efs/wp-content/ and press enter.
We're now in that folder, and if we do a listing with ls -la, what we'll see is the amazingtestfile.txt file which was created on instance A.
So this proves that this is a shared network file system where any files added on one instance are visible to all other instances.
So EFS is a multi user network based file system that can be mounted on both EC2 Linux instances as well as on premises physical or virtual servers running Linux.
Now this is a simple example of how to use EFS, and at this point we've done everything that we need to do in this demo lesson, so we just need to clean up all of the infrastructure that we've used.
Go back to the EFS console; we're going to delete this file system, which should already be selected, so just select delete. You'll need to confirm that process by pasting in the file system ID.
So go ahead and paste in your file system ID and then select confirm.
Now that can take some time to delete and you'll need to wait for this process to complete.
Once it has completed, we're going to move across to the CloudFormation console.
You should still have this open in a tab; if you don't, just type CloudFormation in the search box at the top and then move to the CloudFormation console.
You should still have the stack named implementing EFS, which is the stack you created at the start with the one-click deployment.
Go ahead and select this stack, then click on delete and confirm that deletion. Once that finishes deleting, that's all of the infrastructure gone that we've created in this demo lesson.
So I hope this has been a fun and enjoyable demo lesson where you've gained some practical experience of working with EFS. At this point though, that's everything that you need to do in this demo lesson.
So go ahead and complete the video and when you're ready I'll look forward to you joining me in the next.
-
Welcome back, and in this demo lesson I want to give you some practical experience of using the Elastic File System, or EFS.
Now we're going to need some infrastructure, so before we provision that, as always, make sure that you're logged in to the general AWS account, so the management account of the organization, and you'll need the Northern Virginia region selected.
Now attached to this lesson is a one-click deployment link so go ahead and click that.
This is going to provision some infrastructure.
It's going to take you to the quick create stack screen and everything should be pre-populated.
You'll just need to scroll to the bottom, check the box beneath capabilities and then click on create stack.
You're also going to be typing some commands within this demo lesson so also attached to this lesson is a lesson commands document.
Go ahead and open that in a new tab.
So this is just a list of the commands that we're going to be using during the demo lesson and there are some placeholders such as file system ID that you'll need to replace as we go but make sure you've got this open for reference.
Now we're going to need this stack to be in a create complete state before we continue with the demo lesson so go ahead pause the video and resume it once your stack moves into a create complete state.
Okay, so the stack has now moved into a create complete status, and what this has actually done is create the Animals for Life base VPC as well as a number of EC2 instances.
So if we go to the EC2 console and click on instances running, you'll note that we've created an A4L-EFS instance A and an A4L-EFS instance B.
We're going to be creating an EFS file system and mount targets, then mounting that file system on both of these instances and interacting with the data stored on it.
This will give you some experience of working with a shared network file system, so let's go ahead and do that.
So to get started we need to move to the EFS console so in the search box at the top just type EFS and then open that in a brand new tab.
We're going to leave this tab open to the instances part of the EC2 console because we're going to come back to this very shortly.
So let's move across to the EFS console that we have open in a separate tab. The first step is to create a file system; a file system is the base entity of the Elastic File System product, and that's what we're going to create.
Now you've got two options for setting up an EFS file system you can use this simple dialogue or you can click on customize to customize it further.
So if we're using the simple dialogue, we'd start by naming the file system, so let's say we use A4L-EFS, and then you'd need to pick a VPC for this file system to be provisioned into, and of course we'd want to select the Animals for Life VPC.
Now we want to customize this further we don't want to just accept these high-level defaults so we need to click on customize.
This is going to move us to this user interface which has many more options so we've still got the A4L - EFS name for this file system.
Now for the storage class we're going to pick standard which means the data is replicated across multiple availability zones.
If you're doing this in a test or development environment or you're storing data which is not important then you can choose to use one zone which stores data redundantly but only within a single AZ.
Now again in this demonstration we are going to be using multiple availability zones so make sure that you pick standard for storage class.
You're able to configure automatic backups of this file system using AWS backup and if you're taking an appropriate certification course this is something which I'll be covering in much more detail.
You can either enable this or disable it obviously for a production usage you'd want to enable it but for this demonstration we're going to disable it.
Now EFS, as I mentioned in the theory lesson, comes with different classes of storage, and you can configure lifecycle management to move files between those classes.
So if you want lifecycle management to move any files not accessed for 30 days into the infrequent access storage class, you can configure that here, and you can also transition files out of infrequent access when they're next accessed, so go ahead and select on first access for transition out of IA.
So in many ways this is like S3 with the different classes of storage for different use cases.
When you're creating a file system you're able to set different performance and throughput modes.
For throughput mode you can choose between bursting and enhanced.
If you pick enhanced you're able to select between elastic and provisioned.
I've talked more about these in the theory lesson.
We're going to pick bursting.
Now for performance you can choose between general purpose and max I/O.
General purpose is the default and rightfully so and you should use this for almost all situations.
Only use max I/O if you want to scale to really high levels of aggregate throughput and input output operations per second so only select it if you absolutely know that you need this option.
You've also got the ability to encrypt the data on the file system and if you do encrypt it it uses KMS and you need to pick a KMS key to use.
Of course this means that in order to interact with objects on this file system permissions are needed both on the EFS service itself as well as the KMS key that's used for the encryption operation.
Now this is something that you will absolutely need to use for production usage but for this demonstration we're going to switch it off.
We won't be setting any tags for this file system so let's go ahead and click on next.
You need to configure the network settings for this file system so specifically the mount targets that will be created to access this file system.
Now best practice is that in any availability zone within a VPC where you're consuming the services provided by EFS, you should be creating a mount target, so in our case that's us-east-1a, us-east-1b and us-east-1c.
So we're going to go through and configure this so first let's delete all of these default security group assignments.
Every mount target that you create will have an associated security group so we'll be setting these specifically.
For now though, we need to choose the application subnet in each of these availability zones, so in the top drop-down, which is us-east-1a, I'm looking for app A, so go ahead and do the same.
In us-east-1b I want to select the app B subnet, and then in us-east-1c, logically, I'll be selecting the app C subnet, so that's app A, app B and app C.
Now for security groups, the CloudFormation one-click deployment has provisioned this instance security group, and by default this security group allows all connections from any entities which have it attached, so this is a really easy way that we can allow our instances to connect to these mount targets.
So for each of these lines, go ahead and select the instance security group; you'll need to do that for each of the mount targets, so we'll do the second one and then the third one, and that's all of the network configuration options that we need to worry about, so click on next.
It's here where you can define any policies on the file system.
You can prevent root access by default, you can enforce read-only access by default, you can prevent anonymous access, or you can enforce encryption in transit for all clients connected to this EFS file system.
So for any clients that connect to the mount targets to access the file system, you can ensure that encryption in transit is used, and if you're using this in production you might want to select at least this last option to improve security.
For this demo lesson we're not going to use any of these policy options, nor are we going to define a custom policy in the policy editor; instead we'll just click on next.
At this point we just need to review that everything's to our satisfaction; everything looks good, so we're going to scroll down to the bottom and just click on create.
Now in order to continue with this demo lesson we're going to need both the file system and all of its mount targets, so go into the file system, click on network, and you'll see three mount targets being created.
All three of these need to be ready before we can continue the demo lesson, so this seems like a great time to end part one.
Go ahead and finish this video, and when all of these mount targets are ready to go you can start part two.
-
Welcome back and in this lesson I want to cover a service which starts to feature more and more on the exam the database migration service known as DMS.
Now this lesson is an extension of my lesson from the Associate Architect course so even if you've taken that course and watched that lesson you should still watch this lesson fully.
Now, as well as being on the exam, if you're working as a Solutions Architect in the AWS space and your projects involve databases, you will extensively use this product, so it's something that you need to be aware of regardless. Let's jump in and get started.
Database migrations are complex things to perform.
Normally, if we exclude the vendor tooling which is available, it's a manual process end to end.
It usually involves setting up replication, which is pretty complex, or it means taking a point-in-time backup and restoring it to the destination database.
But how do you handle changes which occur between taking that backup and when the new database is live?
How do you handle migrations between different databases?
These are all things where DMS comes in handy; it's essentially a managed database migration service.
The concept is simple enough.
It starts with a replication instance which runs on EC2, and this instance runs one or more replication tasks.
You need to define source and destination endpoints, which point at the source and target databases, and the only real restriction with the service is that one of the endpoints must be running within AWS; you can't use the product for migrations between two on-premises databases.
Now you don't actually need to have any experience using the product, but there will be a demo lesson elsewhere in this section which gives you some practical exposure.
For this theory lesson though, we need to focus on the architecture, so let's continue by reviewing that visually.
Using DMS is simple enough architecturally.
You start with a source and target database, and one of those needs to be within AWS.
The databases themselves can use a range of compatible engines such as MySQL, Aurora, Microsoft SQL Server, MariaDB, MongoDB, PostgreSQL, Oracle, Azure SQL and many more.
In between these, conceptually, is the Database Migration Service, known as DMS, which uses a replication instance: essentially an EC2 instance with migration software and the ability to communicate with the DMS service.
On this instance you can define replication tasks, and each replication instance can run multiple replication tasks.
Tasks define all of the options relating to the migration, but architecturally two of the most important things are the source and destination endpoints, which store the replication information so that the replication instance and task can access the source and target databases.
So a task essentially moves data from the source database, using the details in the source endpoint, to the target database, using the details stored in the destination endpoint configuration.
Now the value from DMS comes in how it handles those migrations, and jobs can be one of three types.
First, we have full load migrations, and these are used to migrate existing data.
If you can afford an outage long enough to copy your existing data, then this is a good one to choose; this option simply migrates the data from your source database to your target database and creates the tables as required.
Next we have full load plus CDC, which stands for change data capture, and this migrates existing data and replicates any ongoing changes.
This option performs a full load migration and at the same time captures any changes occurring on the source; after the full load migration is complete, the captured changes are also applied to the target.
Eventually the application of changes reaches a steady state, and at this point you can shut down your applications, let the remaining changes flow through to the target, and then restart your applications pointed at the new target database.
Finally we've got CDC only, and this is designed to replicate only data changes.
In some situations it might be more efficient to copy existing data using a method other than AWS DMS; certain databases such as Oracle have their own export and import tools, and in these cases it might be more efficient to use those tools to migrate the initial data and then use DMS simply to replicate the changes, starting at the point when you did that initial bulk load.
So CDC-only migrations are really effective if you need to bulk transfer the data in some way outside of DMS.
Lastly, DMS doesn't natively support any form of schema conversion, but there is a dedicated tool in AWS known as the Schema Conversion Tool, or SCT, and the sole purpose of this tool is to perform schema modifications or conversions between different database versions or different database engines.
So this is a really powerful tool that often goes hand in hand with migrations which are being performed by DMS.
Now DMS is a great tool for migrating databases from on-premises to AWS; it's a tool that you will get to use for most larger database migrations, so as a solutions architect it's another tool which you need to understand end to end.
In the exam, if you see any form of database migration scenario, as long as one of the databases is within AWS and as long as there are no unusual databases involved which aren't supported by the product, then you can default to using DMS.
It's always a safe default option for any database migration questions, and if the question talks about a no-downtime migration then you absolutely should default to DMS.
Now at this point let's talk in a little bit more detail about a few aspects of DMS which are important.
First I want to talk about the Schema Conversion Tool, or SCT, in a little more detail.
This is actually a standalone application which is only used when converting from one database engine to another.
It can be used as part of migrations where the engines being migrated from and to aren't compatible, and another use case is larger migrations where you need an alternative way of moving data between on-premises and AWS rather than using a data link.
Now SCT is not used, and this is really important, it's not used for movements of data between compatible database engines.
For example, if you're performing a migration from an on-premises MySQL server to an AWS-based RDS MySQL server, then even though the products are different the engines are the same, and so SCT would not be used.
SCT works with OLTP databases such as MySQL, Microsoft SQL Server and Oracle, and also OLAP databases such as Teradata, Oracle, Vertica and even Greenplum.
Examples of the types of situations where the Schema Conversion Tool would be used include things like an on-premises Microsoft SQL Server to AWS RDS MySQL migration, because the engine changes from Microsoft SQL Server to MySQL, and an on-premises Oracle to AWS-based Aurora migration, again because the engines are changing.
Now there is another type of situation where DMS can be used in combination with SCT, and that's for larger migrations.
DMS can often be involved with large-scale database migrations, things which are multiple terabytes in size, and for those types of projects it's often not optimal to transfer the data over the network; it takes time, and it consumes network capacity that might be needed for normal business operations.
So DMS is able to utilize the Snowball range of products, which are available for bulk transfer of data into and out of AWS, and using DMS in combination with Snowball actually uses the Schema Conversion Tool.
So this is how it works.
Step one: you use the Schema Conversion Tool to extract the data from the database, store it locally, and then move this data to a Snowball device which you've ordered from AWS.
Step two: you ship that device back to AWS, they load the data into an S3 bucket, and then DMS migrates from S3 into the target database.
If you decide to use change data capture, then you can also migrate changes made since the initial bulk transfer; these also use S3 as an intermediary before being written to the target database by DMS.
So DMS normally transfers the data over the network, whether that's Direct Connect, a VPN, or even a VPC peer, but if the data volumes that you're migrating are bigger than you can practically transfer over your network link, then you can order a Snowball and use DMS together with SCT to make that transfer much quicker and more effective.
Now the rule to remember for the exam is that SCT is only used for migrations when the engine is changing, and the reason why SCT is used here is because you're actually migrating the database into a generic file format which can be moved using Snowballs.
So this doesn't break the rule of only using SCT when the database engine changes, because you are essentially changing the database: you're changing it from whatever engine the source uses, and you're storing it in a generic file format for transfer through to AWS on a Snowball device.
Now that's everything that I wanted to cover in this lesson, and this has been an extension of the coverage at the associate architect level.
You are going to get the chance to experience this product practically in a demo, but in this lesson I just wanted to cover the theory.
So thanks for watching; go ahead and complete this lesson, and when you're ready I look forward to you joining me in the next.
-
Welcome back and in this video I want to talk about a feature of RDS called RDS proxy.
This is something which is important to know in its own right but it also supports many other architectures involving RDS.
Now we've got a lot to cover so let's jump in and get started.
Before we talk about how RDS proxy works let's step through why you might want to use the product.
First, opening and closing connections to databases takes time and consumes resources.
It's often the bulk of many smaller database operations.
If you only want to read and write a tiny amount, the overhead of establishing a connection can be significant.
This can be especially obvious when using serverless, because if you have a lot of Lambda functions invoking and accessing an RDS database, for example, that's a lot of connections to constantly open and close, especially when you're only billed for the time that you're using compute, as with Lambda.
Now another important element is that handling failure of database instances is hard.
How long should you wait for the connection to work?
What should your application do while waiting?
When should it consider it a failure?
How should it react?
And then how should it handle the failover to the standby instance in the case of RDS?
And doing all of this within your application adds significant overhead and risk.
A database proxy is something that can help, but maybe you don't have any database proxy experience, and even if you do, can you manage proxies at scale?
Well that's where RDS proxy adds value.
At a high level what RDS proxy does or indeed any database proxy is change your architecture.
Instead of your applications connecting to a database every time they use it, they connect to a proxy, and the proxy maintains a pool of connections to the database which are open for the long term.
Then any connections to the proxy can use this already established pool of database connections.
It can actually do multiplexing, where it maintains a smaller number of connections to the database versus the connections to the proxy, and multiplexes requests over the connection pool between the proxy and the database.
So you can have a smaller number of actual connections to the database versus the number of connections to the database proxy.
And this is especially useful for smaller database instances where resources are at a premium.
So in terms of how an architecture might look using RDS proxy let's start with this.
A VPC in US East one with three availability zones and three subnets in each of those availability zones.
In AZB we have a primary RDS instance replicating to a standby running in AZC.
Then we have Categoram, our application, running in the web subnets in the middle here.
The application makes use of some Lambda functions which are configured to use VPC networking and run from the subnet in availability zone B, and so there's a Lambda ENI in that subnet.
Without RDS proxy, the Categoram application servers would be connecting directly to the database every time they needed to access data.
Additionally, every time one of those Lambda functions was invoked, it would need to directly connect to the database, which would significantly increase its running time.
With RDS proxy though things change.
So the proxy is a managed service and it runs only from within a VPC in this case across all availability zones A, B and C.
Now the proxy maintains a long term connection pool in this case to the primary node of the database running in AZB.
These are created and maintained over the long term.
They're not created and terminated based on individual application needs or lambda function invocations.
Our clients in this case the Categoram EC2 instances and lambda functions connect to the RDS proxy rather than directly to the database instances.
Now these connections are quick to establish and place no load on the database server because they're between the clients and the proxy.
Now at this point the connections between the RDS proxy and database instances can be reused.
This means that even if we have constant lambda function invocation they can reuse the same set of long running connections to the database instances.
More so multiplexing is used so that a smaller number of database connections can be used for a larger number of client connections and this helps reduce the load placed on the database server even more.
RDS proxy even helps with database failure or failover events because it abstracts these away from the application.
The clients can connect to the RDS proxy instances and wait, even if the connection to the backend database isn't operational, and this is a situation which might occur during failover events from the primary to the standby.
In the event that there is a failure the RDS proxy can establish new connections to the new primary in the background.
The clients stay connected to the same endpoint, the RDS proxy and they just wait for this to occur.
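To make that concrete, adopting RDS Proxy typically only changes the hostname a client connects to; nothing else about the client changes. Both endpoints below are hypothetical placeholders, and the block just prints the connection commands rather than running them:

```shell
# Hypothetical endpoints: only the hostname differs between the two setups.
DB_ENDPOINT="categoram-db.cabcdefgh.us-east-1.rds.amazonaws.com"
PROXY_ENDPOINT="categoram-proxy.proxy-abcdefgh.us-east-1.rds.amazonaws.com"

# Before: clients track the database endpoint, and during a failover
# they wait for its CNAME to move to the new primary.
echo "mysql -h ${DB_ENDPOINT} -P 3306 -u app_user -p categoram"

# After: clients use the proxy endpoint instead; failover is handled
# behind it and the client-side command never changes.
echo "mysql -h ${PROXY_ENDPOINT} -P 3306 -u app_user -p categoram"
```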
So that's a high level example architecture.
Let's look at when you might want to use RDS Proxy. This is more for the exam, but you need to have an appreciation for the types of scenarios where RDS Proxy will be useful.
So you might decide to use it when you see errors such as "too many connections", because RDS Proxy helps reduce the number of connections to a database. This is especially important if you're using smaller database instances such as T2 or T3, so anything small or anything burst-related.
Additionally, it's useful when using AWS Lambda, because you avoid the per-invocation database connection setup, usage and termination.
It can reuse a long running pool of connections maintained by the RDS proxy and it can also use existing IAM authentication which the Lambda functions have access to via their execution role.
Now RDS Proxy is also useful for long-running applications, such as SaaS apps, where low latency is critical.
So rather than having to establish database connections every time a user interaction occurs they can use this existing long running connection pool.
RDS proxy is also really useful where resilience to database failure is a priority.
Remember your clients connect to the proxy and the proxy connects to the backend databases so it can significantly reduce the time for a failover event and make it completely transparent to your application.
So this is a really important concept to grasp. Your clients connect to the single RDS Proxy endpoint, so even if a failover event happens in the background, instead of having to wait for the database CNAME to move from the primary to the standby, your applications stay transparently connected to the proxy. They don't realize it's a proxy; they think they're connecting to a database.
The proxy though is handling all of the interaction between them and the backend database instances.
Now before we finish up I want to cover some key facts about RDS Proxy. Think of these as the key things that you need to remember for the exam.
So RDS proxy is a fully managed database proxy that's usable with RDS and Aurora.
It's auto scaling and highly available by default so you don't need to worry about it and this represents a much lower admin overhead versus managing a database proxy yourself.
Now it provides connection pooling which significantly reduces database load.
Now this is for two main reasons.
Firstly, we don't have the constant opening and closing of database connections, which puts unnecessary stress on the database. In addition, we can multiplex, using a lower number of connections between the proxy and the database relative to the number of connections between the clients and the proxy.
So this is really important.
Now RDS Proxy is only accessible from within a VPC, so you can't access it from the public internet; access needs to occur from a VPC or from private VPC-connected networks.
Access to the RDS Proxy uses a proxy endpoint, and this is just like a normal database endpoint, completely transparent to the application.
An RDS Proxy can also enforce SSL/TLS connections to ensure the security of your applications, and it can reduce failover time by over 60% in the case of Aurora.
This is somewhere in the region of a 66 to 67% improvement versus connecting to Aurora directly.
Critically, it abstracts the failure of a database away from your application, so an application connected to an RDS Proxy will just wait until the proxy makes a connection to another database instance.
So during a failover event, where we're failing over from the primary to the standby, the RDS Proxy will wait until it can connect to the standby and then just continue fulfilling requests from client connections, abstracting away the underlying database failure.
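The "stay connected and wait" behaviour can be sketched as a simple retry loop. This is an illustrative model, not real RDS Proxy client code; the function, names and timings are invented, and the backoff is shortened so the sketch runs instantly.

```python
import time

# Illustrative model: the client keeps using the single proxy endpoint
# and simply retries until the proxy has re-established its backend
# connection after a failover.
def query_via_proxy(proxy_ready_after, max_attempts=10, backoff=0.01):
    """proxy_ready_after: number of attempts that fail before the
    proxy's backend connection is re-established."""
    for attempt in range(1, max_attempts + 1):
        if attempt > proxy_ready_after:
            return f"success on attempt {attempt}"
        time.sleep(backoff)  # wait, then retry the SAME endpoint
    raise ConnectionError("proxy never recovered")

# Backend failover completes while the client is on attempt 4 - the
# client never had to learn a new endpoint.
outcome = query_via_proxy(proxy_ready_after=3)
```

The point is that the application's retry logic never changes endpoints; the proxy hides the failover entirely.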
Now at this point that is everything I wanted to cover in this high level lesson on RDS proxy so go ahead and complete the video and when you're ready I'll look forward to you joining me in the next.
-
Welcome back and in this lesson I want to talk about an advanced feature of Amazon Aurora: multi-master writes.
This feature allows an Aurora cluster to have multiple instances which are capable of performing both reads and writes.
This is in contrast with the default mode for Aurora which only allows one writer and many readers.
So let's get started and look at the architecture.
So just to refresh where we are: the default Aurora mode is known as single-master, and this equates to one read/write instance, so one database instance that can perform read and write operations, plus zero or more read-only replicas.
Now an Aurora cluster that's running in the default mode of single-master has a number of endpoints which are used to interact with the database.
We've got the cluster endpoint which can be used for read or write operations and then we've got another endpoint, a read endpoint that's used for load balancing reads across any of the read only replicas inside the cluster.
An important consideration with an Aurora cluster running in single-master mode is that failover takes time.
For a failover to occur, a replica needs to be promoted from read-only mode to read/write mode.
In multi master mode all of the instances by default are capable of both read and write operations so there isn't this concept of a lengthy failover if one of the instances fails in a multi master cluster.
At a high level a multi master Aurora cluster might seem similar to a single master one.
The same cluster structure exists the same shared storage.
Multiple Aurora provisioned instances also exist in the cluster.
The differences start, though, with the fact that there is no cluster endpoint to use; an application is responsible for connecting to instances within the cluster.
There's no load balancing across instances with a multi-master cluster; the application connects to one or all of the instances in the cluster and initiates operations directly.
So that's important to understand: there is no concept of a load-balanced endpoint for the cluster, and an application can initiate connections to one or both of the instances inside a multi-master cluster.
Now the way that this architecture works is that when one of the read write nodes inside a multi master cluster receives a write request from the application it immediately proposes that data be committed to all of the storage nodes in that cluster.
So it's proposing that the data that it receives to write is committed to storage.
Now at this point each node that makes up a cluster either confirms or rejects the proposed change.
It rejects it if it conflicts with something that's already in flight for example another change from another application writing to another read write instance inside the cluster.
What the writing instance is looking for is a quorum of nodes to agree, a quorum of nodes that allow it to write that data, at which point it can commit the change to the shared storage.
If the quorum rejects it, then it cancels the change and generates an error to the application.
Now, assuming that it can get a quorum to agree to the write, that write is committed to storage and replicated across every storage node in the cluster, just as it is with a single-master cluster. But, and this is the major difference, with a multi-master cluster that change is then also replicated to the other instances in the cluster.
This means that those other writers can add the updated data into their in-memory caches, so any reads from any other instances in the cluster will be consistent with the data that's stored on shared storage. Because instances cache data, we need to make sure that in addition to committing it to disk, the change is also updated inside any in-memory caches of any other instances within the multi-master cluster.
So that's what this replication does. Once the instance on the right has agreement to commit that change to the cluster shared storage, it replicates that change to the instance on the left; the instance on the left updates its in-memory cache, and then if that instance is used for any read operations, it always has access to the up-to-date data.
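The quorum behaviour described above can be modelled with a short sketch. This is a deliberate simplification for illustration; the real Aurora multi-master commit protocol is more involved, and the node counts and functions here are invented.

```python
# Illustrative model of quorum-style commit: each storage node votes
# on a proposed change, and the writer commits only with a majority.
def propose_write(storage_nodes, change):
    """storage_nodes: callables that approve (True) or reject (False)
    a proposed change, e.g. rejecting when it conflicts with an
    in-flight change from another writer."""
    approvals = sum(node(change) for node in storage_nodes)
    quorum = len(storage_nodes) // 2 + 1
    if approvals >= quorum:
        return "committed"   # replicate to storage and peer caches
    return "error"           # conflict: change cancelled, app gets an error

no_conflict = lambda change: True
conflict = lambda change: False

# Six storage nodes, two with an in-flight conflicting change: 4/6 agree.
first = propose_write([no_conflict] * 4 + [conflict] * 2, {"id": 1})

# Only 2/6 agree, so the quorum rejects the write.
second = propose_write([no_conflict] * 2 + [conflict] * 4, {"id": 1})
```

This mirrors the text: a write either reaches quorum and is committed everywhere, or the writer surfaces an error to the application.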
Now, to understand some of the benefits of multi-master mode, let's look at a single-master failover situation. In this scenario we have an Aurora single-master cluster with one primary instance performing reads and writes, and one replica which is only performing read operations.
Now Bob is using an application, and this application connects to the Aurora cluster using the cluster endpoint, which at this stage points at the primary instance. The cluster endpoint, the one that's used for read and write operations, always points at the primary instance.
If the primary instance fails then access to the cluster is interrupted so immediately we know that this application cannot be fault tolerant because access to the database is now disrupted.
At this point, though, the cluster will realize that there is a failure event and will change the cluster endpoint to point at the replica, which the cluster decides will be the new primary instance. But this failover process takes time. It's quicker than normal RDS, because each replica shares the cluster storage and there can be more replicas, but it still takes time.
The configuration change to make one of the other replicas the new primary instance inside the cluster is not immediate, and it causes disruption.
Now let's contrast this with multi master.
With multi-master, both instances are able to write to the shared storage; they're both writers. The application can connect to one or both of them, and let's assume at this stage that it connects to both.
Both instances are capable of read and write operations. The application could maintain connections to both and be ready to act if one of them fails; when a writer fails, it could immediately send one hundred percent of any future data operations to the writer which is working perfectly, with little, if any, disruption.
If the application is designed in this way, designed to operate through this failure, it could almost be described as fault tolerant. So an Aurora multi-master cluster is one component that is required in order to build a fault-tolerant application. It's not a guarantee, and it's not by itself one hundred percent fault tolerant, but it is the foundation of building a fault-tolerant application, because the application can maintain connections to multiple writers at the same time.
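That application-side failover logic might be sketched like this. It's an illustrative model with invented names, not production Aurora client code; it just shows the application holding connections to both writers and routing around a failure.

```python
# Illustrative model: the app keeps connections to both multi-master
# writers and, when the preferred writer fails, immediately routes all
# operations to the other one.
class Writer:
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy

    def write(self, data):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} wrote {data!r}"

class App:
    def __init__(self, writers):
        self.writers = writers  # connections to all writers, kept open

    def write(self, data):
        # Try the preferred writer first, then fail over in-process.
        for writer in self.writers:
            try:
                return writer.write(data)
            except ConnectionError:
                continue
        raise ConnectionError("no writers available")

blue, green = Writer("writer-blue"), Writer("writer-green")
app = App([blue, green])
first = app.write("post-1")    # served by writer-blue
blue.healthy = False           # simulate a writer failure
second = app.write("post-2")   # transparently served by writer-green
```

Notice there's no cluster-level failover step at all; the "failover" is just the application choosing a different, already-open connection.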
Now, in terms of the high-level benefits, it offers better and much faster availability. Failover events can be handled inside the application, and they don't even need to disrupt traffic between the application and the database, because the application can immediately start sending any write operations to another writer.
It can be used to implement fault tolerance, but the application logic needs to manually load balance across the instances; it's not something that's handled by the cluster. With that being said, that's everything I wanted to cover in this lesson. It's not something I expect to immediately feature in detail on the exam, so we can keep it relatively brief. Go ahead, complete the video, and when you're ready, I'll look forward to you joining me in the next.
-
Welcome back and in this lesson I want to quickly cover the Aurora Global Database Product.
Now the name probably gives away the function, but to avoid any confusion, global databases allow you to create global level replication using Aurora from a master region to up to five secondary AWS regions.
Now this is one of the things which you just need an awareness of for the exam.
I don't expect it to feature heavily, but I want you to be aware of exactly what functionality Aurora Global Database provides.
So, to keep this lesson as brief as possible, let's quickly jump in and look at the architecture first.
So this is a common architecture that you might find when using Aurora Global Databases.
We've got an environment here which operates from two or more regions.
We've got a primary region, us-east-1 in this example, on the left.
This primary region offers similar functionality to a normal Aurora cluster.
It has one read and write instance and up to 15 read only replicas in that cluster.
Global databases introduce the concept of secondary regions; the example that's on screen is ap-southeast-2, which is the Sydney region, on the right of your screen.
And these can have up to 16 replicas.
The entire secondary cluster is read-only during normal operations; so in this example, all 16 replicas would be read-only replicas.
Now the replication from the primary region to secondary regions, that occurs at the storage layer.
And replication is typically within one second from the primary to all of the secondaries.
Applications can use the primary instance in the primary region for write operations, and then the replicas in the primary region, or the replicas in the secondary regions, for read operations.
So that's the architecture.
But what's perhaps more important for the exam is when you would use global databases.
So let's have a look at that next.
Aurora Global Databases are great for cross region disaster recovery and business continuity.
So you can basically create a global database, set up multiple secondary regions.
And then if you do have a disaster which affects an entire AWS region, you can make these secondary clusters act as primary clusters, so they can perform read/write operations.
So it offers a great solution for cross region disaster recovery and business continuity.
And because of the one-second replication time between the primary region and secondary regions, it makes sure that both the RPO and RTO values are going to be really low if you do perform a cross-region failover.
They're also great for global read scaling.
So if you want to offer low latency to any international areas where you have customers, remember low latency generally equates to really good performance.
So if you want to offer low latency performance improvements to international customers, then you can create lots of secondary regions replicated from a primary region.
And then the application can sit in those secondary regions and just perform read operations against the secondary clusters.
And you provide your customers with great performance.
Again, it's important to understand that Aurora Global Databases, the replication occurs at the storage layer.
And it's generally around one second or even less between regions.
So from the primary region to all secondary regions.
It's also important to understand that this is one way replication from the primary to the secondary regions.
It is not bidirectional replication and replication has no impact on database performance because it occurs at the storage layer.
So no additional CPU usage is required to perform the replication tasks.
It happens at the storage layer.
Secondary regions can have 16 replicas.
If you think about Aurora, normally it can have one read and write primary instance and then up to 15 read replicas for a total of 16.
So it makes sense that, because secondary regions don't have this read/write primary instance, all of the replicas inside a secondary region can be read replicas.
So it can have a total of 16 replicas per secondary region.
And all of these can be promoted to read write if you do have any disaster situations.
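As a hedged sketch, promoting a secondary region during a disaster can be done by detaching its cluster from the global database using the RDS `remove_from_global_cluster` API. The identifiers below are hypothetical, and the API call is guarded behind a dry-run flag so the sketch runs without AWS access; a real run would pass a boto3 RDS client.

```python
# Hypothetical identifiers for a global database and the Sydney-region
# secondary cluster we want to promote to standalone read/write.
params = {
    "GlobalClusterIdentifier": "a4l-global",
    "DbClusterIdentifier": (
        "arn:aws:rds:ap-southeast-2:123456789012:cluster:a4l-secondary"
    ),
}

def promote_secondary(rds_client, dry_run=True):
    """Detach the secondary from the global database. Detaching makes
    it a standalone cluster capable of read and write operations."""
    if dry_run:
        return params  # show what would be sent, without calling AWS
    return rds_client.remove_from_global_cluster(**params)

planned = promote_secondary(None)  # dry run: no client needed
```

In a real disaster-recovery runbook this would be followed by repointing the application at the newly promoted cluster's endpoint.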
And currently there is a maximum of five secondary regions.
Though just like most things in AWS, this is likely to change.
Now again, for the exam, I don't expect this particular product to feature extensively, but I do want you to have an awareness so that when it does begin to be mentioned in the exam, or if you need to use it in production, you have a starting point by understanding the architecture.
With that being said, though, that is everything that I wanted to cover in this theory lesson.
So go ahead, complete the video, and when you're ready, I'll look forward to you joining me in the next.
-
Welcome back and in this demo lesson you're going to get some experience of taking a snapshot which you've previously created from Aurora in provisioned mode and restoring it into Aurora running in serverless mode.
Now before we begin as always make sure that you're logged in to the general AWS account, so the management account of the organization and you'll need to have Northern Virginia selected.
Now attached to this lesson is a one-click deployment link and I'll need you to go ahead and open that to start the process.
Now once you've got this open we're going to need the Aurora snapshot name that you created in a previous demo.
So click on the services drop down and locate RDS.
It's probably going to be in the recently visited section if not you can search in the box at the top but go ahead and open that in a new tab.
Go to that tab and once it's loaded click on snapshots and you should have these two snapshots in your account.
The first snapshot is a4lwordpress-with-cat-post-mysql57.
The other snapshot, the one that we're going to use, is a4lwordpress-aurora-with-cat-post.
Go ahead and select that entire snapshot name and copy that into your clipboard because we're restoring an Aurora provisioned snapshot into Aurora serverless.
We're not performing a migration we're performing a restore and so we don't need the snapshot ARN we need the snapshot name.
So go back to the CloudFormation stack. Everything should be pre-populated, but there's a box where you need to paste in the snapshot name that you'll be restoring. Paste that in, check the acknowledgement box at the bottom under capabilities, and then click create stack.
Now that process can take up to 45 minutes to complete sometimes it can be a little bit quicker and while that's working we're going to follow the same process through manually but we're going to stop before provisioning the Aurora serverless cluster.
So go back to the RDS tab, make sure that you still have this snapshot selected, then click on actions and then restore snapshot. I want to step through the options available when restoring an Aurora provisioned snapshot into Aurora Serverless.
So these are the options you'll have when you're restoring an Aurora provisioned snapshot. You'll see a list of compatible engines, so anything compatible with the snapshot that you're restoring; in our case it's only MySQL compatibility. Then you'll have to select your capacity type. It defaults to provisioned, but we want to restore to a serverless cluster, so we'll select serverless.
You need to select the version of Aurora Serverless that you're restoring to, and again it's only going to show you compatible versions, in this case only 2.07.1, and that's why I was so precise with the version numbers when doing the demos earlier in this section.
Now, under database identifier is where we would need to provide a unique identifier, within this region and inside this account, for what we're restoring, so we might use a4lwordpress-serverless. We then need to provide connectivity information, so we'd click in the VPC drop-down and make sure we select the Animals For Life VPC. We'd still need to provide a database subnet group to use; currently there isn't one that exists in the account, because the CloudFormation template is still provisioning, but we'd need to choose a relevant subnet group in this box. We'd also need to choose a VPC security group, which controls access to this database cluster.

Then we have additional configuration, and this is a feature which I'm going to be talking about in a dedicated lesson if you're doing the developer or sysops associate courses. This is an API which can be provisioned to give access to the data within this Aurora Serverless cluster, and it can do so in a way which is very lightweight, which makes it ideal for use with things like serverless applications which prefer a connectionless architecture. So this is something that you'd use if you want to use, for example, Aurora Serverless with a serverless application based on Lambda.

Now, something unique to Aurora Serverless is the concept of capacity units, which I've talked about in the theory lesson on Aurora Serverless. These are the units of database service which the Aurora Serverless cluster can make use of, and you're able to set a minimum capacity unit and a maximum capacity unit. This provides a range of resources that the cluster can move between based on the load placed on it; as I've talked about in the theory lesson, it will automatically provision more capacity or less capacity between these two values based on load.

Now, you have additional options for scaling, and one that I'll be demonstrating a little bit later in this demo lesson is how you can pause the compute capacity after a consecutive number of minutes of inactivity. As long as your application supports it, this can reduce the cost of running a database platform down to almost zero, because you won't have any compute capacity billed when the Aurora Serverless cluster isn't in use; again, I'll be demonstrating that very shortly. You're able to set encryption options, just like with other forms of RDS, and then under additional configuration you can also configure backup options.

Now, these options are obviously based on restoring a snapshot, and you have a similar yet more extensive set of options if you're creating an Aurora Serverless cluster from scratch. If we select Amazon Aurora and then go down and select the serverless capacity type, we can select from different versions and we have a wider range of options that we can set: the cluster identifier, the admin username and password, the capacity settings, the connectivity options, plus additional configuration options around creating a database, controlling the parameter group, customizing backup options, encryption and enabling deletion protection. So whether you're restoring a snapshot or creating an Aurora Serverless cluster from scratch, these options are similar, but you have access to slightly more configuration if you're creating a brand new cluster, because when you're restoring a snapshot many of these configuration items are taken from that snapshot.

At this point we're not going to actually create the cluster manually, so I'm going to cancel out of that and refresh; as you can see, we already have our Aurora Serverless DB cluster and it's in an available state. So let's go back to our CloudFormation stack and refresh. It's still in a create in progress state, and in order to continue with this demo lesson we're going to need it to be in a create complete state; so go ahead, pause the video, wait for your stack to move into a create complete state, and then we can continue.

So this stack's now moved into a create complete state and we're good to continue. The first thing that I want to draw your attention to: if we move back to the RDS console and refresh, you'll see that this cluster is currently using two Aurora capacity units. If we go inside the cluster, we'll be able to see that it's available and currently using two capacity units, but otherwise it looks very similar to a provisioned Aurora cluster.

Now what we're going to do is click on services, open the EC2 console in a new tab and go to instances running; you should see a single WordPress instance. Select that, copy the public IP version 4 address into your clipboard, making sure not to use the open address link, and open that in a new tab. You'll see that it loads the WordPress application, and it still has the post within it that you created in the previous demo lesson, "the best cats ever". If you open this post, you'll see that it doesn't have any of the attached images, because remember, they're not stored in the database; they're stored on the local instance file system, and that's something we're going to rectify in an upcoming section of the course, either called advanced storage or network storage depending on which course you're currently taking. I just wanted to demonstrate that all we've done is restore an Aurora provisioned snapshot into an Aurora Serverless cluster, and it still operates in the same way as Aurora provisioned.

But this is where things change. If we go back to the RDS console, we know that this Aurora Serverless cluster makes use of Aurora capacity units, or ACUs, and currently it's set to move between one and two. The reason it's currently at two is because we've just used it; we've just restored an existing snapshot into this cluster, and that operation comes with a relatively high amount of overhead, so it needs to go to the two capacity unit maximum in order to give us the best performance.

Now, what we should see over the next few minutes, if we just sit here and keep refreshing this screen, is that because we're not using our application, first it will drop down from two capacity units to one, and that will of course reduce the cost of running this Aurora Serverless cluster. After a certain amount of time it's going to go from one capacity unit to zero, because it's going to pause the cluster due to no usage. We've got this configured, if I click on the configuration tab, to pause the compute capacity after a number of consecutive minutes of inactivity, and it's set to five minutes. So after five minutes of no usage on this database, it's going to pause the compute capacity and we won't be incurring any costs for the compute side of this Aurora Serverless cluster. That's one of the real benefits of Aurora Serverless versus all of the other types of RDS database engine.

So let's just go ahead and refresh this and see if it's already changed from two capacity units. It's currently still on two, so let's select logs and events and refresh. We don't see any events currently, so this means we've had no scaling events on this database; but if we click on monitoring, you'll see how the CPU utilization has decreased from around 25% to just over 5%, and the database connection count has reduced from one, when we just accessed the application, back down to zero. After a few refreshes, we'll see that it either decreases from two capacity units down to one, or goes straight to zero if we reach this five-minute timer before it performs the scaling event to reduce from two to one. In our case, we've skipped the point of having one capacity unit; we've reached that five-minute threshold where it pauses the compute capacity, and so it's gone straight down to zero. Your experience might vary: it might go from two down to one and then pause, or it might go from two straight down to zero. In my case, my database is currently running at zero capacity units because this time frame has been reached with no activity and the compute has been paused, so I have no costs for the compute side of Aurora Serverless.

Now, if I go back to the application and do a refresh, you'll see that we don't get a response straight away; there's a pause, and this is because, now that the database cluster is experiencing some incoming load, it's unpausing and resuming the compute part of the cluster, and this isn't an immediate process. So it's important to understand that when you implement an application and use this functionality, the application does need to be able to tolerate lengthier connection times. Sometimes, in the case of WordPress, you will see an error page when you attempt a refresh, because a timeout value within WordPress is reached before the cluster can resume; in the case of this demo lesson that didn't happen, and it was able to resume the cluster straight away. If we go back to the RDS console and refresh this page, we'll be able to see just how many capacity units this cluster is now operating with, and it's operating with two.

Now, in production usage you could be a lot more granular and customize this based on the needs of your application. In my case, the minimum is one, the maximum is two, and the pause time frame is a relatively low five minutes, because I wanted to keep it simple for this demo lesson. In production usage you might have a larger range between minimum and maximum, you might have a higher minimum to be able to cope with a certain level of base load, and the time frame between the last access and the pausing of the compute might be significantly longer than five minutes. But this demonstration lesson is just that, a demo, and it's designed to highlight this at a really high level, so that when it comes to using this in production you understand the architecture.

Now, that's everything that I wanted to cover in this demo lesson; it's been a brief bit of experience of using Aurora Serverless. To tidy up and return the account to the same state as it was at the start of the demo lesson, go ahead and close down all of these tabs, go back to the CloudFormation console, make sure the Aurora Serverless stack is selected, and then click on delete, and then delete stack; that will remove all of those resources, returning the account to the same state as it was at the start of the demo.

Now, this whole section of the course has been about improving the database part of our application. We've moved from having a database running on the same server as the application, we've split that off and moved it into RDS, and we've evolved that from MySQL RDS through to Aurora provisioned and now to Aurora Serverless. We still have one major limitation with our application: for any posts you make on the blog, the media for those posts is stored locally on the instance file system, and that's something we're going to start tackling next in the course, using the Elastic File System product, or EFS. At this point, though, that's everything that I wanted to cover in this demo lesson, so go ahead and complete this video, and when you're ready, I'll look forward to you joining me in the next.
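The console restore walked through in this demo can also be expressed as parameters for the RDS `restore_db_cluster_from_snapshot` API. This is a hedged sketch: the identifiers are hypothetical, the engine value is an assumption for a MySQL-compatible cluster, and the call is guarded behind a dry-run flag so the sketch runs without AWS credentials (a real run would pass a boto3 RDS client).

```python
# Hypothetical restore of an Aurora provisioned snapshot into an
# Aurora Serverless (v1) cluster, mirroring the console options used
# in the demo: min 1 ACU, max 2 ACUs, auto-pause after 5 minutes.
restore_params = {
    "DBClusterIdentifier": "a4lwordpress-serverless",      # hypothetical
    "SnapshotIdentifier": "a4lwordpress-aurora-with-cat-post",
    "Engine": "aurora-mysql",                              # assumption
    "EngineMode": "serverless",
    "ScalingConfiguration": {
        "MinCapacity": 1,             # minimum ACUs
        "MaxCapacity": 2,             # maximum ACUs
        "AutoPause": True,            # pause compute when idle...
        "SecondsUntilAutoPause": 300, # ...after 5 minutes of inactivity
    },
}

def restore_serverless(rds_client, dry_run=True):
    if dry_run:
        return restore_params  # show what would be sent to AWS
    return rds_client.restore_db_cluster_from_snapshot(**restore_params)

cfg = restore_serverless(None)  # dry run: no client needed
```

Note that, just as in the console, many other settings (such as the admin credentials) come from the snapshot itself rather than from the restore parameters.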
-
Welcome back and in this lesson I want to cover Aurora Serverless.
Aurora Serverless is a service which is to Aurora what Fargate is to ECS.
It provides a version of the Aurora database product where you don't need to statically provision database instances of a certain size or worry about managing those database instances.
It's another step closer to a database as a service product.
It removes one more piece of admin overhead, the admin overhead of managing individual database instances.
From now on when you're referring to the Aurora product that we've covered so far in the course you should refer to it as Aurora provisioned versus Aurora Serverless which is what we'll cover in this lesson.
With Aurora Serverless you don't need to provision resources in the same way as you did with Aurora provisioned.
You still create a cluster but Aurora Serverless uses the concept of ACUs or Aurora capacity units.
Capacity units represent a certain amount of compute and a corresponding amount of memory.
For a cluster you can set minimum and maximum values and Aurora Serverless will scale between those values adding or removing capacity based on the load placed on the cluster.
It can even go down to zero and be paused meaning that you're only billed for the storage that the cluster consumes.
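The scaling between the minimum and maximum ACU values, including the pause-to-zero behaviour, can be sketched as a tiny decision function. This is an illustrative model only, not the actual Aurora Serverless scaling algorithm; the parameters and thresholds are invented for the example.

```python
# Illustrative model: choose an ACU count by clamping current demand
# between the configured minimum and maximum, pausing to zero ACUs
# once the cluster has been idle for long enough.
def target_acus(demand_acus, min_acu, max_acu,
                idle_seconds=0, pause_after=300):
    if demand_acus == 0 and idle_seconds >= pause_after:
        return 0  # paused: billed for storage only, not compute
    # Otherwise scale with demand, but stay within [min_acu, max_acu].
    return max(min_acu, min(demand_acus, max_acu))

high = target_acus(8, min_acu=1, max_acu=4)                  # clamped to max
idle = target_acus(0, min_acu=1, max_acu=4)                  # idle, not yet pausable
paused = target_acus(0, min_acu=1, max_acu=4, idle_seconds=600)  # paused
```

The clamp-then-pause shape is the essential idea: capacity tracks load within your configured bounds, and only drops to zero after sustained inactivity.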
Now billing is based on the resources that you use on a per second basis and Aurora Serverless provides the same levels of resilience as you're used to with Aurora provisioned.
So you get cluster storage that's replicated across six storage nodes across multiple availability zones.
Now, some of the high-level benefits of Aurora Serverless. It's much simpler: it removes much of the complexity of managing database instances and capacity. It's easier to scale: it seamlessly scales the compute and memory capacity, in the form of ACUs, as needed, with no disruption to client connections, and you'll see how that works architecturally on the next screen.
It's also cost effective: when you use Aurora Serverless you only pay for the database resources that you consume, on a per-second basis.
This is unlike Aurora provisioned, where you have to provision database instances in advance and you're charged for those resources whether you're utilizing them or not.
The architecture of Aurora Serverless has many similarities with Aurora provisioned but it also has crucial differences so let's review both of those the similarities and the differences.
The Aurora cluster architecture still exists but it's in the form of an Aurora Serverless cluster.
Now this has the same cluster volume architecture which Aurora provisioned uses.
In an Aurora Serverless cluster though instead of using provisioned servers we have ACUs which are Aurora capacity units.
These capacity units are actually allocated from a warm pool of Aurora capacity units which are managed by AWS.
The ACUs are stateless, they're shared across many AWS customers and they have no local storage so they can be allocated to your Aurora Serverless cluster rapidly when required.
Now once these ACUs are allocated to an Aurora Serverless cluster they have access to the cluster storage in the same way that a provisioned Aurora instance would have access to the storage in a provisioned Aurora cluster.
It's the same thing it's just that these ACUs are allocated from a shared pool managed by AWS.
Now if the load on an Aurora Serverless cluster increases beyond the capacity units which are being used and assuming the maximum capacity setting of the cluster allows it then more ACUs will be allocated to the cluster.
And once the compute resource which represents this new potentially bigger ACU is active then any old compute resources representing unused capacity can be deallocated from your Aurora Serverless cluster.
Now, because of the ACU architecture, because the number of ACUs is dynamically increased and decreased based on load, the way that connections are managed within an Aurora Serverless cluster has to be slightly more complex versus a provisioned cluster.
In an Aurora Serverless cluster we have a shared proxy fleet which is managed by AWS.
Now this happens transparently to you as a user of an Aurora Serverless cluster but if a user interacts with the cluster via an application it actually goes via this proxy fleet.
Any of the proxy fleet instances can be used and they will broker a connection between the application and the Aurora capacity units.
Now, this means that the client application never directly connects to the compute resource that provides an ACU; it connects via an instance in this proxy fleet.
Because of that, the scaling can be fluid, scaling in or out without causing any disruption to applications while it's occurring.
So the proxy fleet is managed by AWS on your behalf.
The only things you need to worry about for an Aurora Serverless cluster are picking the minimum and maximum ACU values, and you're only billed for the ACUs that you're using at a particular point in time, as well as for the cluster storage.
So that makes Aurora Serverless really flexible for certain types of use cases.
Now a couple of examples of types of applications which really do suit Aurora Serverless.
The first is infrequently used applications.
Maybe a low-volume blog site, such as Best Cat Pics, where connections are only attempted for a few minutes several times per day, or maybe on really popular days of the week.
With Aurora Serverless if you were using the product to run the best cat pics blog which you'll experience in the demo lesson then you'd only pay for resources for the Aurora Serverless cluster as you consume them on a per second basis.
Another really good use case is new applications if you're deploying an application where you're unsure about the levels of load that will be placed on the application so you're going to be unsure about the size of database instance that you'll need.
With Aurora provisioned you would still need to provision that in advance and potentially change it which could cause disruption.
If you use Aurora Serverless you can create the Aurora Serverless cluster and have the database autoscale based on the incoming load.
It's also really good for variable workloads if you're running a normally lightly used application which has peaks maybe 30 minutes out of an hour or on certain days of the week during sale periods then you can use Aurora Serverless and have it scale in and out based on that demand.
You don't need to provision static capacity based on the peak or average as you would do with Aurora provisioned.
It's also really good for applications with unpredictable workloads. If you're really not sure about the level of workload at a given time of day, you can't predict it and you don't have enough data, then you can provision an Aurora Serverless cluster and initially set a fairly large range of ACUs, so the minimum is fairly low and the maximum is fairly high. Then, over the initial period of using the application, you can monitor the workload, and if it really does stay unpredictable then potentially Aurora Serverless is the perfect database product to use, because if you're using anything else, say an Aurora provisioned cluster, then you always have to have a certain amount of capacity statically provisioned.
With Aurora Serverless you can in theory leave an unpredictable application inside Aurora Serverless constantly and just allow the database to scale in and out based on that unpredictable workload.
It's also great for development and test databases, because Aurora Serverless can be configured to pause itself during periods of no load, and while the database is paused you're only billed for the storage. So if you do have systems which are only used as part of your development and test processes, they can scale back to zero and only incur storage charges during periods when they're not in use, and that's really cost effective for this type of workload.
It's also great for multi-tenant applications. If you've got an application where you're billing a user a set dollar amount per month per license, then your incoming load is directly aligned with your incoming revenue, and it makes perfect sense to use Aurora Serverless. You don't mind if a database supporting your product scales up and costs you more if you also get more customer revenue, so Aurora Serverless makes perfect sense for multi-tenant applications where infrastructure size and incoming revenue are fairly well aligned.
So these are some classic examples of when Aurora Serverless makes perfect sense.
Now, this is a product I don't yet expect to feature extensively on the exam; it will feature more and more as time goes on, and so by learning the architecture at this point you get a head start and can answer any questions which might feature on the exam about Aurora Serverless, comparing it to the other RDS products, which is often just as important. At this point, that's all of the theory and architecture that I wanted to cover, so go ahead and finish up this video, and when you're ready I look forward to joining you in the next lesson.
Welcome back and in this lesson I'm going to be covering the architecture of the Amazon Aurora managed database product from AWS.
I mentioned earlier that Aurora is officially part of RDS but from my perspective I've always viewed it as its own distinct product.
The features that it provides, and the architecture it uses to deliver those features, are so radically different from normal RDS that it needs to be treated as its own product.
So we've got a lot to cover so let's jump in and get started.
As I just mentioned the Aurora architecture is very different from normal RDS.
At its very foundation it uses the base entity of a cluster which is something that other engines within RDS don't have and a cluster is made up of a number of important things.
Firstly from a compute perspective it's made up of a single primary instance and then zero or more replicas.
Now this might seem similar to how RDS works with the primary and the standby replica but it's actually very different.
The replicas within Aurora can be used for reads during normal operations so it's not like the standby replica inside RDS.
The replicas inside Aurora can actually provide the benefits of both RDS multi AZ and RDS read replicas.
So they can be inside a cluster and they can be used to improve availability but also they can be used for read operations during the normal operation of a cluster.
Now that alone would be worth the move to Aurora since you don't have to choose between read scaling and availability.
Replicas inside Aurora can provide both of those benefits.
Now the second major difference in the Aurora architecture is its storage.
Aurora doesn't use local storage for the compute instances.
Instead an Aurora cluster has a shared cluster volume.
This is storage which is shared and available to all compute instances within a cluster.
This provides a few benefits such as faster provisioning, improved availability and better performance.
A typical Aurora cluster looks something like this.
It functions across a number of availability zones in this example A, B and C.
Inside the cluster is a primary instance and optionally a number of replicas.
And again these function as failover options if the primary instance fails.
But they can also be used during normal functioning of the cluster for read operations from applications.
Now, the cluster has shared storage which is SSD based, and it has a maximum size of 128 TiB.
And it also has six replicas across multiple availability zones.
When data is written to the primary DB instance Aurora synchronously replicates that data across all of these six storage nodes spread across the availability zones which are associated with your cluster.
All instances inside your cluster so the primary and all of the replicas have access to all of these storage nodes.
The important thing to understand though from a storage perspective is that this replication happens at the storage level.
So no extra resources are consumed on the instances or the replicas during this replication process.
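The storage-level replication described above can be sketched as a toy model. The class, node names, and two-nodes-per-AZ layout are illustrative assumptions to show the idea of six copies across three availability zones; this is not how the real storage service is implemented.

```python
# Toy model of Aurora's cluster volume: each write from the primary is
# replicated to six storage nodes, two per availability zone. Names and
# structure here are illustrative, not an AWS API.

AZS = ["AZ-A", "AZ-B", "AZ-C"]

class ClusterVolume:
    def __init__(self):
        # two storage nodes in each of three AZs -> six copies of the data
        self.nodes = {f"{az}-node{i}": {} for az in AZS for i in (1, 2)}

    def write(self, key, value):
        # replication happens at the storage layer, so every node
        # receives the write without consuming instance resources
        for node in self.nodes.values():
            node[key] = value

    def read(self, key):
        # any healthy node can serve the read
        return next(iter(self.nodes.values()))[key]

vol = ClusterVolume()
vol.write("row:1", "cat picture metadata")
print(len(vol.nodes))        # 6
print(vol.read("row:1"))     # cat picture metadata
```

The point of the sketch is that the fan-out happens inside the storage layer, which is why the primary and replicas spend no compute resource on it.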
By default the primary instance is the only instance able to write to the storage and the replicas and the primary can perform read operations.
Because Aurora maintains multiple copies of your data across three availability zones, the chances of losing data as a result of any disk-related failure are greatly minimized.
Aurora automatically detects failures in the disk volumes that make up the cluster shared storage.
When a segment or a part of a disk volume fails Aurora immediately repairs that area of disk.
When Aurora repairs that area of disk it uses the data inside the other storage nodes that make up the cluster volume and it automatically recreates that data.
It ensures that the data is brought back into an operational state with no corruption.
As a result, Aurora avoids data loss and reduces any need to perform point-in-time restores or snapshot restores to recover from disk failures.
So the storage subsystem inside Aurora is much more resilient than that which is used by the normal RDS database engines.
Another powerful difference between Aurora and the normal RDS database engines is that with Aurora you can have up to 15 replicas and any of them can be the failover target for a failover operation.
So rather than just having the one primary instance and the one standby replica of the non Aurora engines with Aurora you've got 15 different replicas that you can choose to fail over to.
And that failover operation will be much quicker because it doesn't have to make any storage modifications.
Now as well as the resiliency that the cluster volume provides there are a few other key elements that you should be aware of.
The cluster shared volume is based on SSD storage by default.
So it provides high IOPS and low latency.
It's high performance storage by default.
You don't get the option of using magnetic storage.
Now the billing for that storage is very different than with the normal RDS engines.
With Aurora you don't have to allocate the storage that the cluster uses.
When you create an Aurora cluster you don't specify the amount of storage that's needed.
Storage is simply based on what you consume.
As you store data, up to the 128 TiB limit, you're billed for that consumption.
Now, the way that this consumption works is that it's based on a high watermark.
So if you consume 50 GiB of storage, you're billed for 50 GiB of storage.
If you free up 10 GiB of data, so move down to 40 GiB of consumed data, you're still billed for that high watermark of 50 GiB.
But you can reuse any storage that you free up.
What you're billed for is the high watermark: the maximum storage that you've consumed in the lifetime of the cluster.
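The high-watermark behaviour is easy to express as a tiny function: the billed figure is the running maximum of what you've ever consumed, not what you currently consume. The function name and figures are illustrative.

```python
# Sketch of high-watermark billing: you're billed for the maximum
# storage the cluster has ever consumed, even if you later free some
# of it up (freed space can still be reused without extra charge).

def billed_storage_gib(usage_history_gib):
    """Billed amount is the running maximum of consumed storage."""
    return max(usage_history_gib)

# consume 10, 35, then 50 GiB, free 10 GiB (down to 40) - still billed for 50
print(billed_storage_gib([10, 35, 50, 40]))  # 50
```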
And if you significantly reduce your storage usage and need to reduce storage costs, then you need to create a brand new cluster and migrate the data from the old cluster to the new one.
Now it is worth mentioning that this high watermark architecture is being changed by AWS and this no longer is applicable for the more recent versions of Aurora.
Now I'm going to update this lesson once this feature becomes more widespread but for now you do still need to assume that this high watermark architecture is being used.
Now because the storage is for the cluster and not for the instances it means replicas can be added and removed without requiring storage provisioning or removal which massively improves the speed and efficiency of any replica changes within the cluster.
Having this cluster architecture also changes the access method versus RDS.
Aurora clusters like RDS clusters use endpoints.
So these are DNS addresses which are used to connect to the cluster.
Unlike RDS, Aurora clusters have multiple endpoints that are available for an application.
As a minimum you have the cluster endpoint and the reader endpoint.
The cluster endpoint always points at the primary instance and that's the endpoint that can be used for read and write operations.
The reader endpoint will also point at the primary instance if that's all that there is but if there are replicas then the reader endpoint will load balance across all of the available replicas and this can be used for read operations.
Now this makes it much easier to manage read scaling using Aurora versus RDS because as you add additional replicas which can be used for reads this reader endpoint is automatically updated to load balance across these new replicas.
You can also create custom endpoints and in addition to that each instance so the primary and any of the replicas have their own unique endpoint.
So Aurora allows for a much more custom and complex architecture versus RDS.
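A minimal sketch of how an application might route traffic across the two minimum endpoints described above. The DNS names here are made up for illustration; a real cluster exposes its own cluster and reader endpoint addresses.

```python
# Illustrative routing between the two minimum Aurora endpoints.
# Writes must go to the primary via the cluster endpoint; reads can be
# load balanced across replicas via the reader endpoint. Hostnames are
# hypothetical examples, not real resources.

CLUSTER_ENDPOINT = "mycluster.cluster-abc123.eu-west-1.rds.amazonaws.com"
READER_ENDPOINT = "mycluster.cluster-ro-abc123.eu-west-1.rds.amazonaws.com"

def endpoint_for(operation):
    if operation == "write":
        return CLUSTER_ENDPOINT   # always points at the primary
    return READER_ENDPOINT        # load balances across any replicas

print(endpoint_for("write"))
print(endpoint_for("read"))
```

Because the reader endpoint automatically tracks replicas as they're added or removed, the application-side logic can stay this simple even as the cluster scales.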
So let's move on and talk about costs.
With Aurora one of the biggest downsides is that there isn't actually a free tier option.
You can't use Aurora within the free tier, because Aurora doesn't support the micro instances that are available inside the free tier. But for any instances beyond an RDS single-AZ micro-sized instance, Aurora offers much better value.
For any compute that you use there's an hourly charge, and you're billed per second with a 10-minute minimum.
For storage, you're billed based on a GB-month consumed metric, of course taking into account the high watermark, so this is based on the maximum amount of storage that you've consumed during the lifetime of that cluster. As well as this, there is an I/O cost per request made to the cluster shared storage.
Now in terms of backups you're given 100% of the storage consumption for the cluster in free backup allocation.
So if your database cluster is 100 GiB, then you're given 100 GiB of storage for backups as part of what you pay for that cluster.
So for most situations, for anything low or medium usage, unless you've got a high turnover in data or you keep data for long retention periods, in most cases you'll find that the backup costs are included in the charge that you pay for the database cluster itself.
Now Aurora provides some other really exciting features.
In general though backups in Aurora work in much the same way as they do in RDS.
So for normal backup features, for automatic backups, for manual snapshot backups this all works in the same way as any other RDS engine and restores will create a brand new cluster.
So you've experienced this in the previous demo lesson where you created a brand new RDS instance from a snapshot and this architecture by default doesn't change when you use Aurora.
But you've also got some advanced features which can change the way that you do things.
One of those is backtrack and this is something that needs to be enabled on a per cluster basis and it will allow you to roll back your database to a previous point in time.
So consider the scenario where you've got major corruption inside an Aurora cluster and you can identify the point at which that corruption occurred.
Well rather than having to do a restore to a brand new database at a point in time before that corruption if you enable backtrack you can simply roll back in place your existing Aurora cluster to a point before that corruption occurred.
And that means you don't have to reconfigure your applications you simply allow them to carry on using the same cluster it's just the data is rolled back to a previous state before the corruption occurred.
You need to enable this on a per-cluster basis, and you can adjust the window that backtrack will work for, but this is a really powerful feature that, at the time of creating this lesson, is exclusive to Aurora.
You also have the ability to create what's known as a fast clone and a fast clone allows you to create a brand new database from an existing database but crucially it doesn't make a one-for-one copy of the storage for that database.
What it does is it references the original storage and it only stores any differences between those two.
Now differences can be either you update the storage in your cloned database or it can also be that data is updated in the original database which means that your clone needs a copy of that data before it was changed on the source.
So essentially your cloned database only uses a tiny amount of storage: it only stores data that's changed in the clone, or changed in the original after you make the clone. That means you can create clones much faster than if you had to copy all of the data bit by bit, and it also means that these clones don't consume anywhere near the full amount of storage; they only store the differences between the source data and the clone.
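The copy-on-write idea behind fast clones can be sketched as a toy model. This simplified version only tracks pages changed in the clone; in the real product, changes on the source after cloning also cause the pre-change page to be retained for the clone. Class and page names are illustrative.

```python
# Toy copy-on-write model of an Aurora fast clone: the clone references
# the source volume's pages and only stores pages it has changed itself
# (the real product also retains pre-images of source-side changes).

class FastClone:
    def __init__(self, source):
        self.source = source       # shared reference, not a copy
        self.delta = {}            # pages owned by the clone

    def read(self, page):
        # clone-local pages win; everything else comes from the source
        return self.delta.get(page, self.source.get(page))

    def write(self, page, value):
        self.delta[page] = value   # only changed pages consume storage

source = {"p1": "a", "p2": "b", "p3": "c"}
clone = FastClone(source)
clone.write("p2", "B")

print(clone.read("p1"))      # a - served from the shared source
print(clone.read("p2"))      # B - clone's own copy
print(len(clone.delta))      # 1 - clone stores just one changed page
```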
So I know that's a lot of architecture to remember.
I've tried to quickly step through all of the differences between Aurora and the other RDS engines.
You'll have lessons upcoming later in this section which deep dive into a little bit more depth of specific Aurora features that I think you will need for the exam but in this lesson I just wanted to provide a broad level overview of the differences between Aurora and the other RDS engines.
So in the next demo lesson you're going to get the opportunity to migrate the data for our WordPress application stack from the RDS MariaDB engine into the Aurora engine.
So you'll get some experience of creating an Aurora cluster and interacting with it with some data that you've migrated but at this point that's all of the theory that I wanted to cover.
So go ahead complete this video and when you're ready I'll look forward to you joining me in the next.
Welcome back and in this video I want to talk about a specific feature of RDS called RDS Custom.
Now this is a really niche topic.
I've yet to see it used in the real world and for the exams you really only need to have the most surface level understanding so I'm going to keep this really brief.
So RDS Custom fills the gap between the main RDS product and then EC2 running a database engine.
RDS is a fully managed database server as a service product.
Essentially it gives you access to databases running on a database server which is fully managed by AWS and so any OS or engine access is limited using the main RDS product.
Now databases running on EC2 they're self managed but this has significant overhead because done in this way you're responsible for everything from the operating system upwards.
So RDS Custom bridges this gap: it gives you the ability to occupy a middle ground where you can utilize RDS but still get access to some of the customizations that you'd have when running your own DB engine on EC2.
Now currently RDS Custom works for MS SQL and Oracle and when you're using RDS Custom you can actually connect using SSH, RDP and session manager and actually get access to the operating system and database engine.
Now RDS Custom unlike RDS is actually running within your AWS account.
If you're utilizing normal RDS then if you look in your account you won't see any EC2 instances or EBS volumes or any backups within S3.
That's because they're all occurring within an AWS managed environment.
With RDS the networking works by injecting elastic network interfaces into your VPC.
That's how you get access to the RDS instance from a networking perspective but with RDS Custom everything is running within your AWS account so you will see an EC2 instance, you will see EBS volumes and you will see backups inside your AWS account.
Now if you do need to perform any type of customization of RDS Custom then you need to look at the database automation settings to ensure that you have no disruptions caused by the RDS automation while you're performing customizations.
You need to pause database automation, perform your customizations and then resume the automation so re-enable full automation and this makes sure that the database is ready for production usage.
Now again I'm skipping through a lot of these facts and talking only at a high level because realistically you're probably never going to encounter this in production and if you do have any exposure to it on the exam just knowing that it exists will be enough.
Now from a service model perspective this is how using RDS Custom changes things.
So on this screen anything that you see in blue is customer managed, anything that you see in orange is AWS managed and then anything that has a gradient is a shared responsibility.
So if you're using a database engine running on-premises then you're responsible for everything as the customer.
So application optimization, scaling, high availability, backups, any DB patches, operating system patches, operating system install and management of the hardware.
End-to-end that's your responsibility.
Now if you migrate to using RDS this is how it looks where AWS have responsibility for everything but application optimization.
Now if for whatever reason you can't use RDS then historically your only other option was to use a database engine running on EC2 and this was the model in that configuration.
So AWS handled the hardware but from an operating system installation perspective, operating system patches, database patches, backups, HA, scaling and application optimization they were still the responsibility at the customer.
So you only gained a tiny amount of benefit versus using an on-premises system.
With RDS custom we have this extra option where the hardware is AWS's responsibility, the application optimization is the customer responsibility but everything else is shared between the customer and AWS.
So this gives you some of the benefits of both.
It gives you the ability to use the RDS product and benefit from the automation while at the same time allowing you an increased level of customization and the ability to connect into the instance using SSH, session manager or RDP.
Now once again for the exam this is everything that you'll need to understand it only currently works for Oracle and MS SQL and for the real world you probably won't encounter this outside of very niche scenarios.
With that being said though that is everything I wanted to cover in this video so go ahead and complete the video and when you're ready I'll look forward to you joining me in the next.
Welcome back and in this lesson I want to talk about data security within the RDS product.
I want to focus on four different things.
Authentication, so how users can log into RDS.
Authorization, how access is controlled.
Encryption in transit between clients and RDS.
And then encryption at rest, so how data is protected when it's written to disk.
Now we've got a lot to cover so let's jump in and get started.
With all of the different engines within RDS you can use encryption in transit which means data between the client and the RDS instance is encrypted via SSL or TLS and this can actually be set to mandatory on a per user basis.
Encryption at rest is supported in a few different ways depending on the database engine.
By default it's supported using KMS and EBS encryption.
So this is handled by the RDS host and the underlying EBS based storage.
As far as the RDS database engine knows it's just writing unencrypted data to storage.
The data is encrypted by the host that the RDS instance is running on.
KMS is used and so you select a customer master key or CMK to use.
Either a customer managed CMK or an AWS managed CMK, and then this CMK is used to generate data encryption keys, or DEKs, which are used for the actual encryption operations.
Now when using this type of encryption then the storage, the logs, the snapshots and any replicas are all encrypted using the same customer master key and importantly encryption cannot be removed once it's added.
Now these are features supported as standard with RDS.
In addition to KMS EBS based encryption Microsoft SQL and Oracle support TDE.
Now TDE stands for transparent data encryption and this is encryption which is supported and handled within the database engine.
So data is encrypted and decrypted within the database engine itself not by the host that the instance is running on and this means that there's less trust.
It means that you know data is secure from the moment it's written out to disk by the database engine.
In addition to this RDS Oracle supports TDE using cloud HSM and with this architecture the encryption process is even more secure with even stronger key controls because cloud HSM is managed by you with no key exposure to AWS.
It means that you can implement encryption where there is no trust chain which involves AWS and for many demanding regulatory situations this is really valuable.
Visually this is how the encryption architecture looks.
Architecturally let's say that we have a VPC and inside this a few RDS instances running on a pair of underlying hosts and these instances use EBS for underlying storage.
Now we'll start off with Oracle on the left which uses TDE and so cloud HSM is used for key services because TDE is native and handled by the database engine.
The data is encrypted from the engine all the way through to the storage, and AWS has no exposure, outside of the RDS instance, to the encryption keys which are used.
With KMS-based encryption, KMS generates and allows usage of CMKs, which themselves can be used to generate data encryption keys, known as DEKs.
These data encryption keys are loaded onto the RDS hosts as needed and are used by the host to perform the encryption or decryption operations.
This means the database engine doesn't need to natively support encryption or decryption it has no encryption awareness.
From its perspective it's writing data as normal and it's encrypted by the host before sending it on to EBS in its final encrypted format.
Data that's transferred between replicas as with MySQL in this example is also encrypted as are any snapshots of the RDS EBS volumes and these use the same encryption key.
So that's at rest encryption and there's one more thing that I want to cover before we finish this lesson and that's IAM authentication for RDS.
Normally logins to RDS are controlled using local database users.
These have their own usernames and passwords they're not IAM users and are outside of the control of AWS.
One gets created when you provision an RDS instance but that's it.
Now you can configure RDS to allow IAM user authentication against a database and this is how.
We start with an RDS instance on which we create a local database user account configured to allow authentication using an AWS authentication token.
How this works is that we have IAM users and roles in this case an instance role and attached to those roles and users are policies.
These policies contain a mapping between that IAM entity so the user or role and a local RDS database user.
This allows those identities to run a generate DB auth token operation which works with RDS and IAM and based on the policies attached to the IAM identities it generates a token with a 15-minute validity.
This token can then be used to log in to the database user within RDS without requiring a password.
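The token lifecycle just described can be sketched as a small model. The function names and timestamps here are illustrative; in practice the token comes from boto3's `generate_db_auth_token` or the `aws rds generate-db-auth-token` CLI command, and what's fixed is the 15-minute validity.

```python
# Sketch of the IAM authentication flow: an identity with the right
# policy generates a short-lived token used in place of a password.
# Timestamps are illustrative seconds, not a real clock.

TOKEN_VALIDITY_SECONDS = 15 * 60   # RDS auth tokens last 15 minutes

def generate_auth_token(issued_at):
    # stands in for rds.generate_db_auth_token(hostname, port, db_user)
    return {"issued_at": issued_at,
            "expires_at": issued_at + TOKEN_VALIDITY_SECONDS}

def token_valid(token, now):
    return now < token["expires_at"]

token = generate_auth_token(issued_at=0.0)
print(token_valid(token, now=600.0))    # True  - 10 minutes in
print(token_valid(token, now=1000.0))   # False - past 15 minutes
```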
So this is really important to understand by associating a policy with an IAM user or an IAM role.
It allows either of those two identities to generate an authentication token which can be used to log into RDS instead of a password.
Now one really important thing to understand going into the exam is that this is only authentication.
This is not authorization.
The permissions over the RDS database inside the instance are still controlled by the permissions on the local database user.
So authorization is still handled internally.
This process is only for authentication which involves IAM and only if you specifically enable it on the RDS instance.
Now that's everything I wanted to cover about encryption in transit, encryption at rest as well as RDS IAM based authentication.
So thanks for watching go ahead and complete this video and when you're ready I look forward to you joining me in the next.
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
Now the next thing that I want to demonstrate is how we can restore RDS if we have data corruption.
The way that we're going to simulate this is to go back to our WordPress blog and we're going to corrupt part of this data.
So we're going to change the title of this blog post from "The Best Cats Ever" to "Not The Best Cats Ever", which is clearly untrue.
But we're going to change this and this is going to be our simulation of data corruption of this application.
So go ahead and click on update to update the blog post with this new obviously incorrect data.
Now let's assume that we need to restore this database from an earlier snapshot.
Now let's ignore the automatic backup feature of RDS and just look at manual snapshots.
Well, let's move back to the RDS console and click on snapshots and we'll be able to see the snapshot that we created at the start of this demo lesson.
Remember, this does have the blog post contained within it in its original correct form.
Now to do a restore, we need to select this snapshot, click on actions and then restore snapshot.
Now I mentioned this in the theory lesson about backups and restores within RDS.
Restoring a snapshot actually creates a brand new database instance.
It doesn't restore to the existing one using normal RDS.
So we have to restore a snapshot.
The engine is set to MySQL Community, and we're provided with an entry box for a brand new database identifier.
And we're going to use a4lwordpress-restore.
So this allows us to more easily distinguish between this and the original database instance.
We also need to decide on the deployment option.
So go ahead and select single DB instance.
This is only a demo, so we don't need to select multi-AZ DB instance.
We need to pick the type of instance that we're going to restore to.
And again, because this is a new instance, we're not limited to the previous free tier restrictions.
So we're able to select from any of the available instance types.
So go ahead and select burstable classes and then pick either t2.micro or t3.micro.
We'll leave storage as default.
We'll need to provide the VPC to provision this new database instance into.
So we'll make sure that a4l-vpc1 is selected and we'll use the same subnet group that was created by the one-click deployment, which you used at the start of this demo.
You're allowed to choose between public access yes or no.
We'll choose no.
You'll have to pick a VPC security group to use for this RDS instance.
Now the one-click deployment did create one, so click in the drop-down and select the RDS multi-AZ snap RDS security group.
So not the instance security group, but the RDS security group.
Once you've selected that, remove the default security group, then scroll down.
You can specify database authentication and encryption settings.
And again, if applicable in the course that you're studying, I'll be covering these in a separate lesson.
We'll leave all of that as default and click on restore DB instance.
Now this is going to begin the process of restoring a brand new database instance from that snapshot.
Now the important thing that you need to understand is this is a brand new instance.
We're not restoring the snapshot to the same database instance.
Instead, it's creating a brand new one.
Now when this finishes restoring, when it's available for use, if we want our application to make use of it, and the restored non-corrupted data, then we're going to need to change the application to point at this newly restored database.
So at this point, go ahead and pause the video because for the next step, which is to adjust the WordPress configuration, we need this database to be in an available state.
So pause the video, wait for the status to change from creating all the way through to available, and then we're good to continue.
Okay, so the snapshot restore is now completed and we have a brand new database instance, A4LWordPress-Restore.
And in my case, it took about 10 minutes to perform that restoration.
Now just to reiterate this concept, because it's really important, it features all the time in the exams, and you'll need this if you operate in the real world using AWS.
If we go into the original RDS instance, just pay attention to this endpoint DNS name.
So we have a standard part, which is the region, and then .rds, and then .amazonaws.com.
Before this, though, we have a random-looking part, which represents the name of the database instance together with some random characters.
If we go back to the databases list and then go into the restored version, now we can see that we have A4LWordPress-Restore.
And this is different than that original database endpoint name for the original database.
So the critical thing to understand is that a restore with normal RDS will create a brand new database instance.
It will have a brand new database endpoint DNS name, the CNAME, and you will need to update any application configuration to use this brand new database.
So go ahead and just leave this open in this tab because we'll be needing it very shortly.
Click on Services, find EC2, and open that in a new tab.
So as a reminder, if we go back to the WordPress tab and just hit Refresh, we can see that we still have the corrupt data.
Now what we want to do is point WordPress at the restored correct database.
So to do that, go to the EC2 tab that you just opened, right click on the A4LWordPress instance, select Connect.
We're going to use Instance Connect, so choose that to make sure the username is EC2-user and then connect to the instance.
This process should be familiar by now because we're going to edit the WordPress configuration file.
So cd /var/www/html, then we'll do a listing with ls -la. We want to edit the configuration file, which is wp-config.php, so type sudo nano wp-config.php, where nano is the text editor.
Once we're in this file, just scroll down; we're looking for the DB_HOST configuration, which is here.
Now this DNS name you'll recognize is pointing at the existing database with the corrupt data.
So we need to delete all of this just to leave the two single quotes.
Make sure your cursor's over the second quote.
Go back to the RDS console and we need to locate the DNS name for the A4LWordPress-Restore instance.
Remember this is the one with the correct data.
So copy that into your clipboard, go back to EC2 and paste that in, and then Ctrl+O and Enter to save, and Ctrl+X to exit.
That's all of the configuration changes that we need.
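For repeatable deployments you could script this change rather than hand-editing in nano. Here's a minimal Python sketch of the same idea; the function name and endpoint values are illustrative, not anything from the lesson:

```python
# Conceptual sketch: point WordPress at a restored RDS endpoint by
# rewriting the DB_HOST define in wp-config.php. The endpoint names
# below are placeholders, not real AWS resources.
import re

def set_db_host(config_text, new_endpoint):
    # Replace whatever currently sits between the quotes of DB_HOST.
    return re.sub(
        r"define\(\s*'DB_HOST'\s*,\s*'[^']*'\s*\)",
        "define( 'DB_HOST', '%s' )" % new_endpoint,
        config_text,
    )

sample = "define( 'DB_HOST', 'a4lwordpress.xxxx.us-east-1.rds.amazonaws.com' );"
print(set_db_host(sample, "a4lwordpress-restore.yyyy.us-east-1.rds.amazonaws.com"))
```

In practice you'd read and write the real wp-config.php with appropriate permissions; the point is simply that the only application change needed after a restore is the endpoint string.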
If we go back to the WordPress application and hit refresh, we'll see that it's now showing the correct post, the best cats ever, because we're now pointing at this restored database instance.
So the key part about this demo lesson really is to understand that when you're restoring a normal RDS snapshot, you're restoring it to a brand new database instance, its own instance with its own data and its own DNS endpoint name.
So you have to update your application configuration to point at this new database instance.
With normal RDS, it's not possible to restore in place.
You have to restore to a brand new database instance.
Now this is different with a feature of Aurora which I'll be covering later in this section, but for normal RDS, you have to restore to a brand new instance.
So those are the features which I wanted to demonstrate in this demo lesson.
I wanted to give you a practical understanding of the types of recovery options and resilience options that you have available using the normal RDS version, so MySQL.
Now different versions of RDS such as Microsoft SQL, PostgreSQL, Oracle, and even AWS specific versions such as Aurora and Aurora Serverless, they all have their own collections of features.
For the exam and for most production usage, you just need to be familiar with a small subset of those.
Generally, you'll either be using Oracle, MSSQL, or one of the open source or community versions, so you'll only have to know the feature set of a small subset of the wider RDS product.
So I do recommend experimenting with all of the different features and depending on the course that you're taking, I will be going into much more depth on those specific features elsewhere in this section.
For now though, that is everything that I wanted to talk about, so all that remains is for us to tidy up the infrastructure that we've used in this demo lesson.
So go to databases.
I want you to select the A4LWordPress-Restore instance because we're going to delete it fully.
We're not going to be using this anymore in this section of the course, so select it, click on the Actions drop down, and then select Delete.
Don't create a final snapshot.
We don't need that.
Don't retain automated backups and because we don't choose either of these, we need to acknowledge our understanding of this and type Delete Me into this box.
So do that and then click on Delete.
Now that's going to delete that instance as well as any snapshots created as part of that instance.
So if we go to Snapshots, we only have the one manual snapshot.
If we go to System Snapshots, we can see that we have one snapshot for this Restore database, and if you're deleting a database instance, then any system created snapshots for that database instance will also be deleted either immediately or after the retention period expires.
So those will be automatically cleared up as part of this deletion process.
We're not going to delete the manual snapshot that we created at the very start of this lesson with the catpost in because we're going to be using this elsewhere in the course.
So leave this in place.
Click on Databases again.
We're going to need to wait for this Restored Database instance to finish deleting before we can continue.
So go ahead and pause the video, wait for this to disappear from the list, and then we can continue.
Okay, so that Restored Database instance has completed deleting.
So now all that remains is to move back to the CloudFormation console.
You should still have a tab open.
Select the stack deployed as part of the one-click deployment.
It should be called RDS Multi-AZ Snap.
Select Delete and then confirm that deletion, and that will clear up all of the infrastructure that we've used in this demo lesson.
It will return the account into the same state as it was at the start of this demo with one exception.
And that one exception is the snapshot that we created of the RDS instance as part of this deployment.
So that's everything you need to do in this demo lesson.
I hope you've enjoyed it.
I know it's been a fairly long one where you've been waiting a lot of the time in the demo for things to happen, but it's important for the exam and real-world usage that you get the practical experience of working with all of these different features.
So you should leave this demo lesson with some good experience of the resilience and recovery features available as part of the normal RDS product.
Now at this point, that's everything you need to do, so go ahead and complete this video, and when you're ready, I look forward to you joining me in the next.
-
Welcome back and in this demo lesson we're going to continue implementing this architecture.
So in the previous demo lesson you migrated a database from a self-managed MariaDB running on EC2 into RDS.
In this demo lesson you're going to get the experience working with RDS's multi-availability zone mode as well as creating snapshots, restoring those snapshots and experimenting with RDS failover.
Now in order to complete this demo lesson you're going to need some infrastructure.
So let's move across to our AWS console.
You need to be logged in to the general AWS account.
So that's the management account of the organization and as always make sure that you have the Northern Virginia region selected.
Now attached to this lesson is a one-click deployment link so go ahead and open that.
This will take you to a quick create stack page and everything should be pre-populated and ready to go.
So the stack name is RDS multi-AZ snap.
All of the parameters have default values.
Multi-AZ is currently set to false so leave that at false, check the capabilities box at the bottom and then click on create stack.
Now this infrastructure will take about 15 minutes to apply and we need it to be in a create complete state before we continue.
So go ahead, pause the video and resume it once CloudFormation has moved into a create complete state.
Okay so now that this stack has moved into a create complete state we need to complete the installation of WordPress and add our test blog post because we're going to be using those throughout this demo lesson.
Now this is something that you've done a number of times before so we can speed through this.
So click on the services drop down, move to the EC2 console.
We need to go to running instances and we'll need the public IP version 4 address of A4L-WordPress.
So go ahead and copy the public IP version 4 address into your clipboard.
Don't use the "open address" link; instead, open that IP address directly in a new tab.
We'll be calling the site, as always, "the best cats"; for the username, put admin.
For the password, we'll be using the Animals for Life strong password, and then, as always, test@test.com for the email address.
Enter all of that and click on install WordPress.
Then you'll need to log in: enter admin for the username, enter the password, and click on Log In.
Once we're logged in, go to Posts, click on Trash under "Hello world!" to delete the existing post, and then add a new post.
Close down this dialogue. For the title of the post, enter "the best cats ever", click on the plus, and select Gallery.
At this point go ahead and click the link that's attached to this lesson to download the blog images.
Once downloaded extract that zip file and you'll get four images.
Once you've got those images ready, click on Upload, locate those images, select them, and click on Open, and that will add them to the post.
Once they're fully loaded in, we can go ahead and click on Publish, and then Publish again, and that will publish this post to our blog.
And as a reminder that stores these images on the local instance file system and adds the post metadata to the database and that's now running within RDS.
Now I want to step through a few pieces of functionality of RDS and I want you for a second to imagine that this blog post is actually a production enterprise application.
Maybe a content management system. And I want you to view all of the actions that we perform in this demo lesson through the lens of this being a production application.
So go ahead and return to the AWS console click on services and we're going to move back to RDS.
The first thing that we're going to do is to take a snapshot of this RDS instance.
So just close down any additional dialogues that you see go to databases.
Then I want you to select the database that's been created by the one click deployment link that you used at the start of this demo lesson.
Then select actions and then we're going to take a snapshot.
Now a snapshot is a point in time copy of the database.
When you first do a snapshot it takes a full copy of that database so it consumes all of the capacity of the data that's being used by the RDS instance.
So this initial snapshot is a full snapshot containing all of the data within that database instance.
Now we're going to take a snapshot, and we're going to call it a4lwordpress-with-cat-post-mysql- followed by the version number, without any dots or spaces.
Now depending on when you're watching this video and doing this lesson, you might be using a different version of MySQL.
And so in the lesson description for this lesson I've included the name of the snapshot that you need to use.
So go ahead and check that now and include that in this box.
So that informs us what it is, what it contains and the version number that this snapshot refers to.
So go ahead and enter that and then click on take snapshot and that's going to begin the process of creating this snapshot.
Now the process takes a variable amount of time.
It depends on the speed of AWS on that particular day.
It depends on the amount of data contained within the database and it also depends on whether this is the first snapshot or a subsequent snapshot.
Now the way that snapshots work within AWS is the first snapshot contains a full copy of all of the data of the thing being snapshotted and any subsequent snapshot only contains the blocks of data which have changed from that last previous successful snapshot.
So of course the first snapshot always takes the longest and everything else only takes the amount of time required to copy the changed data.
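The full-then-incremental behaviour can be illustrated with a tiny simulation. This is a conceptual model of what gets copied, not an AWS API:

```python
# Conceptual sketch: the first snapshot copies every block; later
# snapshots copy only blocks changed since the previous snapshot.
def blocks_copied(volume, last_snapshot):
    if last_snapshot is None:
        return len(volume)  # first snapshot: full copy
    # Subsequent snapshots: only the changed blocks.
    return sum(1 for k, v in volume.items() if last_snapshot.get(k) != v)

volume = {i: "data" for i in range(100)}   # pretend 100-block volume
first = blocks_copied(volume, None)        # full copy: all 100 blocks
snap = dict(volume)                        # state captured by snapshot one
volume[3] = "changed"
volume[42] = "changed"
second = blocks_copied(volume, snap)       # incremental: just 2 blocks
print(first, second)
```

This is why the first snapshot takes the longest and subsequent ones usually complete far more quickly, unless a large proportion of the data has changed.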
So if we just give this a few minutes let's keep refreshing.
Mine's still reporting at 0% complete so we need to allow this to complete before we move on.
So go ahead and pause the video and resume it once your snapshot has completed.
And there we go our snapshots now moved into an available status and the progress has completed.
And in my case that took about five minutes to complete from start to finish.
So again just to reiterate this snapshot has been taken.
It's a copy of an RDS MySQL database of a particular version and it contains our WordPress database together with the cat post that we just added.
And that's important to keep in mind as we move on with the demo lesson.
Now you could go ahead and take another snapshot and this one would be much quicker to complete.
It would only contain any data changed between the point that you take it and when you took this previous snapshot.
I'm not going to demonstrate that in this video but you can do that.
And for production usage you may use snapshots in addition to the normal automated backups provided by RDS.
Snapshots that you take manually live past the life cycle of the RDS instance.
And if you want to tidy them up you have to do that manually or by using scripts that you create.
So snapshots that are taken manually are not managed by RDS in any way.
And that's important to understand from a DR and the cost management perspective.
Now the next thing that I want to demonstrate is the multi AZ mode of RDS.
So if we go back to the RDS console just expand this menu and go to databases.
Currently this database is using a single RDS instance.
So this RDS instance is not resilient to the failure of an availability zone within this region.
Now to change that process we can provision a standby replica in another availability zone and that's known as multi AZ.
Now it's worth noting that this is not included within the AWS free tier.
So there will be a small charge to do this optional step to enable multi AZ mode.
Make sure that you have the database instance selected and then click on modify.
Now it's on this screen that we can change a lot of the options which relate to this entire RDS instance.
We've got the option to adjust the database identifier, provide a new database admin password.
We can change the DB instance size or type if we want.
We can adjust the amount of storage available to the database instance, even enable storage auto scaling.
But what we're looking for specifically is adjusting the availability and durability settings.
Currently this is set to do not create a standby instance and we're going to modify this.
We're going to change it to create a standby instance and this is something that's recommended for any production usage.
This creates a standby replica in a different availability zone.
So it picks another availability zone, specifically another subnet that's available within the database subnet group that was created by the one click deployment.
So we're going to set that option and scroll down and then select continue.
Now because we have a maintenance window defined on this RDS instance, we have two different options of when to apply this change.
We can either apply the change during the next scheduled maintenance window.
Remember, this is a definable value that you can set when you create an RDS instance or you modify its settings.
Or we can specify that we want to apply immediately the change that we're making.
And for this demo lesson, that's what we're going to do.
Now it does warn you that any changes could cause a performance impact and even an outage.
So it's really important that if you are applying changes immediately, you understand the impact of those changes.
So make sure that you have apply immediately selected and then click on modify DB instance.
Now a multi AZ deployment is essentially an automatic standby replica in a separate availability zone.
What happens behind the scenes is that the primary database instance is synchronously replicated into this standby replica inside a different availability zone.
Now this provides a few benefits.
It provides data redundancy benefits.
It means that any operations which can interrupt I/O, such as system backups, will occur from the standby replica, so they won't impact the primary database, and that provides a real advantage for production RDS deployments.
But the main reason beyond performance is that it helps protect any databases in the primary instance against failure of an availability zone.
So if the availability zone of the primary instance fails, then the CNAME of the database will be changed to point at the standby replica.
And that will minimize any disruption to your application and its users.
Now if we just hit refresh, we can see the status is modifying and what's happening behind the scenes is AWS are taking a snapshot of the primary DB instance.
It's restoring that snapshot into the standby replica, which is in a different availability zone.
And then it's setting up synchronous replication between the primary and the standby replica.
So this is a process which happens behind the scenes.
But it does mean that we need to wait for this process to finish; until it completes, this is not a multi-AZ deployment.
So go ahead and pause the video and resume it once the status changes from modifying to available; we need the instance in an available state in order to continue with the demo.
Okay, so the status has now changed to available.
And in my case, it took about 10 minutes to enable multi AZ mode.
So that's the provisioning of a standby replica in another availability zone.
Now, the likelihood of an AZ failure happening while I'm recording this demo lesson is relatively small, but we can simulate one.
To do that, select the database instance, click the Actions drop-down and then Reboot, and we can use the option "reboot with failover".
If we choose this option, a simulated failover occurs: the CNAME, the database endpoint, is moved so that it now points at the standby replica, and then the old primary instance is restarted.
So that's what we're going to do to simulate this process.
So go ahead and select to reboot the database instance.
Make sure that you have reboot with failover selected and then click on confirm.
And this will begin the process of rebooting the database instance.
Now, if we go back to the WordPress blog and we click on view post, you'll see that right away it's not immediately loading.
And that's because the failover from the primary to the standby isn't immediate.
Failover times are typically 60 to 120 seconds.
So that's important to keep in mind if you're deploying RDS in a business critical situation.
It doesn't offer immediate failover.
So let's just stop this and hit reload again.
And now we can see that the page is starting to load, because the CNAME for the database has been moved from pointing at the primary to pointing at the standby replica, which is the new primary.
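Because failover moves a CNAME rather than happening instantly, well-behaved applications retry failed connections for at least that 60 to 120 second window instead of hard-failing. Here's a minimal, hypothetical retry sketch; it isn't WordPress's actual behaviour, and the timing values are illustrative:

```python
# Conceptual sketch: retry database connections during an RDS failover
# window instead of failing immediately. Defaults approximate ~120s.
import time

def connect_with_retry(connect, retries=12, delay=10):
    last_error = None
    for _ in range(retries):
        try:
            return connect()
        except ConnectionError as err:
            last_error = err          # remember why we failed
            time.sleep(delay)         # wait for DNS/failover to settle
    raise last_error

# Mock connection that fails twice (mid-failover) then succeeds.
state = {"calls": 0}
def fake_connect():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("DNS still pointing at old primary")
    return "connected"

print(connect_with_retry(fake_connect, delay=0))
```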
Okay, so this is the end of part one of this lesson.
It was getting a little bit on the long side and so I wanted to add a break.
It's an opportunity just to take a rest or grab a coffee.
Part two will be continuing immediately from the end of part one.
So go ahead, complete the video and when you're ready, join me in part two.
-
Welcome back.
In this video I want to talk about RDS read replicas.
Now read replicas provide a few main benefits to us as solutions architects or operational engineers.
They provide performance benefits for read operations, they help us create cross-region failover capability, and they provide a way for RDS to meet really low recovery time objectives, just as long as data corruption isn't involved in a disaster scenario.
Now let's step through the key concepts and architectures because they're going to be useful for both the exam and the real world.
Read replicas, as the name suggests, are read-only replicas of an RDS instance.
Unlike MultiAZ, where you can't by default use the standby replica for anything, you can use read replicas but only for read operations.
Now MultiAZ running in cluster mode, which is the newer version of MultiAZ, is like a combination of the old MultiAZ instance mode together with read replicas.
But, and this is really important, you have to think of read replicas as separate things.
They aren't part of the main database instance in any way.
They have their own database endpoint address and so applications need to be adjusted to use them.
An application, say WordPress, using an RDS instance will have zero knowledge of any read replicas by default.
Without application support, read replicas do nothing.
They aren't functional from a usage perspective.
There's no automatic failover, they just exist off to one side.
Now they're kept in sync using asynchronous replication.
Remember MultiAZ uses synchronous replication and that means that when data is written to the primary instance, at the same time as storing that data on disk on the primary, it's replicated to the standby.
And conceptually think of this as a single write operation, both on the primary and on the standby.
With asynchronous, data is written to the primary first at which point it's viewed as committed.
Then after that it's replicated to the read replicas and this means in theory there could be a small lag, maybe seconds, but it depends on network conditions and how many writes occur on the database.
For the exam, for any RDS questions (excluding Aurora for now), remember that synchronous means Multi-AZ and asynchronous means read replicas.
Read replicas can be created in the same region as the primary database instance or they can be created in other AWS regions known as cross region read replicas.
If you create a cross region read replica, then AWS handle all of the networking between regions and this occurs transparently to you and it's fully encrypted in transit.
Now why do read replicas matter?
Well there are two main areas of importance that I want you to think about.
First is read performance and read scaling for a database instance.
So you can create five direct read replicas per database instance and each of these provides an additional instance of read performance.
So this offers a simple way of scaling out your read performance on a database.
Now read replicas themselves can also have their own read replicas, but this means that lag starts to become a problem, because asynchronous replication is used.
There can be a lag between the main database instance and any read replicas, and if you then create read replicas of read replicas, this lag compounds.
So while you can use multiple levels of read replicas to scale read performance even further, lag becomes even more of a problem.
So you need to take that into consideration.
Additionally read replicas can help you with global performance improvements for read workloads.
So if you have read workloads in other AWS regions then these workloads can directly connect to read replicas and not impact the performance at the primary instance in any way.
In addition read replicas benefit us in terms of recovery point objectives and recovery time objectives.
So snapshots and backups improve RPOs: the more frequently snapshots occur and the better the backups are, the better the recovery point objective, because it limits the amount of data which can be lost. But they don't really help with recovery time objectives, because restoring snapshots takes a long time, especially for large databases.
Now read replicas offer a near zero RPO and that's because the data that's on the read replica is synced from the main database instance.
So there's very little potential for data loss assuming we're not dealing with data corruption.
Read replicas can also be promoted quickly, which gives them a low recovery time objective.
So in a disaster scenario where you have a major problem with your RDS instance, you can promote a read replica, and this is a really quick process. But, and this is really important, you should only look at using read replicas during disaster recovery scenarios when you're recovering from failure.
If you're recovering from data corruption then logically the read replica will probably have a replica of that corrupted data.
So read replicas are great for achieving low RTOs but only for failure and not for data corruption.
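That decision rule is simple enough to state as code. This is purely illustrative, just capturing the failure-versus-corruption distinction from above:

```python
# Conceptual sketch: when can a read replica be used for recovery?
# Failure of the primary: promote the replica (low RTO, near-zero RPO).
# Data corruption: the replica has replicated the corruption too, so
# restore from a snapshot or backup instead.
def can_promote_replica_for_recovery(disaster_type):
    return disaster_type == "failure"

print(can_promote_replica_for_recovery("failure"))      # True
print(can_promote_replica_for_recovery("corruption"))   # False
```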
Now read replicas are read only until they're promoted and when they're promoted you're able to use them as a normal RDS instance.
There's also a really simple way to achieve global availability improvements and global resilience because you can create a cross region read replica in another AWS region and use this as a failover region if AWS ever have a major regional issue.
Now at this point that's everything I wanted to cover about read replicas.
If appropriate for the exam that you're studying I might have another lesson which goes into more technical depth or a demo lesson which allows you to experience this practically.
If you don't see either of these then don't worry they're not required for the exam that you're studying.
At this point though that's everything I'm going to cover so go ahead and complete the video and when you're ready I'll look forward to you joining me in the next.
-
Welcome back and in this video I want to talk about how RDS can be backed up and restored, as well as covering the different methods of backup that we have available.
Now we do have a lot to cover, so let's jump in and get started.
Within RDS there are two types of backup-like functionality.
We have automated backups and we have snapshots.
Now both of these are stored in S3, but they use AWS managed buckets, so they won't be visible to you within your AWS console.
You can see backups in the RDS console, but you can't move to S3 and see any form of RDS bucket, which exists for backups.
Keep this in mind because I've seen questions on it in the exam.
Now the benefit of using S3 is that any data contained in backups is regionally resilient, because it's stored in S3, which replicates data across multiple AWS availability zones within that region.
Now RDS backups, when they do occur, are taken in most cases from the standby instance, if you have multi-AZ enabled.
So while they do cause an I/O pause, this occurs from the standby instance, and so there won't be any application performance issues.
If you don't use multi-AZ, for example with test and development instances, then the backups are taken from the only available instance, so you may have pauses in performance.
Now I want to step through how backups work in a little bit more detail, and I'm going to start with snapshots.
Snapshots aren't automatic.
They're things that you run explicitly or via a script or custom application.
You have to run them against an RDS database instance.
They're stored in S3, which is managed by AWS, and they function like the EBS snapshots that you've covered elsewhere in the course.
Snapshots and automated backups are taken of the instance, which means all the databases within it, rather than just a single database.
The first snapshot is a full copy of the data stored within the instance, and from then on, snapshots only store data which has changed since the last snapshot.
When any snapshot occurs, there is a brief interruption to the flow of data between the compute resource and the storage.
If you're using single AZ, this can impact your application.
If you're using multi AZ, this occurs on the standby, and so won't have any noticeable effect.
Time-wise, the initial snapshot might take a while.
After all, it's a full copy of the data.
From then on, snapshots will be much quicker because only changed data is being stored.
Now the exception to this are instances where there's a lot of data change.
In this type of scenario, snapshots after the initial one can also take significant amounts of time.
Snapshots don't expire.
You have to clear them up yourself.
It means that snapshots live on past when you delete the RDS instance.
Again, they're only deleted when you delete them manually or via some external process.
Remember that one because it matters for the exam.
Now you can run one snapshot per month, one per week, one per day, one per hour.
The choice is yours because they're manual.
And one way that lower recovery point objectives can be met is by taking more frequent snapshots.
The lower the time frame between snapshots, the lower the maximum data loss that can occur when you have a failure.
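To illustrate that relationship, here's a minimal Python sketch (the timestamps are hypothetical) showing that with only snapshots available, the data lost in a failure is everything written since the most recent snapshot:

```python
from datetime import datetime, timedelta

def data_loss(snapshots: list[datetime], failure: datetime) -> timedelta:
    """With only snapshots available, everything written after the most
    recent snapshot taken before the failure is lost."""
    last = max(s for s in snapshots if s <= failure)
    return failure - last

# Hypothetical hourly snapshots on the day of a failure at 14:35.
day = datetime(2024, 11, 1)
hourly = [day + timedelta(hours=h) for h in range(24)]
print(data_loss(hourly, datetime(2024, 11, 1, 14, 35)))  # 0:35:00
```

With daily snapshots instead, the same 14:35 failure would lose 14 hours and 35 minutes of data, which is why more frequent snapshots lower the achievable RPO.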
Now this is assuming we only have snapshots available, but there is another part to RDS backups, and that's automated backups.
These occur once per day, but the architecture is the same.
The first one is a full copy, and any which follow only store changed data.
So far you can think of them as though they're automated snapshots, because that's what they are.
They occur during a backup window which is defined on the instance.
You can allow AWS to pick one at random or use a window which fits your business.
If you're using single AZ, you should make sure that this happens during periods of little to no use, as again there will be an IO pause.
If you're using multi AZ, this isn't a concern as the backup occurs from the standby.
In addition to this automated snapshot, every five minutes, database transaction logs are also written to S3.
Transaction logs store the actual operations which change the data, so operations which are executed on the database.
And together with the snapshots created from the automated backups, this means a database can be restored to a point in time with a five minute granularity.
In theory, this means a five minute recovery point objective can be reached.
Now automated backups aren't retained indefinitely.
They're automatically cleared up by AWS, and for a given RDS instance, you can set a retention period from zero to 35 days.
Zero means automated backups are disabled and the maximum is 35 days.
If you use a value of 35 days, it means that you can restore to any point in time over that 35 day period using the snapshots and transaction logs, but it means that any data older than 35 days is automatically removed.
When you delete the database, you can choose to retain any automated backups, but, and this is critical, they still expire based on the retention period.
The way to maintain the contents of an RDS instance past this 35-day maximum retention period is to create a final snapshot when you delete the instance; this snapshot is fully under your control and has to be deleted manually when no longer required.
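As a quick sketch of the retention rules just described (dates are hypothetical), whether a point in time is restorable depends on the retention period configured on the instance, with zero meaning automated backups are disabled entirely:

```python
from datetime import datetime, timedelta

def restorable(target: datetime, now: datetime, retention_days: int) -> bool:
    """Point-in-time restore only works within the retention window.
    A retention period of 0 disables automated backups entirely."""
    if retention_days == 0:
        return False
    return now - timedelta(days=retention_days) <= target <= now

now = datetime(2024, 11, 30)
print(restorable(datetime(2024, 11, 10), now, 35))  # True  - inside the window
print(restorable(datetime(2024, 10, 1), now, 35))   # False - older than 35 days
print(restorable(datetime(2024, 11, 29), now, 0))   # False - backups disabled
```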
Now RDS also allows you to replicate backups to another AWS region, and by backups, I mean both snapshots and transaction logs.
Now charges apply for both the cross region data copy and any storage used in the destination region, and I want to stress this really strongly.
This is not the default.
This has to be configured within automated backups.
You have to explicitly enable it.
Now let's talk a little bit about restores.
The way RDS handles restores is really important, and it's not immediately intuitive.
It creates a new RDS instance when you restore an automated backup or a manual snapshot.
Why this matters is that you will need to update applications to use the new database end point address because it will be different than the existing one.
When you restore a manual snapshot, you're restoring the database to a single point in time.
It's fixed to the time that the snapshot was created, which means it influences the RPO.
Unless you created a snapshot right before a failure, chances are the RPO is going to be suboptimal.
Automated backups are different.
With these, you can choose a specific point to restore the database to, and this offers substantial improvements to RPO.
You can choose to restore to a time which was minutes before a failure.
The way that it works is that backups are restored from the closest snapshot, and then transaction logs are replayed from that point onwards, all the way through to your chosen time.
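The restore process just described can be sketched in Python (the timestamps are hypothetical): pick the closest backup at or before the target time, then replay the transaction-log chunks written after it, up to the chosen point:

```python
from datetime import datetime, timedelta

def plan_restore(snapshots, log_chunks, target):
    """Pick the closest snapshot at or before the target time, then
    select the transaction-log chunks that must be replayed to roll
    the database forward to that exact point."""
    base = max(s for s in snapshots if s <= target)
    replay = [t for t in log_chunks if base < t <= target]
    return base, replay

day = datetime(2024, 11, 1)
snaps = [day + timedelta(days=d) for d in range(3)]              # daily automated backups
logs = [day + timedelta(minutes=5 * i) for i in range(1, 600)]   # logs written every 5 min
base, replay = plan_restore(snaps, logs, datetime(2024, 11, 2, 0, 20))
print(base)         # 2024-11-02 00:00:00
print(len(replay))  # 4 chunks: 00:05, 00:10, 00:15 and 00:20
```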
What's important to understand though is that restoring snapshots isn't a fast process.
If appropriate for the exam that you're studying, I'm going to include a demo where you'll get the chance to experience this yourself practically.
It can take a significant amount of time to restore a large database, so keep this in mind when you think about disaster recovery and business continuity.
The RDS restore time has to be taken into consideration.
Now in another video elsewhere in this course, I'm going to be covering read replicas, and these offer a way to significantly improve RPO if you want to recover from failure.
So RDS automated backups are great as a recovery to failure, or as a restoration method for any data corruption, but they take time to perform a restore, so account for this within your RTO planning.
Now once again, if appropriate for the exam that you're studying, you're going to get the chance to experience a restore in a demo lesson elsewhere in the course, which should reinforce the knowledge that you've gained within this theory video.
If you don't see this then don't worry, it's not required for the exam that you're studying.
At this point though, that is everything I wanted to cover in this video, so go ahead and complete the video, and when you're ready, I'll look forward to you joining me in the next.
-
Welcome back and in this video I want to talk through the ways in which RDS offers high availability.
Historically there was one way, multi-AZ.
Over time RDS has been improved and now there's multi-AZ instance deployments and multi-AZ cluster deployments.
And these offer different benefits and trade-offs and so in this video I want to step through the architecture and functionality of both.
Now we do have a lot to cover so let's jump in and get started straight away.
Historically the only method of providing high availability which RDS had was multi-AZ.
So again this is now called multi-AZ instance deployment.
With this architecture RDS has a primary database instance containing any databases that you create and when you enable multi-AZ mode this primary instance is configured to replicate its data synchronously to a standby replica which is running in another availability zone.
And this means that this standby also has a copy of your databases.
Now in multi-AZ instance mode this replication is at the storage level.
This is actually less efficient than the cluster multi-AZ architecture but more on this later in this video.
The exact method that RDS uses to do this replication depends on the database engine that you pick.
MariaDB, MySQL, Oracle and PostgreSQL use Amazon failover technology whereas Microsoft's SQL instances use SQL server database mirroring or always on availability groups.
In any case, this is abstracted away; all you need to understand is that it's a synchronous replica.
Now architecturally how this works is that all accesses to the databases are via the database CNAME.
This is a DNS name which by default points at the primary database instance.
With multi-AZ instance architecture you always access the primary database instance.
There's no access to the standby even for things like reads.
Its job is to simply sit there until you have a failure scenario with the primary instance.
Other things though such as backups can occur from the standby so data is moved into S3 and then replicated across multiple availability zones in that region.
Now this places no extra load on the primary because it's occurring from the standby.
And remember this is important because all accesses so reads and writes from this multi-AZ architecture will occur to and from the primary instance.
Now in the event that anything happens to the primary instance this will be detected by RDS and a failover will occur.
This can be done manually if you're testing or if you need to perform maintenance but generally this will be an automatic process.
What happens in this scenario is the database CNAME changes instead of pointing at the primary it points at the standby which becomes the new primary.
Because this is a DNS change it generally takes between 60 to 120 seconds for this to occur so there can be brief outages.
This can be reduced by removing any DNS caching in your application for this specific DNS name.
If you do remove this caching, then the second RDS finishes the failover and updates the DNS name, your application will use that name, which now points at the new primary instance.
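A toy Python model (hostnames are hypothetical) shows why client-side DNS caching delays failover: a cached answer keeps pointing at the old primary even after RDS has updated the CNAME:

```python
# Hypothetical CNAME target table standing in for DNS.
dns = {"db.example.internal": "primary-az-a"}

class CachingClient:
    def __init__(self, name):
        self.name, self.cached = name, None

    def connect(self, use_cache: bool) -> str:
        if use_cache and self.cached:
            return self.cached          # stale answer, no lookup performed
        self.cached = dns[self.name]    # fresh DNS lookup
        return self.cached

client = CachingClient("db.example.internal")
print(client.connect(use_cache=True))   # primary-az-a

dns["db.example.internal"] = "standby-az-b"  # RDS failover updates the CNAME
print(client.connect(use_cache=True))   # primary-az-a  - cache is stale
print(client.connect(use_cache=False))  # standby-az-b  - fresh lookup finds the new primary
```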
So this is the architecture when you're using the older multi-AZ instance architecture.
I want to cover a few key points of this architecture before we look at how multi-AZ cluster architecture works.
So let's move on.
So just to summarize replication between primary and standby is synchronous and what this means is that data is written to the primary and then immediately replicated to the standby before being viewed as committed.
Now multi-AZ does not come within the free tier because of the extra cost for the standby replica that's required.
And multi-AZ with the instance architecture means that you only have one standby replica and that's important.
It's only one standby replica and this standby replica cannot be used for reads or writes.
Its job is to simply sit there and wait for failover events.
A failover event can take anywhere from 60 to 120 seconds to occur and multi-AZ mode can only be within the same region.
So different availability zones within the same AWS region.
Backups can be taken from the standby replica to improve performance and failovers will occur for various different reasons such as availability zone outage, the failure of the primary instance, manual failover, instance type change so when you change the type of the RDS instance and even when you're patching software.
So you can use failover to move any consumers of your database onto a different instance, patch the instance which has no consumers and then flip it back.
So it does offer some great features which can help you maintain application availability.
Now next I want to talk about multi-AZ using a cluster architecture.
And when you watch the Aurora video you might be confused between this architecture and Amazon Aurora.
So I'm going to stress the differences between multi-AZ cluster for RDS and Aurora in this video.
And this is to prepare you for when you watch the Aurora video.
It's really critical for you to understand the differences between multi-AZ cluster mode for RDS and Amazon Aurora.
So we start with a similar VPC architecture, only now, in addition to the single client device on the left, I'm adding two more.
In this mode RDS is capable of having one writer replicate to two reader instances.
And this is a key difference between this and Aurora.
With this mode of RDS multi-AZ you can have two readers only.
These are in different availability zones than the writer instance but there will only be two whereas with Aurora you can have more.
The difference between this mode of multi-AZ and the instance mode is that these readers are usable.
You can think of the writer like the primary instance within multi-AZ instance mode in that it can be used for writes and read operations.
The reader instances, unlike in multi-AZ instance mode, can be utilized while they're in this state.
They can be used only for read operations.
This will need application support since your application needs to understand that it can't use the same instance for reads and writes.
But it means that you can use this multi-AZ mode to scale your read workloads unlike multi-AZ instance mode.
Now in terms of replication between the writer and the readers: data is sent to the writer, and it's viewed as being committed when at least one of the readers confirms that it's been written.
It's resilient at that point across multiple availability zones within that region.
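That commit rule can be expressed as a tiny Python sketch (a simplified model, not the actual RDS internals):

```python
def committed(writer_stored: bool, reader_acks: list[bool]) -> bool:
    """In multi-AZ cluster mode, a write is treated as committed once the
    writer has stored it AND at least one reader confirms it has too."""
    return writer_stored and any(reader_acks)

print(committed(True, [True, False]))   # True  - one of two readers confirmed
print(committed(True, [False, False]))  # False - no reader has confirmed yet
print(committed(False, [True, True]))   # False - writer hasn't stored it
```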
Now the cluster that RDS creates to support this architecture is different in some ways and similar in others versus Aurora.
In RDS multi-AZ mode each instance still has its own local storage which as you'll see elsewhere in this course is different than Aurora.
Like Aurora though you access the cluster using a few endpoint types.
First is the cluster endpoint and you can think of this like the database C name in the previous multi-AZ architecture.
It points at the writer and can be used for reads and writes against the database or administration functions.
Then there's a reader endpoint and this points at any available reader within the cluster.
And in some cases this does include the writer instance.
Remember the writer can also be used for reads.
In general operation though this reader endpoint will be pointing at the dedicated reader instances and this is how reads within the cluster scale.
So applications can use the reader endpoint to balance their read operations across readers within the cluster.
Finally there are instance endpoints and each instance in the cluster gets one of these.
Generally it's not recommended to use them directly as it means any operations won't be able to tolerate the failure of an instance because they don't switch over to anything if there's an instance failure.
So you generally only use these for testing and fault finding.
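The three endpoint types can be modelled with a toy router in Python (the instance names are hypothetical, not real AWS DNS names): the cluster endpoint always resolves to the writer, the reader endpoint balances across readers, and instance endpoints pin you to one box:

```python
import itertools

instances = {"writer-1": "writer", "reader-1": "reader", "reader-2": "reader"}
readers = itertools.cycle(n for n, role in instances.items() if role == "reader")

def resolve(endpoint: str) -> str:
    if endpoint == "cluster":   # always the writer: reads, writes, admin
        return "writer-1"
    if endpoint == "reader":    # balances reads across available readers
        return next(readers)
    return endpoint             # instance endpoint: one fixed instance, no failover

print(resolve("cluster"))   # writer-1
print(resolve("reader"))    # reader-1
print(resolve("reader"))    # reader-2
print(resolve("reader-1"))  # reader-1 - pinned, won't tolerate that instance failing
```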
So this is the multi-AZ cluster architecture.
Before I finish up with this video I just want to cover a few key points about this specific type of multi-AZ implementation.
And don't worry you're going to get the chance to experience RDS practically in other videos in this part of the course.
So first RDS using multi-AZ in cluster mode means one writer and two reader DB instances in different availability zones.
So this gives you a higher level of availability versus instance mode, because you have this additional reader instance versus the single standby instance in multi-AZ instance mode.
In addition multi-AZ cluster mode runs on much faster hardware.
So this is Graviton architecture and uses local NVMe SSD storage.
So any writes are written first to local superfast storage and then flushed through to EBS.
So this gives you the benefit of the local superfast storage in addition to the availability and resilience benefits of EBS.
In addition when multi-AZ uses cluster mode then readers can be used to scale read operations against the database.
So if your application supports it, it means you can send read operations to the reader endpoint, which frees up capacity on the writer instance and allows your RDS implementation to scale to higher levels of performance versus any other mode of RDS.
And again you'll see when you're watching the Aurora video, Aurora as a database platform can scale even more.
And I'll detail exactly how in that separate video.
Now when using multi-AZ in cluster mode replication is done using transaction logs and this is much more efficient.
This also allows a faster failover.
In this mode failover rather than taking 60 to 120 seconds can occur in as little as 35 seconds plus any time required to apply the transaction logs to the reader instances.
But in any case this will occur much faster than the 60 to 120 seconds which is needed when using multi-AZ instance mode.
And again, just to confirm, when running in this mode writes are viewed as committed when they've been sent to the writer instance, stored, and replicated to at least one reader which has confirmed that it's written that data.
So as you can see these are completely different architectures and in my opinion multi-AZ in cluster mode adds some significant benefits over instance mode.
And you'll see how this functionality is extended again when I talk about Amazon Aurora.
But for now that's everything I wanted to cover in this video so thanks for watching.
Go ahead and complete the video and when you're ready I'll look forward to you joining me in the next.
-
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
Okay, so the instance is now in an available state.
Let's just close down this informational dialogue at the top.
And let's just minimize this menu on the left.
Let's maximize the amount of screen space that we have for this specific purpose.
So I just want us to go inside this database instance and explore together the information that we have available.
So I talked in the theory lesson how every RDS instance is given an endpoint name and an endpoint port.
So this is the information that we'll use to connect to this RDS instance.
Networking wise, this instance has been provisioned in US-EAST-1A.
It's in the Animals for Life VPC and it's using our A4L subnet group that we created at the start of this demo.
And that means that it's currently utilizing all three database subnets in the Animals for Life VPC.
But because we only deployed one instance, it's been chosen to use US-EAST-1A.
Now this is the VPC security group that we're going to need to configure.
So right click on this and open it in a new tab and move to that tab.
This is the security group which controls access to this RDS instance.
So let's expand this at the bottom.
So currently it has my IP address being the only source allowed to connect into this RDS instance.
So the only inbound rule on the security group protecting this RDS instance is allowing my IP address.
So we're going to click on Edit and then click on Add Rule.
And we're going to add a rule which allows our other instances to connect to this RDS instance.
So first, click in the type drop-down, then type MySQL to get the same option as the line above, and click to select it.
Next go ahead and type instance into the source box and then select the migrate to RDS-instance security group.
Now this is the security group that's used by any instances deployed by our one click deployment.
And this allows those instances to connect to our RDS instance and that's what we want.
So go ahead and select that and then click on Save Rules.
And this means now that our WordPress instance can communicate with RDS.
So now let's move back to the RDS tab and then make sure we're inside the A4L WordPress database instance.
So that's the connectivity and the security tab.
We also have the monitoring tab and it's here where you can see various CloudWatch provided metrics about the database instance.
You also have logs and events related to this instance.
So if we go and have a look at recent events we can see all of the events such as when the database instance was created, when its first backup was created.
And you can explore those because they might be different in your environment.
You can click on the Configuration tab and see the current configuration of the RDS instance.
The Maintenance and Backups tab is where you can configure the maintenance and backup processes and then of course you can tag the RDS instance.
Now in other lessons in this section of the course and depending on what course you're taking I will be talking about many of these options, what you can modify and which actions you can perform on RDS instances.
But for now we're just going to move on with this demo.
So the next step is that we need to migrate our existing data into this RDS instance.
So what we're going to do is to click on the Connectivity and Security tab and we're going to leave this open.
We're going to need this endpoint name and port very shortly.
You should still have a tab open to the EC2 console.
If you don't you can reach that by going on Services and then opening EC2 in a new tab.
But I want you to select the A4L-WordPress instance and then right click and connect to it using Instance Connect.
So go ahead and do that.
Once you've done that we're going to start referring to the lesson commands document.
So make sure you've got that open.
We're going to use this command to take a backup of the existing MariaDB database.
So we need to replace a placeholder.
What we need to do is delete this and replace it with the private IP address of the MariaDB EC2 instance.
So go back to the EC2 console, select the DB-WordPress instance and copy the private IP version 4 address into your clipboard.
And then let's move back to the WordPress instance and paste that in.
Go ahead and press Enter and it will prompt you for the password.
And the password is the same Animals for Life strong password that we've been using everywhere.
Copy that into your clipboard.
So this is the password for the A4L WordPress user on the MariaDB EC2 instance.
So paste that in and press Enter, and then run ls -la to confirm that we now have this A4L WordPress.SQL database backup file.
And we do, so that's good.
So as we did in the previous demo lesson, we're going to take this backup file and we're going to import it into the new destination database, which is going to be the RDS instance.
To do that, we'll use this command, but we're going to need to replace the placeholder hostname with the CNAME of the RDS instance.
So go ahead and delete this placeholder, then go back to the RDS console and I'll want you to copy the endpoint name into your clipboard.
So select it, right click and then copy.
We won't need the port number because this is the standard MySQL port and if you don't specify it, most applications will assume this default.
So just make sure that you have the endpoint DNS name or endpoint CNAME in your clipboard.
And then back on the WordPress EC2 instance, go ahead and paste this database name into this command and press Enter.
And again, you'll be asked for the password and that's the same Animals for Life strong password.
So copy that into your clipboard, paste that in and press Enter.
And that's imported this A4LWordPress.SQL file into the RDS instance.
So now we need to follow the same process and change WordPress so that it points at the RDS instance.
And we do that by moving to where the WordPress configuration file is.
So type cd /var/www/html and press Enter.
And then sudo, so we have admin privileges, then nano, which is a text editor, then wp-config.php, and press Enter.
Then we need to scroll down and we're looking for where it says DB host and currently it has a host name here.
Now if you go back to the EC2 console and you look at the A4L-DB-WordPress instance, you'll see that its private IPv4 DNS name is what's listed inside this configuration item.
So it's currently pointing at this dedicated database instance.
What we need to do is replace that and we're going to replace it with the RDS database DNS name or the CNAME of this RDS instance.
So copy that into your clipboard and then go ahead and delete this private DNS name for the MariaDB EC2 instance and then paste in the RDS endpoint name, also known as the RDS CNAME.
Once you've done that, control O and Enter to save and control X to exit.
And now our WordPress instance is pointing at the RDS instance for its database.
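The edit we just made by hand in nano can also be scripted. Here's a hedged Python sketch of the same change; the old host and the RDS endpoint below are hypothetical example values, not the ones from your environment:

```python
import re

# The DB_HOST line as it might appear in wp-config.php (example value),
# and a hypothetical RDS endpoint CNAME to swap in.
old_line = "define( 'DB_HOST', 'ip-10-16-1-5.ec2.internal' );"
new_host = "a4lwordpress.abc123.us-east-1.rds.amazonaws.com"

updated = re.sub(
    r"(define\(\s*'DB_HOST',\s*')[^']*('\s*\)\s*;)",
    lambda m: m.group(1) + new_host + m.group(2),  # keep the syntax, swap the host
    old_line,
)
print(updated)  # define( 'DB_HOST', 'a4lwordpress.abc123.us-east-1.rds.amazonaws.com' );
```

In practice you'd read wp-config.php, apply the substitution, and write the file back; the manual nano edit does exactly the same thing.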
Now we can verify that by checking WordPress.
Move back to instances, select the WordPress instance and copy the public IPv4 address into your clipboard.
Don't use the open address link; open the IP address in a new tab.
Go ahead and just click on the best cats ever to verify the functionality and it does look as though it's working.
And to verify that, if we go back to the EC2 console, select the A4L-DB-WordPress instance and right click and then stop that instance.
Now the original database that was providing database services to WordPress is going to move into a stopped state.
And if our WordPress blog continues functioning, we know that it's using the RDS instance.
So let's keep refreshing and wait for this to change into a stopped state.
There we go.
It's stopped.
And if we go back to our WordPress page and refresh, it still loads.
And so we know that it's now using RDS for its database services.
So at this point, that's everything that I wanted you to do in this demo lesson.
You've stepped through the process of provisioning an RDS instance.
So you've created a subnet group, provisioned the instance itself, explored the functionality of the instance, including how to provide access to it by selecting a security group.
And then editing that security group to allow access.
You've performed a database migration and you've explored how the RDS instance is presented in the console.
So that's everything that you need to do within this demo lesson.
And don't worry, we're going to be exploring much more of the advanced functionality of RDS as we move through this section of the course.
For now, though, I want us to clear up the infrastructure that we've created as part of this demo lesson.
Now, because we've provisioned RDS manually outside of CloudFormation, unfortunately, there is a little bit more manual work involved in the cleanup.
So I want you to go to the RDS console, move to databases, select this database, click on actions, and then select delete.
Now it will prompt you to create a final snapshot and we're not going to do that.
We're not going to retain automated backups, and so you'll need to acknowledge that upon instance deletion, automated backups, including any system snapshots and point-in-time recoveries, will no longer be available.
And don't worry, I'll be talking about backups and recovery in another lesson in this section of the course.
For now, just acknowledge that and then type delete me into this box and confirm the deletion.
Now this deletion is going to take a few minutes.
It's not an immediate process.
It will start in a deleting state and we need to wait for this process to be completed before we continue the cleanup.
So go ahead and pause this video and wait for this instance to fully delete before continuing.
Now that the instance has been deleted, it vanishes from this list.
Next, we need to delete the subnet group that we created earlier.
So click on subnet groups, select the subnet group and then delete it.
You'll need to confirm that deletion.
Once done, it too should vanish from that list.
Next, go to the tab you've got open to the VPC console, scroll down and select security groups.
Now look through this list and locate the security group that you created as part of provisioning the RDS instance.
It should be called a4LVPC-RDS-SG.
Select that, click on actions and then delete security group and you'll need to confirm that process as well.
Once that's deleted, the final step is to go to the cloud formation console and then you'll need to delete the cloud formation stack that was created using the one-click deployment at the start of the demo.
It should be called migrate to RDS.
Select it, click on delete and confirm that deletion.
And once deleted, the account will be returned into the same state as it was at the start of the demo lesson.
So all of the infrastructure that we've used will be removed from the account and the account will be in the same state as at the start of the demo.
Now I hope you've enjoyed this demo. I know we're repeating the same WordPress installation and blog post creation over and over again, but I want you to get used to the different parts of this process.
You need to know why not to use a database on EC2.
You need to know why not to perform a lot of these processes manually.
From this point onward in the course, we're going to be using RDS to evolve our WordPress design into something that is truly elastic.
And so all of these processes, the things I'm having you repeat are really useful to aid in your understanding of all of these different components.
So from this point onward, we're going to be automating the creation of RDS and focusing on the specific pieces of functionality that you need to understand.
But at this point, that's everything that you need to do in this demo.
So go ahead, complete the video and when you're ready, I look forward to you joining me in the next.
-
Welcome back and in this demo lesson you're going to get some experience of how to provision an RDS instance and how to migrate a database from an existing self-managed MariaDB database instance through to RDS.
So over the next few demo lessons in this section of the course, you're going to be evolving your database architecture.
We're going to start with a single database instance, then we're going to add multi-AZ capability as well as talking about backups and restores.
But in this demo lesson specifically, we're going to focus on provisioning an RDS instance and migrating data into it.
Now in order to get started with this demo lesson, as always make sure that you're logged into the general AWS account, so the management account of the organization and you need to have the Northern Virginia region selected.
Now attached to this lesson is a one-click deployment link that you'll need to use to provision this demo lesson's infrastructure.
So go ahead and click on that link now.
That's going to move you to a quick create stack screen.
The stack name should be pre-populated with migrate to RDS.
Scrolling down all of the parameter values will be pre-populated.
All you need to do is to click on the capabilities checkbox and then create stack.
There's also a lesson commands document linked to this lesson and I'd suggest you go ahead and open that in a new tab because you'll be referencing it as you move through this demo lesson.
Now you'll notice that this will look similar to the previous demo lesson's lesson commands document, but it has one small difference.
The initial command, which takes the backup of the source database, connects to a separate EC2 instance rather than taking the backup locally, because that source database is stored on a separate MariaDB database running on a separate EC2 instance.
Otherwise, most of these commands are similar to the ones you used in the previous demo lesson.
Now you're going to need to wait for this stack to move into a create complete state before you continue the demo.
So go ahead and pause the video, wait for your stack to change to create complete and then you're good to continue.
Okay, so that cloud formation stack has now moved into a create complete state and it's created a familiar set of infrastructure.
Let's go ahead and click on the services drop down and then move to the EC2 console and just take a look.
So if we click on instances, you'll see that we have the same two instances as you saw in the previous demo lesson.
So we have A4L-WordPress, which is running the Apache web server and the WordPress application.
And then we have A4L-DB-WordPress and this is running the separate MariaDB database instance.
So what we need to do in order to perform this migration is first create the WordPress blog itself and the sample blog post.
And this is the same thing that we did in the previous demo.
So we should be able to go through this pretty quickly.
So go ahead and select the A4L-WordPress instance and copy its public IPv4 address into your clipboard, and then open that in a new tab.
And again, make sure not to use the open address link, because this uses HTTPS.
So copy the public IPv4 address and then open it in a new tab.
Again, we're going to call the site the best cats.
We're going to use admin for the username.
And then for the password, let's go back to the CloudFormation tab.
Make sure you've got the migrate to RDS stack selected and then click on parameters.
We're going to use the same database password.
So copy that into your clipboard and replace the automatically generated one with the Animals for Life complex password.
And then enter test@test.com into the email box and click on install WordPress.
Once installed, click on login.
You'll need to use the admin username and the same password.
Click on login.
Then we're going to go to posts.
We're going to select the existing Hello World post.
Select trash this time.
Then click on add new.
Close down this dialog for title.
We're going to use the best cats ever.
Click on the plus.
Select gallery.
At this point, go ahead and click the link that's attached to this lesson to download the blog images.
Once downloaded, extract that zip file and you'll get four images.
Once you've got those images ready, click on upload, locate those images, select them and click on open.
Wait for them to load in.
Select publish and publish again.
And that saved the images onto the application instance and added the data for this post onto the separate MariaDB database.
So now we have this simple working blog.
Let's go ahead and look at how we can provision an RDS instance and how we can migrate the data into that RDS instance.
So move back to the AWS console.
Click on the services drop down and type RDS into the search box and open that in a new tab.
Now, as I've mentioned in the theory parts of this section, RDS is a managed database server as a service product from AWS.
It allows you to create database instances and those instances can contain databases that your applications can make use of.
Now to provision an RDS instance, the first thing that we need to do is to create a subnet group.
Now a subnet group is how we inform RDS which subnets within a VPC we want to use for our database instance.
So first we need to create a subnet group.
So select subnet groups on the menu on the left and then create a DB subnet group.
Now we're going to use a4lsngroup, so Animals for Life subnet group, for both the name and for the description.
And then select the VPC drop down and we're going to select the a4l-vpc1 VPC.
So this is the animals for life VPC which has been created by the one click deployment that you used at the start of this demo.
Now once we've selected a name and a description and a VPC for this subnet group, then what we need to do is select the subnets that this database will be going into.
So we're going to select the database subnets in US East 1A, US East 1B and US East 1C.
So click on the availability zone drop down and pick those three availability zones.
So 1A, 1B and 1C.
Once we've selected the availability zones that this subnet group is going to use, next we pick the subnets.
So click on the drop down.
Now we want to pick the database subnets within the animals for life VPC and all we can see here are the IP address ranges.
So to help us with this click on the services drop down, type VPC and then open that in another new tab.
Once that loads, go ahead and click on subnets, sort the subnets by name and then locate sn-db-A, sn-db-B and sn-db-C.
And just move your cursor across to the right hand side and note what the IP address ranges are for those different database subnets.
So 16, 80 and 144.
Go back to the RDS console, click on the subnets drop down and we need to pick each of those three subnets.
So 16, 80 and 144.
So these represent the database subnets in availability zone 1A, 1B and 1C.
And then once we've configured all of that information, we can go ahead and click on create to create this subnet group.
So this subnet group is something that we use when we're provisioning an RDS instance.
And as I mentioned moments ago, it's how RDS determines which subnets to place database instances into.
Now when we're only using a single database instance, then that decision is fairly easy.
But RDS deployments can scale up to use multiple replicas in multiple different availability zones.
You can have multi-AZ instances, read replicas.
Aurora has a cluster architecture which we'll talk about later in this section.
And so subnet groups are essential to inform RDS which subnets to place things into.
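For reference, the same subnet group can be created by script. This is a hedged sketch: the subnet IDs are placeholders (real IDs come from the VPC console), and the command is assembled and printed rather than executed, since it needs AWS credentials:

```shell
# Assemble the equivalent AWS CLI call for the a4lsngroup subnet group.
# subnet-aaa/bbb/ccc are placeholder IDs for the three database subnets.
CMD="aws rds create-db-subnet-group --db-subnet-group-name a4lsngroup --db-subnet-group-description a4lsngroup --subnet-ids subnet-aaa subnet-bbb subnet-ccc"
echo "$CMD"
```

With real subnet IDs substituted in, running that command produces the same subnet group as the console steps above.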
So now that we've configured that subnet group, let's go ahead and provision our RDS database instance.
So to do that, click on databases and then we're going to create a database.
So click on create database.
Now when you're creating a database, you have the option of using standard create where you have visibility of all of the different options and then easy create which applies some best practice configurations.
Now I want you to get the maximum experience possible, so we're going to use standard create.
Now when you're creating an RDS database instance, you have the ability to pick from many different engines.
So some of these are commercial like Oracle or Microsoft SQL Server.
And with some of these, you have the option of either paying for a license included with RDS or you can bring your own license.
For other database engines, there isn't a commercial price to pay for their usage and so they're much cheaper to use.
But you should select the engine type which is compatible with your application.
Now we're going to be talking about Amazon Aurora in dedicated lessons later in this section of the course.
Amazon Aurora is an AWS designed database product which has compatibility with MySQL and PostgreSQL.
For this demo lesson, we're going to use MySQL.
So go ahead and select MySQL and it's going to be using MySQL Community Edition.
So now let's just scroll down and step through some of the other options that we get to select when provisioning an RDS instance.
Now for all of these database engines, you have the ability to pick different versions of that engine.
And this is fairly critical because there are different major and minor versions that you can select from.
And different versions of these have different limitations.
So for example, we're going to be talking about snapshots later in this section.
And if you want to take a snapshot of an RDS database and then import that into an Aurora cluster, you need to pick a compatible version.
And then Aurora Serverless which we'll be talking about later on in this section has even more restrictions.
Now to keep things simple, I want you to ignore which version I pick in this video; instead, look in this lesson's description and pick the version I indicate there, because I'll keep that updated if AWS makes any changes.
Now you can choose to use a template.
These templates give you access to only the options which are relevant for the type of deployment that you're trying to use.
So in production, you would pick the production template.
If you have any smaller or less critical dev or test workloads, then you could pick this template.
If you want to ensure that you can only select free tier options, then you should pick this template.
And that's what we're going to do in this demo because we want this demo to fall under the free tier.
So click on the free tier template.
I'll be talking about availability and durability later in this section.
Because we've selected free tier only, we don't have the ability to create a multi-AZ RDS deployment.
And now we need to provide some configuration information about the database instance specifically.
So the first thing that we need to do is to provide a database instance identifier.
So this is the way that you can identify one particular instance from any other instances in the AWS account in the current region.
So this needs to be unique.
So we're going to use a4lwordpress for this database instance.
Then we need to pick a username which will be given admin privileges on this database instance.
And we're going to replace admin with a4lwordpress.
So we're going to use a4lwordpress for both the database identifier and the admin user of this database.
Now for the password for this admin user, we're going to move back to the CloudFormation console and we're going to use the same Animals for Life complex password.
So copy that into your clipboard and paste it in for the password and the confirm password box.
And this just keeps things consistent between the self-managed database and the RDS database.
Scroll down further still and it's here where you can select the database instance class to use.
Now because we've selected free tier only, we're limited as to what database size and type we can pick.
If we'd selected production or dev/test from the templates above, we would have access to a much wider range of database instance classes: standard, memory optimized and burstable.
But because we've selected the free tier template, we're limited as to what we can select.
Now this might change depending on when you're watching this demonstration, but at the point I'm recording this video, it's db.t3.micro.
So don't be concerned if you see something different in this box.
Just make sure that you select the type of instance which falls under the free tier.
Then continue scrolling down and we need to pick the size of storage and the type of storage to use for this RDS instance.
Now whether you need to select this is dependent on what engine type you pick.
If you select Aurora, which we'll be talking about later on in this section, then you don't need to pre-allocate storage.
If you're using the MySQL version of RDS, then you do need to set a type of storage and a size of storage.
Now we're going to use the minimum, which is 20 GiB, because our requirements for this database are relatively small.
And if we wanted to, if this was production, we could set storage autoscaling.
And this allows RDS to automatically increase the storage when a particular threshold is met.
But again, because this is a demo and it's only using a very small blog, we don't need storage autoscaling.
So go ahead and uncheck that option.
Now we need to select a VPC for this RDS instance to go into.
So click in the drop down and select the Animals for Life VPC.
So that's A4L-VPC1.
And then we need to pick a subnet group.
Now this is the thing that we've just created.
We only have one in this account, so there's nothing else to select.
But this is how we can advise RDS on which subnets to use inside the VPC.
Scroll down further still, and we can specify whether we want this database to be publicly accessible.
So this is whether we want instances and devices outside the VPC to be able to connect to this database.
This obviously comes with some security trade-offs.
And because we don't need that in this demonstration, because the only thing that we want to connect to this RDS instance is our WordPress instance, which is in the same VPC, then we can select Not to Use Public Access.
So make sure the No option is selected.
Now the way that you control access to RDS is you allocate a VPC security group to that instance.
So we could either choose an existing security group or we could create a new one.
So it's this security group which surrounds the network interfaces of the database and controls access to what can go into that database.
So we want to create a new VPC security group, so make sure that option is selected.
We're going to call the security group A4LVPC-RDS-SG.
And we need to remember to update this so that our WordPress instance can communicate with our RDS instance.
And we'll do that in the next step.
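That update amounts to adding an ingress rule for the MySQL/MariaDB port (TCP 3306) to the new security group, allowing traffic from the WordPress instance's security group only. A hedged sketch with placeholder group IDs, printed rather than executed since it needs AWS credentials:

```shell
RDS_SG="sg-0rds0placeholder"   # placeholder ID for A4LVPC-RDS-SG
APP_SG="sg-0app0placeholder"   # placeholder ID for the WordPress instance's SG
# Allow inbound TCP 3306 to the RDS security group from the application SG only
CMD="aws ec2 authorize-security-group-ingress --group-id ${RDS_SG} --protocol tcp --port 3306 --source-group ${APP_SG}"
echo "$CMD"
```

Referencing the application's security group as the source, rather than an IP range, means any instance launched into that group can reach the database, which suits the elastic architecture this course is building towards.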
If we wanted to pick a specific availability zone for this instance to go into, then we could select one here or we can leave it up to RDS to pick the most suitable.
So we can select No Preference.
Continue scrolling down.
We won't change the Database Authentication option because we want to allow password authentication.
Continue scrolling down and we're going to expand Additional Configuration.
By default, an RDS instance is created with no database on that instance.
In this case, because we're migrating an existing WordPress database into RDS, we're going to go ahead and create an initial database.
And to keep things easy and consistent, we're going to use the same name, so a4lwordpress.
Now you can enable automatic backups for RDS instances.
And I'll be talking about these in a separate theory lesson.
If you do select automatic backups, then you can also pick a backup retention period as well as a backup window.
So we've got Advanced Monitoring, various log exports.
We don't need to use any of those.
You can also set the maintenance window for an RDS instance, so when maintenance will be performed.
You can enable Deletion Protection if you want, if this is a production database; we don't need to do that here.
What we're going to do is scroll all the way down to the bottom and then click on Create Database.
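For reference, the console choices made above map onto a single AWS CLI call. This is a hedged sketch: the security group ID and password are placeholders, storage autoscaling is left off (no --max-allocated-storage flag), and the command is printed rather than executed:

```shell
# Placeholder values: REPLACE_WITH_PASSWORD and sg-0rds0placeholder.
CMD="aws rds create-db-instance --db-instance-identifier a4lwordpress --engine mysql --db-instance-class db.t3.micro --allocated-storage 20 --master-username a4lwordpress --master-user-password REPLACE_WITH_PASSWORD --db-name a4lwordpress --db-subnet-group-name a4lsngroup --vpc-security-group-ids sg-0rds0placeholder --no-publicly-accessible --no-multi-az"
echo "$CMD"
```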
Now this process can take some time.
I've seen it take anywhere from five to 45 minutes.
And we're going to need this to be finished before we move on to the next step.
So this seems like a great time to end this video.
It gives you the opportunity to grab a coffee or stretch your legs.
Wait for this database creation to finish.
And then when you're ready, I'll look forward to you joining me in part two of this video.
-
Welcome back and in this video which is the first of this series I'm going to step through the architecture of the relational database service known as RDS.
Now this video will focus on the architecture of the product with upcoming videos going into specific features in more depth.
Now we do have a lot to cover so let's jump in and get started.
Now I've heard many people refer to RDS as a database as a service or DBaaS product.
Now details are important and you need to understand why this is not the case.
A database as a service product is where you pay money and in return you get a database.
This isn't what RDS does.
With RDS you pay for and receive a database server so it would be more accurate to call it a database server as a service product.
Now this matters because it means that on this database server or instance which RDS provides you can have multiple databases.
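To make that distinction concrete: the SQL below would create two separate databases on one RDS instance when run against its endpoint with a MySQL client. The database names are hypothetical, and the statements are written to a file here rather than executed, since they need a live instance:

```shell
cat > multiple-dbs.sql <<'SQL'
-- Two independent databases hosted on the same database server (RDS instance)
CREATE DATABASE catagram;
CREATE DATABASE a4lwordpress;
SHOW DATABASES;
SQL
cat multiple-dbs.sql
```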
RDS provides a managed version of a database server that you might have on-premises only with RDS you don't have to manage the hardware, the operating system or the installation as well as much of the maintenance of the DB engine and RDS of course runs within AWS.
Now with RDS you have a range of database engines to use, including MySQL, MariaDB and PostgreSQL, and then commercial databases such as Oracle and Microsoft SQL Server.
Some of these are open source and some are commercial and so there will be licensing implications and if appropriate for the exam that you're working towards there will be a separate video on this topic.
Now there's one specific term that I want you to disassociate from RDS and that's Amazon Aurora.
You might see Amazon Aurora discussed commonly along with RDS but this is actually a different product.
Amazon Aurora is a custom database engine and product created by AWS which has compatibility with some of the above engines but it was designed entirely by AWS.
Many of the features I'll step through while talking about RDS are different for Aurora and most of these are improvements so in your mind separate Aurora from RDS.
So in summary RDS is a managed database server as a service product.
It provides you with a database instance so a database server which is largely managed by AWS.
Now you don't have access to the operating system or SSH access.
Now I have a little asterisk here because there is a variant of RDS called RDS custom where you do have some more low level access but I'll be covering that in a different video if required.
In general when you think about RDS think no SSH access and no operating system access.
Now what I think might help you at this point is to look at a typical RDS architecture visually and then over the remaining videos in this series I'll go into more depth on certain elements of the product.
So RDS is a service which runs within a VPC so it's not a public service like S3 or DynamoDB.
It needs to operate in subnets within a VPC in a specific AWS region. For this example let's use US East 1, and to illustrate some cross-region parts of this architecture our second region will be AP Southeast 2. Then within US East 1 we're going to have a VPC, and let's use three availability zones: A, B and C.
Now the first component of RDS which I want to introduce is an RDS subnet group.
This is something that you create and you can think of this as a list of subnets which RDS can use for a given database instance or instances.
So in this case let's say that we create one which uses all three of the availability zones.
In reality this means adding any subnets in those three availability zones which you want RDS to use and in this example I'm going to actually create another one.
We're going to have two database subnet groups, and you'll see why in a second.
In the top database subnet group let's say I add two public subnets and in the bottom database subnet group let's say three private subnets.
So when launching an RDS instance whether you pick to have it highly available or not and I'll talk about how this works in an upcoming video you need to pick a DB subnet group to use.
So let's say that I picked the bottom database subnet group and launched an RDS instance and I chose to pick one with high availability.
So it would pick one subnet for the primary instance and another for the standby.
It picks at random unless you indicate a specific preference but it will put the primary and standby within different availability zones.
Now because these database instances are within private subnets, it means they would be accessible from inside the VPC or from any connected networks, such as on-premises networks connected using VPNs or Direct Connect, or any other VPCs peered with this one, and I'll cover all of those topics elsewhere in the course if I haven't already done so.
Now I could also launch another set of RDS instances using the top database subnet group and the same process would be followed assuming that I picked to use multi AZ.
RDS would pick two different subnets in two different availability zones to use.
Now because these are public subnets we could also if we really wanted to elect to make these instances accessible from the public internet by giving them public addressing and this is something which is really frowned upon from a security perspective but it's something that you need to know is an option when deploying RDS instances into public subnets.
Now you can use a single DB subnet group for multiple instances but then you're limited to using the same defined subnets.
If you want to split databases between different sets of subnets as with this example then you need multiple DB subnet groups and generally as a best practice I like to have one DB subnet group for one RDS deployment.
I find it gives me the best overall flexibility.
Okay so another few important aspects of RDS which I want to cover.
First RDS instances can have multiple databases on them.
Second, every RDS instance has its own dedicated storage provided by EBS, so if you have a multi-AZ pair, the primary and standby each have their own dedicated storage.
Now this is different than how Amazon Aurora handles storage so try to remember this architecture for RDS each instance has its own dedicated EBS provided storage.
Now if you choose to use multi-AZ, as in this architecture, then the primary instance replicates to the standby using synchronous replication.
Now this means that the data is replicated to the standby as soon as it's received by the primary.
It means the standby will have the same set of data as the primary so the same databases and the same data within those databases.
Now you can also decide to have read replicas.
I'll be covering what these are and how they work in another dedicated video but in summary read replicas use asynchronous replication and they can be in the same region but also other AWS regions.
These can be used to scale read load or to add layers of resilience if you ever need to recover in a different AWS region.
Now lastly we also have backups of RDS.
There is a dedicated video covering backups later on in this section of the course but just know that backups occur to S3.
It's to an AWS managed S3 bucket so you don't see the bucket within your account but it does mean that data is replicated across multiple availability zones in that region.
So if you have an AZ failure backups will ensure that your data is safe.
If you use multi AZ mode then backups occur from the standby instance which means no negative performance impact.
Now this is the basic product architecture.
I'll be expanding on all of these key areas in dedicated videos as well as giving you the chance to get practical experience via some demos and mini projects if appropriate.
For now let's cover one final thing before we finish this video and that's the cost architecture of RDS.
So before I finish the video I want to talk about RDS costs. Because it's a database server as a service product, you're not really billed based on your usage.
Instead, like EC2, which RDS is loosely based on, you're billed for resource allocation, and there are a few different components to RDS's cost architecture.
First you've got the instance size and type.
Logically, the bigger and more feature rich the instance, the greater the cost, and this follows a similar model to how EC2 is billed.
The fee that you see is an hourly rate but it's billed per second.
Next we have the choice of whether multi AZ is used or not because multi AZ means more than one instance there's going to be additional cost.
Now how much more cost depends on the multi AZ architecture which I'll be covering in detail in another video.
Next is a per gig monthly fee for storage which means the more storage you use the higher the cost and certain types of storage such as provisioned IOPS cost more and again this is aligned to how EBS works because the storage is based on EBS.
Next is the data transfer costs and this is a cost per gig of data transfer in and out of your DB instance from or to the internet and other AWS regions.
Next we have backups and snapshots: you get an amount of snapshot storage equal to the storage you pay for on the database instance for free.
So if you have 2 TB of storage then that means 2 TB of snapshots for free.
Beyond that there is a cost, and this cost is per GB-month of storage, so the more data is stored, the more it costs, and the longer it's stored, the more it costs.
One TB for one month is the same cost as 500 GB for two months; it's a per-GB-month cost. And then finally we have any extra costs based on using commercial DB engine types, and again I'll be covering this, if appropriate, in a dedicated video elsewhere in the course.
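The GB-month point can be checked with quick arithmetic. The rate below is a made-up figure purely for illustration; check current RDS pricing for real numbers:

```shell
RATE_CENTS=10                           # hypothetical rate: 10 cents per GB-month
COST_A=$(( 1000 * 1 * RATE_CENTS ))     # 1 TB (~1000 GB) stored for 1 month
COST_B=$(( 500 * 2 * RATE_CENTS ))      # 500 GB stored for 2 months
echo "${COST_A} ${COST_B}"              # identical: same number of GB-months
```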
Okay so at this point that is everything I wanted to cover in this video as I mentioned at the start this is just an introduction to RDS architecture.
We're going to be going into more detail on specific key points in upcoming videos but for now that's everything I wanted to cover.
So go ahead and complete the video and when you're ready I'll look forward to you joining me in the next.
-
Welcome to this demo lesson where you're going to migrate from the monolithic architecture on the left of your screen towards a tiered architecture on the right.
Essentially you're going to split the WordPress application architecture: you're going to move the database from being on the same server as the application to being on a different server, and this will form step one of moving this architecture from being a monolith through to being a fully elastic architecture.
Now this is the first stage of many but it is a necessary one.
Now in order to perform this demonstration you're going to need some infrastructure.
Before we apply the infrastructure just make sure that you're logged in to the general AWS account, so the management account of the organization and as always you need to have the Northern Virginia region selected.
Now once you've got both of those set there's a one-click deployment link attached to this lesson so go ahead and click on that link.
What this is going to do is deploy the Animals for Life base infrastructure, it's going to deploy the monolithic WordPress application instance and it's also going to deploy a separate MariaDB database instance that you're going to use as part of the migration.
Now everything's set and the stack name should be set to a suitable default; all you need to do is to scroll all the way down to the bottom, check the capabilities box and click on create stack.
Now also attached to this lesson is a lesson commands document which contains all the commands you'll be using throughout this demo.
So go ahead and open that in a new tab, you'll be referencing it constantly as you're making the adjustments to the WordPress architecture.
Now we're going to need this CloudFormation stack to be fully complete before we can continue so go ahead and pause the video and resume once the CloudFormation stack moves into a create complete state.
So now the stack's moved into a create complete state, we're good to continue.
Now this has created the base Animals for Life infrastructure which includes a number of EC2 instances so let's take a look at those, let's click on services and then locate and open EC2 in a brand new tab.
Once you're at the EC2 console if you do see any dialogues around user interface updates then just go ahead and close those down and then click on instances running.
Once you're here you'll see two EC2 instances, one will be called A4L-WordPress and this is the monolith so this is the EC2 instance which contains the WordPress application and the built-in database.
So this is the WordPress installation that we're going to migrate from. And then this instance, A4L-DB-WordPress, contains a standalone MariaDB installation. So we're going to migrate the database for WordPress from this instance onto the DB instance, and this will create a tiered application architecture rather than the monolith which we currently have.
So step number one is to perform the WordPress installation. To do that, I want you to go ahead and copy the public IPv4 address of the WordPress EC2 instance into your clipboard and then open it in a new tab.
Now be careful not to use the 'open address' link; that will use HTTPS, which we're not currently using. So copy the IP address into your clipboard and open that in a new tab.
Now when you do that, you'll see a familiar WordPress installation dialog, and we're going to create a simple blog. For the site title go ahead and call it 'the best cats'; for the username pick admin; and then for the password, instead of using the randomly selected one, go ahead and use the same complex password that we've used for the CloudFormation template, so this is 'animals for life' but with number substitution.
So if you go back to your CloudFormation tab and go to the parameters tab, this is the same password that we use for the DB password and the DB root password.
Now of course in production this is incredibly bad practice; we're just doing it in this demo to keep things simple and avoid any mistakes.
So back to the WordPress installation screen: site title 'the best cats', username admin, and this for the password. Then go ahead and type a fake email. I don't want to use my real email for this, so I'm going to type test@test.com; you can do the same. Then go ahead and click on install WordPress. This installs the WordPress application, and it's using the MariaDB server that's on the same EC2 instance, so part of the same monolith.
So we're going to log in. We'll need to type admin, then use the animals for life strong password, and click on login. Once we're logged in, we're going to create a simple blog post. Click on posts, select the existing Hello World post, and select trash this time. Then click on add new, and we're going to add a new post. We can close down this introduction dialogue, and for the title go ahead and type 'the best cats ever' and then some exclamation points. Next click on this plus sign and we're going to add a gallery.

Now at this point you're going to need some images to upload to this blog post. I've attached an images link to this lesson; if you go ahead and click that link it will download a zip file, and if you extract that zip file it's going to contain four image files, all four of my cats. So once you've downloaded and extracted that file, go ahead and click on upload, locate those images (there should be four), select them all and click on open. That will add these images to this blog post, and once you've added them all you can go ahead and click on publish and then publish again, and this will publish this blog post. So it will add data to the database that's running on the monolithic application instance, as well as store these images on the local instance file system.

Now I'm making a point of mentioning that these images are stored on the file system because, as you'll see later in the course, this is one of the things that we need to migrate when we're moving to a fully elastic architecture. We can't have images stored on the instances themselves; we need to move that to a shared file system. For now, though, we're focusing on the database.

So at this point we have the working blog. The images for this blog are stored on the local file system of A4L-WordPress, and the data for that blog post is stored on the MariaDB database that's also running on this EC2 instance. So the next step of this demo lesson is that you're going to migrate the data from A4L-WordPress onto A4L-DB-WordPress, an isolated MariaDB instance which is dedicated to the database.

To do this migration, select A4L-WordPress, right click, and connect to this instance. We'll be using EC2 Instance Connect, so just make sure that the username is set to ec2-user and then click on connect. Now this is where you'll be using the commands stored within the lesson commands document, so make sure you have it ready to reference, because it's far easier to copy and paste these commands and adjust any placeholders than to type them out manually, which is prone to errors.

The first step is to get the data from the database that's running on this monolithic application instance and store it in a file on disk, so we need to do a backup of the database into a .sql file. To do that we use a utility called mysqldump. It uses -u to specify the user we're going to connect to the database as, then -p to specify that we want to provide a password. We could either provide the password on the command line or have it prompt us: if we supply the password immediately after -p with no space, it's accepted as part of the command; if we leave a space and specify nothing, it will ask us for the password. The next thing we specify is the name of the database we want to dump, in this case a4lwordpress, the database for the Animals for Life WordPress instance. Now if we ran this command on its own it would send the dump, so all of the data in the database, to standard output, which in this case is our screen. We don't want that; we want to store the results in a file called a4lwordpress.sql, and so we use the > symbol, which redirects the output of the command into that file.

So let's run this command, and it's going to prompt us for the password for this database. To get that, go back to CloudFormation, make sure parameters are selected, and it's the DB password that we need. Copy that into your clipboard, go back to the instance, paste it in and press enter, and that will output all the data in the database to this file. You won't see any indication of success or failure, but if you do an ls -la and press enter, one of the files you'll see is a4lwordpress.sql. So now we have a copy of the WordPress database containing our blog post.

The next thing we need to do is take this backup of the database and inject it into the new database that we want to use, the dedicated MariaDB EC2 instance. The command we use for this has two components: the first connects to the MariaDB database instance, and the second takes the backup we've just made and feeds it into that command. The backup contains all the necessary definitions to create a new database and inject the data required. There are some placeholders that we need to change. The database name we're going to use is the same, a4lwordpress, and we still want to be prompted for a password, so -p is what we use. This time, though, we're going to connect using a user called a4lwordpress rather than the root user. The other thing we need to change is that we need to connect to a non-local host: when we used the mysqldump command we didn't specify a host to connect to, which defaulted to localhost, the current machine. This time we're operating on a separate server, the dedicated EC2 instance running the MariaDB database server, A4L-DB-WordPress, and what we need in order to connect to it is its private IPv4 address.

So select the A4L-DB-WordPress instance, look for private IPv4 addresses, and click the icon next to it to copy the private IPv4 address of this separate database server into your clipboard. Then return to the application instance, and we need to replace the placeholder with that value: make sure you leave a space between -h and the cursor, delete the placeholder, and paste in that IP address. So this is going to connect to the separate EC2 instance using its private IP, using the a4lwordpress user; it will prompt us for a password, and it will perform the operation on the a4lwordpress database using the contents of this backup file. Go ahead and press enter and you'll be prompted for a password. Again, it's the same password; this has all been set up as part of the CloudFormation one-click deployment, because this lesson is about the migration process, not setting up a database server, so I've automated that component of the infrastructure. Copy the DB password into your clipboard, go back to the instance, paste it in and press enter. So now we've uploaded our WordPress application database into this separate MariaDB database server.

The next step is to configure WordPress to point at this new database server. To do that, type cd /var/www/html and press enter, and then run sudo nano wp-config.php, which opens the WordPress configuration file in the nano text editor. What we're looking for, if we scroll down, is the line which says define and then DB_HOST. This is the database host that WordPress attempts to connect to, and currently it's set to localhost, which means it will use the database on the same EC2 instance as the application. We're going to delete this localhost value, so delete until we have two single quotes, and then, making sure you still have the private IPv4 address of the separate database instance in your clipboard (if you don't, just copy it again from the EC2 console), paste that in place of localhost. So now you should see DB_HOST set to that private IP address. The private IP address that you use here will be different from mine; you need to use the private IP address of your own A4L-DB-WordPress EC2 instance. Now that you've updated this configuration file, press Ctrl+O and enter to save, and then Ctrl+X to exit.

This now means that the WordPress instance is going to be communicating with the separate MariaDB database instance. Let's verify that: go back to the tab with our WordPress application and do a refresh. If everything's working as expected, the blog should reload successfully, which means this blog is now pointing at the separate MariaDB database instance. To be doubly sure, though, let's go back to the WordPress instance and shut down the local MariaDB database server, using the command sudo service mariadb stop. Type or copy and paste that command in and press enter, and that's going to stop the MariaDB database service running on A4L-WordPress. So now the only MariaDB database that we have running is on the A4L-DB-WordPress EC2 instance. Go back to the WordPress tab and hit refresh, and assuming it loads, as it does in my case, this confirms that WordPress is communicating with the dedicated MariaDB EC2 instance.

Now the reason why I wanted to step you through all these tasks in this demo lesson is that I'm a firm believer that in order to understand best practice architecture you
need to understand bad architecture and as I mentioned in the theory lesson there is almost no justification for running your own self-managed database server on an EC2 instance in almost all situations it's preferable to use the RDS service but I need you to understand exactly how the architecture works when you're self managing a database and how to migrate from a monolithic all-in-one architecture through to having a separate self managed database in the demo lesson that's coming up next in the course you're going to migrate from this through to an RDS instance so that's step two but at this point you've done everything that I wanted you to do in this demo lesson you've implemented the architecture that's on screen now on the right all we need to do is to tidy up all of the infrastructure that we've used within this lesson so to do that it's nice and easy just go back to the cloud formation console make sure that you have the monolith to EC2 DB stack selected click on the delete button and then confirm that deletion and that stack deleting will clean up all of the infrastructure that we've used throughout this demo lesson and it will return the account into the same state as it was at the start of the lesson at this point you've completed all of the tasks that I want you to do so I hope you've enjoyed this demo lesson go ahead and complete this video and when you're ready I'll look forward to you joining me in the next.
Welcome back and in this lesson I want to cover something which can be argued is bad practice to do inside AWS and that's running databases directly on EC2.
As you'll find out in this section of the course there are lots of AWS products which provide database services so running any database on EC2 at best requires some justification.
In this lesson I want to step through why you might want to directly run databases on EC2 and why it's also a bad idea.
It's actually always a bad idea to run databases on EC2.
The argument really is whether the benefits to you or your business outweigh the fact that it is a bad idea.
So let's jump in and take a look at some of the reasons why you should and shouldn't run databases on EC2.
Generally when people think about running databases on EC2 they picture one of two things.
First, a single instance and on this instance you're going to be running a database platform, an application of some kind and perhaps a web server such as Apache.
Or you might picture a simple split architecture where the database is separated from the web server and application.
So you'll have two instances, probably smaller instances than the single large one.
And architecturally I hope this makes sense so far.
So far in the course with the Animals for Life WordPress application stack example we've used the architecture on the left.
A single EC2 instance with all of the application tiers or components on one single instance.
Crucially one single instance running within a single availability zone.
Now if you have a split architecture like on the right you can either have both EC2 instances inside the same availability zone or you could split the instances across two.
So AZA and AZB.
Now when you change the architecture in this way, when you split up the components into separate instances, whether you decide to put those both in the same availability zone or split them, you need to understand that you've introduced a dependency into the architecture.
The dependency that you've introduced is that there needs to be reliable communication between the instance running the application and the database instance.
If not the application won't work.
And if you do decide to split these instances across multiple availability zones then you should also be aware that there is a cost for data when it's transiting between different availability zones in the same region.
It's small but it does exist.
Now that's in contrast to where communications between instances using private IPs in the same availability zone is free.
So that's a lot to think about from an architectural perspective.
But that's what we mean when we talk about running databases on EC2.
This is the architecture.
Generally one or more EC2 instances with at least one of them running the database platform.
Now there are some reasons why you might want to run databases on EC2 in your own environment.
You might need access to the operating system of the database, and the only way that you can have this level of access is to run on EC2, because other AWS database products don't give you OS-level access.
This is one of those things though that you should really question if a client requests it because there aren't many situations where OS level access is really a requirement.
Do they need it?
Do they want it?
Or do they only think that they want it?
So if you have a client or if your business states that they do need OS level access the first thing that you should do is question that statement.
Now there are some database tuning things which can only be done with root level access and because you don't have this level of access with managed database products then these values or these configuration options won't be tuneable.
But in many cases and you'll see this later on in this section AWS does allow you to control a lot of these parameters that historically you would need root access for without having root access.
So again this is one of those situations where you need to question any situation where it's presented to you that you need database root access.
It's worth noting that it's often an application vendor demanding this level of access, not the business themselves.
But again it's often the case that you need to delve into the justifications.
This level of access is often not required and a lot of software vendors now explicitly support AWS's managed database products.
So again verify any suggestion of this level of access.
Now something that is often justified is that you might need to run a database or a database version which AWS don't provide.
This is certainly possible and more so with emerging types of databases or databases with really niche use cases.
You might actually need to implement an application with a particular database that is not supported by AWS and any of its managed database products.
And in that case the only way of servicing that demand is to install that database on EC2.
So that's one often justified reason for running databases on EC2.
Or it might be that a particular project that you're working on has really, really detailed and specific requirements and you need a very specific version of an OS and a very specific version of a DB in combination which AWS don't provide.
Or you might need or want to implement an architecture which AWS also don't provide.
Certain types of replication done in certain ways or at certain times.
Or it could be something as simple as the decision makers in your organization just want a database running on EC2.
You could argue that they're being unreasonable to just demand a database running on EC2 but in many cases you might not have a choice.
So it can always be done.
You can run databases on EC2 as long as you're willing to accept the negatives.
So these are all valid.
Some of them I would question or fight or ask for justification but situations certainly do exist which require you to use databases on EC2.
And I'm stressing these because I've seen tricky exam questions where the right answer is to use a database on EC2.
So I want to make sure that you've got fresh in your mind some of the styles of situations where you might actually want to run a database on EC2.
But now let's talk about why you really shouldn't put a database product on EC2.
Even with the previous screen in mind, even with all of those justifications, you need to be aware of the negatives.
And the first one is the admin overhead.
The admin overhead of managing the EC2 instance as well as the database host, the database server.
Both of these require significant management effort.
Don't underestimate the effort required to keep an EC2 instance patched or keep a database host running at a certain compatible level with your application.
You might not be able to upgrade or you might have to upgrade and keep the database version running in a very narrow range in order to be compatible with the application.
And whenever you perform upgrades or whenever you're fault finding, you need to do it out of core usage hours, which could mean additional time, stress and cost for staff to maintain both of these components.
Also don't forget about backups and disaster recovery management.
So if your business has any disaster recovery planning, running databases on EC2 adds a lot of additional complexity.
And in this area, when you're thinking about backups and DR, many of AWS's managed database products we'll talk about throughout this section include a lot of automation to remove a lot of this admin overhead.
Perhaps one of the most serious limitations though is that you have to keep in mind that EC2 is running in a single availability zone.
So if you're running on an EC2 instance, keep in mind you're running on an EBS volume in an EC2 instance.
Both of those are within a single availability zone.
If that zone fails, access to the database could fail and you need to worry about taking EBS snapshots or taking backups of the database inside the database server and putting those on storage somewhere, maybe S3.
Again, it's all admin overhead and risk that your business needs to be aware of.
Another issue is features.
Some of AWS's database products genuinely are amazing.
A lot of time and effort and money have been put in on your behalf by AWS to make these products actually better than what you can achieve by installing database software on EC2.
So by limiting yourself to running databases on EC2, you're actually missing out on some of the advanced features and we'll be talking about all of those throughout this section of the course.
Another aspect is that EC2 is on or off.
EC2 does not have any concept of serverless because explicitly it is a server.
You're not going to be able to scale down easily or keep up with bursty style demand.
There are some AWS managed database products we'll talk about in this section which can scale up or down rapidly based on load.
And by running a database product on EC2, you do limit your ability to scale and you do set a base minimum cost of whatever the hourly rate is for that particular size of EC2 instance.
So keep that in mind.
So again, if you're being asked to implement this by your business, you should definitely fight this fight and get the business to justify why they want the database product on EC2, because they're missing out on some features and committing themselves to costs that they might not need to incur.
There's also replication.
So if you've got an application that does need replication, there are the skills to set this up, the setup time, the monitoring and checking for its effectiveness and all of this tends to be handled by a lot of AWS's managed database products.
So again, there's a lot of additional admin overhead that you need to keep in mind.
And lastly, we've got performance.
This relates in some way to when I talked about features moments ago.
AWS do invest a considerable amount of time into optimization of their database products and implementing performance based features.
And if you simply take an off the shelf database product and implement it on EC2, you're not going to be able to take advantage of these advanced performance features.
So keep that in mind.
If you do run database software directly on EC2, you're limiting the performance that you can achieve.
But with that out of the way, that's all of the theory and logic that I wanted to cover in this lesson.
So now you have an idea about why you should and why you shouldn't run your own database on an EC2 instance.
In the next lesson, which is a demo, we're going to take the single instance WordPress deployment that we've been using so far in the course, and we're going to evolve it into two separate EC2 instances.
One of these is going to be running Apache and WordPress.
So it's going to be the application server.
And the other is going to be running a database server MariaDB.
Now this kind of evolution is best practice, at least as much as it can ever be best practice to run a self-managed database platform.
Now the reason we're doing this is we want to split up our single monolithic application stack.
We want to get it to the point so the database is not running on the same instance as the application itself.
Because once we've done that, we can move that database into one of AWS's managed database products later in this section.
And that will allow us to take advantage of these features and performance that these products deliver.
It's never a good idea to have a single monolithic application stack when you can avoid it.
So the way that we're running WordPress at the moment is not best practice for an enterprise application.
So by splitting up the application from the database, as we go through the course, it will allow us to scale each of these independently and take advantage independently of different AWS products and services, which can help us improve each component of our application.
So with that being said, go ahead and finish up this video.
And then when you're ready, you can join me in the next lesson, which is going to be a demo where we're going to split up this monolithic WordPress architecture into two separate compute instances.
Welcome to this lesson where I want to provide a really quick theoretical introduction to Acid and Base, which are two database transaction models that you might encounter in the exam and in the real world.
Now this might seem a little abstract, but it does feature on the exam, and I promise in real world usage knowing this is a database superpower.
So let's jump in and get started.
Acid and Base are both acronyms and I'll explain what they stand for in a moment.
But they are both database transaction models.
They define a few things about transactions to and from a database, and this governs how the database system itself is architected.
At a real foundational level, there's a computer science theorem called the CAP theorem, and it stands for consistency, availability, and partition tolerance.
Now let's explore each of these quickly because they really matter.
Consistency means that every read to a database will receive the most recent write or it will get an error.
On the other hand, availability means that every request will receive a non-error response, but without the guarantee that it contains the most recent write, and that's important.
Partition tolerance means that the system can be made of multiple network partitions, and the system continues to operate even if there are a number of dropped messages or errors between these network nodes.
Now the CAP theorem states that any database product is only capable of delivering a maximum of two of these different factors.
One reason for this is that if you imagine that you have a database with many different nodes, all of these are on a network.
Imagine if communication fails between some of the nodes or if any of the nodes fail.
Well you have two choices if somebody reads from that database.
You can cancel the operation and thus decrease the availability but ensure the consistency, or you can proceed with the operation and improve the availability but risk the consistency.
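That choice between cancelling and proceeding can be sketched in a few lines of Python. This is purely a toy illustration (the node names and values are invented, and here the replica is simply flagged as stale, whereas a real replica would detect the broken link itself): a read can either refuse to answer, preserving consistency, or answer anyway, preserving availability.

```python
# Two toy "nodes" holding the same item; a network partition happened
# before the latest write reached the replica.
class Node:
    def __init__(self, value, stale=False):
        self.value = value
        self.stale = stale  # flagged for illustration; a real replica detects this

leader = Node("v2")                # has the most recent write
replica = Node("v1", stale=True)   # cut off before "v2" arrived

def read(node, prefer="availability"):
    if prefer == "consistency" and node.stale:
        # choose consistency: refuse rather than risk returning old data
        raise RuntimeError("partition: cannot guarantee the latest write")
    # choose availability: always answer, possibly with stale data
    return node.value

print(read(replica))   # 'v1' - available, but not the most recent write
try:
    read(replica, prefer="consistency")
except RuntimeError as e:
    print(e)           # choosing consistency means this read fails instead
```

Neither behaviour is wrong; they're just different trade-offs, and that's exactly the split between the ACID and BASE models covered next.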
So as I just mentioned, it's widely regarded as impossible to deliver a database platform which provides more than two of these three different elements.
So if you have a database system which has multiple nodes and if a network is involved, then you generally have a choice to provide either consistency or availability, and the transaction models of ACID and BASE choose different trade-offs.
ACID focuses on consistency and BASE focuses on availability.
Now there is some nuance here and some additional detail but this is a high-level introduction.
I'm only covering what's essential to know for the exam.
So let's quickly step through the trade-offs which each of these makes and we're going to start off with ACID.
ACID means that transactions are atomic, transactions are also consistent, transactions are also isolated, and then finally, transactions are durable.
And let's get the exam power-up out of the way.
Generally if you see ACID mentioned, then it's probably referring to any of the RDS databases.
These are generally ACID-based and ACID limits the ability of a database to scale and I want to step through some of the reasons why.
Now I'm going to keep this high-level but I've included some links attached to this lesson if you want to read about this in additional detail.
In this lesson though I'm going to keep it to what is absolutely critical for the exam.
So let's step through each of these individually.
Atomic means that for a transaction either all parts of a transaction are successful or none of the parts of a transaction are successful.
Consider if you run a bank and you want to transfer $10 from account A to account B.
That transaction will have two parts.
Part one will remove $10 from account A and part two will add $10 to account B.
Now you don't want a situation where the first part or the second part of that transaction can succeed on its own and the other part can fail.
Either both parts of a transaction should be successful or no parts of the transaction should be applied and that's what atomic means.
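SQLite, which ships with Python and is ACID-compliant, makes this easy to demonstrate. The accounts and amounts below are invented for illustration; the point is that a failure between the two halves of the transfer rolls back the half that already ran.

```python
import sqlite3

# In-memory database standing in for the bank; the accounts are made up.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
db.execute("INSERT INTO accounts VALUES ('A', 100), ('B', 0)")
db.commit()

def transfer(amount, crash=False):
    # "with db" wraps everything inside it in a single transaction:
    # commit on success, automatic rollback if an exception escapes.
    with db:
        db.execute("UPDATE accounts SET balance = balance - ? WHERE name = 'A'", (amount,))
        if crash:
            raise RuntimeError("power failure mid-transfer")
        db.execute("UPDATE accounts SET balance = balance + ? WHERE name = 'B'", (amount,))

def balances():
    return dict(db.execute("SELECT name, balance FROM accounts"))

try:
    transfer(10, crash=True)   # fails between the two halves...
except RuntimeError:
    pass
print(balances())              # {'A': 100, 'B': 0} - both halves rolled back

transfer(10)                   # no crash: both halves applied together
print(balances())              # {'A': 90, 'B': 10}
```

Without the transaction, the crash would have left account A ten dollars short with the money nowhere; atomicity guarantees all-or-nothing.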
Now consistent means that transactions applied to the database move the database from one valid state to another.
Nothing in between is allowed.
In relational databases there may well be links between tables, where an item in one table must have a corresponding item in another, or where values might need to be within certain ranges, and this element just means that all transactions need to move the database from one valid state to another, as per the rules of that database.
Isolated means that because transactions to a database are often executed in parallel they need not to interfere with each other.
Isolation ensures that concurrent executions of transactions leave the database in the same state that would have been obtained if transactions were executed sequentially.
So this is essential for a database to be able to run lots of different transactions at the same time maybe from different applications or different users.
Each of them need to execute in full as they would do if they were the only transaction running on that database.
They need not to interfere with each other and then finally we have durable which means that once a transaction has been committed it will remain committed even in the case of a system failure.
Once the database tells the application that the transaction is complete and committed, that data is stored somewhere durable, so a system failure, a power failure, or the restart of a database server or node won't impact the data.
Now most relational database platforms use ACID-based transactions.
It's why financial institutions generally use them, because it implements a very rigid way of managing data and transactions on that data, but because of these rigid rules it does limit scalability.
Next we have BASE, which stands for Basically Available, Soft state and Eventually consistent, and again this is super high level and I've included some links attached to this lesson with more information.
Now it's also going to sound like I'm making fun of this transaction model because some of these things seem fairly odd but just stick with me and I'll explain all of the different components.
Basically available means that read and write operations are available as much as possible but without any consistency guarantees.
So reads and writes are kind of a "maybe".
Essentially, rather than enforcing immediate consistency, BASE-modelled NoSQL databases will ensure availability of data by spreading and replicating that data across all of the different nodes of that database.
There isn't really an aim within the database to guarantee anything to do with consistency; it does its best to be consistent, but there's no guarantee.
Now soft state is another one which seems a tiny bit odd in a way.
It means that BASE breaks away from the concept of a database which enforces its own consistency; instead it delegates that responsibility to developers.
Your application needs to be aware of consistency and state, and work around the database.
If you need immediate consistency, so if you need a read operation to always have access to all of the writes which occurred before it, and if the database optionally allows it, then your application needs to specifically ask for it.
Otherwise your application has to tolerate the fact that what it reads might not be what another instance of that application has previously written.
So with soft state databases your application needs to deal with the possibility that the data that you're reading isn't the same data that was written moments ago.
Now all of these are fairly fuzzy and do overlap, but lastly we have the fact that BASE does not enforce immediate consistency; it means that consistency might happen eventually.
If we wait long enough, then what we read will match what has been previously written.
Now this is important to understand, because generally, by default, a BASE transaction model means that any reads to a database are eventually consistent, so applications do need to tolerate the fact that reads might not always have the data from previous writes.
Many databases are capable of providing both eventually consistent and immediately consistent reads but again the application has to have an awareness of this and explicitly ask the database for consistent reads.
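A toy model of this behaviour can make it concrete. Everything here is invented for illustration (nothing is DynamoDB's API): writes land on a leader and reach the replica only when replication runs, and the application explicitly chooses consistent or eventual reads.

```python
# Toy model of eventual consistency: writes go to a leader node and reach
# the replica only when "replication" runs. All names here are invented.
leader, replica = {}, {}
pending = []   # keys written to the leader but not yet copied across

def write(key, value):
    leader[key] = value
    pending.append(key)

def replicate():
    # in a real system this happens continuously, with some lag
    while pending:
        key = pending.pop()
        replica[key] = leader[key]

def read(key, consistent=False):
    # a consistent read goes to the leader; an eventual read may be stale
    return leader.get(key) if consistent else replica.get(key)

write("views", 1)
print(read("views"))                    # None - the replica hasn't caught up
print(read("views", consistent=True))   # 1 - the application asked for it
replicate()
print(read("views"))                    # 1 - eventually, reads catch up
```

Notice that the stale read isn't an error; the application simply has to be written to tolerate it, which is exactly the responsibility shift BASE makes.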
Now it sounds like BASE transactions are pretty bad, right?
Well, not really.
Databases which use BASE are actually highly scalable and can deliver really high performance, because they don't have to worry about pesky, annoying things like consistency within the database; they offload that to the applications.
Now DynamoDB within AWS is an example of a database which normally works in a BASE-like way.
It offers both eventually and immediately consistent reads, but your application has to be aware of that.
Now DynamoDB also offers some additional features which offer acid functionality such as DynamoDB transactions so that's something else to keep in mind.
Now for the exam specifically I have a number of useful defaults.
If you see the term BASE mentioned, then you can safely assume that it means a NoSQL-style database.
If you see the term ACID mentioned, then you can safely assume as a default that it means an RDS database.
But if you see NoSQL or DynamoDB mentioned together with ACID, then it might be referring to DynamoDB transactions, and that's something to keep in mind.
Now that's everything I wanted to cover in this high-level lesson about the different transaction models.
This topic is relatively theoretical and pretty deep, and there's a lot of extra reading, but I just wanted to cover the essentials of what you need for the exam, so I've covered all of those facts in this lesson.
At this point it's the end of the lesson, so thanks for watching.
Go ahead and complete the video, and when you're ready, I look forward to you joining me in the next.
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
Now, there are other types of database platforms, NoSQL platforms, and this doesn't represent one single way of doing things.
So I want to quickly step through some of the common examples of NoSQL databases, or non-relational databases.
The first type of database in the NoSQL category that I want to introduce is key value databases.
The title gives away the structure.
Key value databases consist of sets of keys and values.
There's generally no concept of structure.
It's just a list of keys and value pairs.
In this case, it's a key value database for one of the animals for life rescue centers.
It stores the date and time and a sensor reading from a feeding sensor, recording the number of cookies removed from the feeder during the previous 60 minutes.
So essentially, the key on the left stores the date and time, and on the right is the number of cookies eaten as detected by the sensor during the previous 60 minutes.
So that's it for this type of database.
It's nothing more complex than that.
It's just a list of key value pairs.
As long as every single key is unique, then the value doesn't matter.
It has no real schema nor does it have any real structure because there are no tables or table relationships.
Some key value databases allow you to create separate lists of keys and values and present them as tables, but they're only really used to divide data.
There are no links between them.
This makes key value databases really scalable because sections of this data could be split onto different servers.
In general, key value databases are just really fast.
It's simple data with no structure.
There isn't much that gets in the way between giving the data to the database and it being written to disk.
For key value databases, only the key matters.
You write a value to a key and you read a value from a key.
The value is opaque to the database.
It could be text, it could be JSON, it could be a cat picture, it doesn't matter.
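Both properties, splitting data across servers purely by key and treating the value as opaque, can be sketched in a few lines. This is an illustrative toy, not any real product; the server list and keys are invented.

```python
from hashlib import md5

# Toy key value store split across three "servers" by hashing the key.
servers = [{}, {}, {}]

def shard_for(key):
    # the key alone decides placement, which is what makes scaling easy
    return servers[int(md5(key.encode()).hexdigest(), 16) % len(servers)]

def put(key, value):
    shard_for(key)[key] = value   # the value is opaque: any type at all

def get(key):
    return shard_for(key)[key]

put("2023-06-01T13:00", 7)        # a sensor reading keyed by date and time
put("config", {"interval": 60})   # a JSON-style blob - the store doesn't care
print(get("2023-06-01T13:00"))    # 7
```

Because placement depends only on the key, adding more servers just means re-spreading keys; no table relationships have to be kept together, which is why key value stores scale so well.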
In the exam, look out for any question scenarios which present simple requirements or mention data which is just names and values or pairs or keys and values.
Look out for questions which suggest no structure.
If you see any of these type of scenarios, then key value stores are generally really appropriate.
Key value stores are also used for in-memory caching.
So if you see any questions in the exam that talk about in-memory caching, then key value stores are often the right way to go.
And I'll be introducing some products later in the course which do provide in-memory key value storage.
Okay, so let's move on.
And the next type of database that I want to talk about is actually a variation of the previous model, so a variation on key value.
And it's called a wide column store.
Now, this might look familiar to start with.
Each row or item has one or more keys.
Generally, one of them is called the partition key.
And then optionally, you can have additional keys as well as the partition key.
Now, in DynamoDB, which is an AWS example of this type of database, this secondary key is called the sort key or the range key.
It differs depending on the database, but most examples of wide column stores generally have one key as a minimum, which is the partition key.
And then optionally, every single row or item in that database can have additional keys.
Now, that's really the only rigid part of a wide column store.
Every item in a table has to have the same key layout.
So that's one key or more keys.
And they just need to be unique to that table.
Wide column stores offer groupings of items called tables.
But they're still not the same type of tables as in relational database products.
They're just groupings of data.
Every item in a table can also have attributes.
But, and this is really important, they don't have to be the same between items.
Remember how in relational database management systems, every table had attributes and then every row in that table had to have a value for every one of those attributes.
That is not the case for most NoSQL databases, and specifically wide column stores, because that's what we're talking about now.
In fact, every item can have any attribute.
It could have all of the attributes, so all of the same attributes between all of the items.
It could have a mixture, so mix and matching attributes on different items.
Or an item could even have no attributes.
There is no schema, no fixed structure on the attribute side.
It's normally partially opaque for most database operations.
The only thing that matters in a wide column store is that every item inside a table has to use the same key structure and it has to have a unique key.
So whether that's a single partition key or whether it's a composite key, so a partition key and something else.
If it's a single key, it has to be unique.
If it's a composite key, the combination of both of those values has to be unique.
That's the only rule for placing data into a table using wide column stores.
Now DynamoDB inside AWS is an example of this type of database.
So DynamoDB is a wide column store.
Now this type of database has many users.
It's very fast.
It's super scalable.
And as long as you don't need to run relational operations such as SQL commands on the database, it often makes the perfect database product to take advantage of, which is one of the reasons why DynamoDB features so heavily amongst many web scale or large scale projects.
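These wide column rules can be sketched in a few lines of Python. This is illustrative only, not the DynamoDB API; the class and its item layout are made up for the example:

```python
# Sketch of wide-column semantics: every item shares the same key
# structure (partition key, optional sort key), attributes vary freely.

class WideColumnTable:
    def __init__(self, partition_key, sort_key=None):
        self.partition_key = partition_key
        self.sort_key = sort_key
        self.items = {}

    def put_item(self, item):
        # The composite key is the partition key plus the optional sort
        # key; the combination must be unique within the table.
        key = (item[self.partition_key],
               item[self.sort_key] if self.sort_key else None)
        self.items[key] = item  # same key again = overwrite (upsert)

orders = WideColumnTable(partition_key="customer", sort_key="order_id")
orders.put_item({"customer": "julie", "order_id": 1, "colour": "red"})
# Different attributes on the next item are fine - there is no schema
# on the attribute side, only on the key structure.
orders.put_item({"customer": "julie", "order_id": 2, "size": "XL"})
```

Note that both items share the customer "julie"; they remain distinct because the composite key, partition key plus sort key together, differs.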
Okay, so let's move on.
And next I want to talk about a document database.
And this is a type of NoSQL database that's designed to store and query data as documents.
Documents are generally formatted using a structure such as JSON or XML.
But often the structure can be different between documents in the same database.
You can think of a document database almost like an extension of a key value store where each document is interacted with via an ID that's unique to that document.
But the value, the document contents, are exposed to the database allowing you to interact with it.
Document databases work best for scenarios like order databases or collections or contact style databases, situations where you generally interact with the data as a document.
Document databases are also great when you need to interact with deep attributes, so nested data items within a document structure.
The document model works well with use cases such as catalogs, user profiles, and lots of different content management systems where each document is unique but it changes over time.
So it might have different versions.
Documents might be linked together in hierarchical structures or when you're linking different pieces of content in a content management system.
For any use cases like this, document style databases are perfect.
Each document has a unique ID and the database has access to the structure inside the document.
Document databases provide flexible indexing so you can run really powerful queries against the data that could be nested deep inside a document.
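A minimal sketch of querying nested document attributes in plain Python, assuming a made-up `query` helper rather than any real document database API:

```python
# Two documents, each addressed by a unique ID; the database can see
# the nested structure inside each document, not just an opaque value.
documents = {
    "user-1": {"name": "Natalie", "profile": {"city": "London"}},
    "user-2": {"name": "Greg",    "profile": {"city": "Leeds"}},
}

def query(path, value):
    """Return IDs of documents whose nested attribute at `path` (dot
    separated, e.g. 'profile.city') equals `value`."""
    results = []
    for doc_id, doc in documents.items():
        node = doc
        for key in path.split("."):
            node = node.get(key, {}) if isinstance(node, dict) else {}
        if node == value:
            results.append(doc_id)
    return results

print(query("profile.city", "Leeds"))  # ['user-2']
```

A real document database would index those nested paths so such queries don't scan every document, but the addressing model is the same.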
Now let's move on.
Column databases are the next database type that I want to discuss.
And understanding the power of these databases requires knowing the limitations of their counterpart, row-based databases, which is what most SQL-based databases use.
Row-based databases are where you interact with data based on rows.
So in this example we have an orders table.
It has order ID, the product ordered, color, size, and price.
For every order we have a row and those rows are stored on disk together.
If you needed to read the price of one order from the database, you read the whole row from disk.
If you don't have indexes or shortcuts, you'll have to find that row first and that could mean scanning through rows and rows of data before you reach the one that you want to query.
Now if you want to do a query which operates over lots of rows, for example you wanted to query all the sizes of every order, then you need to go through all of the rows, finding the size of each.
Row-based databases are ideal when you operate on rows, creating a row, updating a row, or deleting rows.
Row-based databases are often called OLTP or Online Transaction Processing Databases and they are ideal as the name suggests for systems which are performing transactions.
So order databases, contact databases, stock databases, things which deal in rows and items where these rows and items are constantly accessed, modified, and removed.
Now column-based databases handle things very differently.
Instead of storing data in rows on disk, they store it based on columns.
The data is the same but it's grouped together on disk based on column.
So every order value is stored together, every product item, every color, size, and price, all grouped by the column that the data is in.
Now this means two things.
First, it makes it very, very inefficient for transaction style processing which is generally operating on whole rows at a time.
But this very same aspect makes column databases really good for reporting.
So if your queries relate to just one particular column because that whole column is stored on disk grouped together, then that's really efficient.
You could perform a query to retrieve all products sold during a period or perform a query which looks for all sizes sold in total ever and looks to build up some intelligence around which are sold most and which are sold least.
With column store databases, it's really efficient to do this style of querying, reporting style querying.
An example of a column based database in AWS is Redshift which is a data warehousing product and that name gives it away.
Generally what you'll do is take the data from an OLTP database, a row based database, and you'll shift that into a column based database when you're wanting to perform reporting or analytics.
So generally column store databases are really well suited to reporting and analytics.
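The row versus column trade-off can be sketched in plain Python, with a list of dicts standing in for row storage and a dict of lists standing in for columnar storage (the order data is invented for the example):

```python
# Row layout: each whole order is stored together, ideal for OLTP
# operations that create, update or delete one order at a time.
row_store = [
    {"order": 1, "product": "shirt", "size": "M", "price": 10},
    {"order": 2, "product": "shirt", "size": "L", "price": 12},
    {"order": 3, "product": "hat",   "size": "S", "price": 7},
]

# Column layout: each attribute is grouped together, so a reporting
# query over one attribute touches only that one contiguous column.
column_store = {
    "order":   [1, 2, 3],
    "product": ["shirt", "shirt", "hat"],
    "size":    ["M", "L", "S"],
    "price":   [10, 12, 7],
}

total_rows = sum(r["price"] for r in row_store)  # visits every whole row
total_cols = sum(column_store["price"])          # reads just one column
```

Both totals are the same; the difference is how much data each layout has to touch to compute them, which is exactly why warehousing products store data column-wise.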
Now lastly I want to talk about graph style databases.
Earlier in the lesson I talked about tables and keys and how relational database systems handle the relationships by linking the keys of different tables.
Well with graph databases, relationships between things are formally defined and stored in the database itself along with the data.
They're not calculated each and every time you run a query.
And this makes them great for relationship driven data.
For example social media or HR systems.
Consider this data, three people, two companies and a city.
These are known as nodes inside a graph database; nodes are nouns, so objects or things.
Nodes can have properties which are simple key value pairs of data and these are attached to the nodes.
So far this looks very much like a normal database, nothing is new so far.
But with graph databases there are also relationships between the nodes which are known as edges.
Now these edges have a name and a direction.
So Natalie works for XYZ corp and Greg works for both XYZ corp and Acme widgets.
Relationships themselves can also have attached data, so name value pairs.
In this particular example we might want to store the start date of any employment relationship.
A graph database can store a massive amount of complex relationships between data or between nodes inside a database and that's what's key.
These relationships are actually stored inside the database as well as the data.
A query to pull up details on all employees of XYZ Corp would run much more quickly than on a standard SQL database, because the relationship data is simply read out of the database, just like the actual data.
With a relational style database you'd have to retrieve the data, and the relationships between the tables are computed when you execute the query.
So it's a really inefficient process with relational database systems: those relationships are fixed, and computed each and every time a query is run.
With a graph based database those relationships are fluid and dynamic; they're stored in the database along with the data, which means that when you're interacting with data and looking to take advantage of these fluid relationships, it's much more efficient to use a graph style database.
Now using graph databases is very much beyond the scope of this course, but I want you to be aware of them because you might see questions in the exam which mention the technology, and you need to be able to identify or eliminate answers based on the scenario and the type of database that the question is looking to implement.
So if you see mention of social media in an exam or systems with complex relationships then you should think about graph databases first.
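The node-and-edge model described above can be sketched in plain Python. This is a toy structure for illustration, not a real graph database engine; the node and edge names are made up:

```python
# Nodes are nouns (people, companies) with key/value properties.
nodes = {
    "natalie": {"type": "person"},
    "greg":    {"type": "person"},
    "xyzcorp": {"type": "company"},
    "acme":    {"type": "company"},
}

# Edges are named, directed relationships which can carry their own
# data - here the start date of each employment relationship.
# (from_node, edge_name, to_node, edge_properties)
edges = [
    ("natalie", "WORKS_FOR", "xyzcorp", {"start": "2020-01-06"}),
    ("greg",    "WORKS_FOR", "xyzcorp", {"start": "2019-03-11"}),
    ("greg",    "WORKS_FOR", "acme",    {"start": "2021-07-01"}),
]

def employees_of(company):
    # The relationships are stored, so this is a direct scan of stored
    # edges rather than a join computed from keys at query time.
    return sorted(f for (f, name, t, _) in edges
                  if name == "WORKS_FOR" and t == company)

print(employees_of("xyzcorp"))  # ['greg', 'natalie']
```

A real graph database adds indexing and a traversal language on top, but the essential point stands: the edges live in the database alongside the nodes.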
Now that's all I wanted to cover in this lesson.
I know it's been abstract and high level.
I wanted to try and make it as brief as possible.
I know I didn't really succeed because we had a lot to cover but I want this to be a foundational set of theory that you can use throughout the databases section and it will help you in the exam.
For now though that's everything I wanted to cover in this lesson so go ahead complete the video and when you're ready you can join me in the next.
Welcome back and in this first technical lesson of this section of the course, I wanted to provide a quick fundamentals lesson on databases.
If you already have database experience then you can play me on super fast speed and think of this lesson as a good confirmation of the skills that you already have.
If you don't have database experience though, that's okay.
This lesson will introduce just enough knowledge to get you through the course and I'll include additional reading material to get you up to speed with databases in general.
Now we do have a fair amount to get through so let's jump in and get started.
Databases are systems which store and manage data.
But there are a number of different types of database systems and crucial differences between how data is physically stored on disk and how it's managed on disk and in memory, as well as how the systems retrieve data and present it to the user.
Database systems are very broadly split into relational and non-relational.
Relational systems are often referred to as SQL or sequel systems.
Now this is actually wrong because SQL is a language which is used to store, update and retrieve data.
It's known as the structured query language and it's a feature of most relational database platforms.
Strictly speaking, it's different than the term relational database management system but most people use the two interchangeably.
So if you see or hear the term SQL or RDBMS which is relational database management system, they're all referring to relational database platforms.
Most people use them interchangeably.
Now one of the key identifiable characteristics of relational database systems is that they have a structure to their data.
So that's inside and between database tables and I'll cover that in a moment.
The structure of a database table is known as a schema and with relational database systems it's fixed or rigid.
That means it's defined in advance before you put any data into the system.
A schema defines the names of things, valid values of things and the types of data which are stored and where.
More importantly, with relational database systems there's also a fixed relationship between tables.
So that's fixed and also defined in advance before any data is entered into the system.
Now NoSQL on the other hand.
Well let's start by making something clear.
NoSQL isn't one single thing.
NoSQL, as the name suggests, is everything which doesn't fit into the SQL mold.
Everything which isn't relational.
But that represents a large set of alternative database models which I'll cover in this lesson.
One common major difference which applies to most NoSQL database models is that generally there is a much more relaxed concept of a schema.
Generally they have weak schemas or no schemas at all, and relationships between tables are also handled very differently.
Both of these impact the situations that a particular model is right for and that's something that you need to understand at a high level for the exam and also when you're picking a database model for use in the real world.
Before I talk about the different database models I wanted to visually give you an idea of how relational database management systems known as RDBMSs or SQL systems conceptualize the data that you store within them.
Consider an example of a simple pet database.
You have three humans, and for those three humans you want to record the pets that those humans are owned by.
The key component of any SQL based database system is a table.
Now every table has columns and these are known as attributes.
Each column has a name, known as the attribute name, and then within each row of that table each column has to have a value, and this is known as the attribute value.
So in this table, for example, the columns are fname (first name), lname (last name) and age, and for each of the rows 1, 2 and 3, the row has an attribute value for each of those columns.
Now generally the way that data is modeled in a relational database management system or a SQL database system is that data which relates together is stored within a table.
So in this case all of the data on the humans is stored within one table.
Every row in the table has to be uniquely identifiable and so we define something that's known as a primary key.
This is unique in the table and every row of that table has to have a unique value for this attribute.
So note in this table how every row has a unique value 1, 2 and 3 for this primary key.
Now with this database model we've also got a similar table for the animals.
So we've got Whiskers and Woofy, and they also have a primary key that's been defined, which is the animal ID or AID.
And this primary key on this table also has to have a unique value in every row on the table.
So in this case Whiskers is animal ID 1 and Woofy is animal ID 2.
Each table in a relational database management system can have different attributes but for a particular table every row in that table needs to have a value stored for every attribute in that table.
So see how the animals table has name and type, whereas the human table has first name, last name and age.
But note how in both tables for every row every attribute has to have a value.
Because SQL systems are relational we generally define relationships between the tables.
Now this is a join table.
It makes it easy to have many to many relationships.
So a human could have many animals and each animal can have many human minions.
A join table has what's known as a composite key which is a key formed of two parts.
And for composite keys, it's the two parts together which have to be unique.
So notice how the second and third rows have the same animal ID.
That's fine because the human ID is different.
As long as the composite key in its entirety is unique that's also fine.
Now the keys in different tables are how the relationships between the tables are defined.
So in this example the human table has a relationship with the join table.
It allows each human to have multiple animals and each animal to have multiple humans.
In this example the animal ID of 2, which is Woofy, is linked to human IDs 2 and 3, which are Julie and James.
They're both Woofy's minions, because that doggo needs double the amount of tasty treats.
Now all these keys and the relationships are defined in advance.
This is done using the schema.
It's fixed and it's very difficult to change after the first piece of data goes in.
The fact that this schema is so fixed and has to be declared in advance makes it difficult for a SQL or relational system to store any data which has rapidly changing relationships.
And a good example of this is a social network such as Facebook where relationships change all the time.
So this is a simple example of a relational database system.
It generally has multiple tables, a table stores data which is related so humans and animals.
Tables have fixed schemas.
They have attributes.
They have rows.
Each row has a unique primary key value and has to contain some value for all of the attributes in the table.
And in those tables they have relationships between each other which are also fixed and defined in advance.
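The humans/animals model above can be sketched with Python's built-in sqlite3 module. The table and column names here are my own shorthand for the example:

```python
import sqlite3

# In-memory relational database with a fixed schema declared up front:
# two entity tables plus a join table with a composite primary key.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE humans  (hid INTEGER PRIMARY KEY,
                          fname TEXT, lname TEXT, age INTEGER);
    CREATE TABLE animals (aid INTEGER PRIMARY KEY, name TEXT, type TEXT);
    CREATE TABLE minions (hid INTEGER, aid INTEGER,
                          PRIMARY KEY (hid, aid));  -- composite key
""")
db.executemany("INSERT INTO humans VALUES (?, ?, ?, ?)",
               [(1, "Natalie", "A", 30), (2, "Julie", "B", 32),
                (3, "James", "C", 34)])
db.executemany("INSERT INTO animals VALUES (?, ?, ?)",
               [(1, "Whiskers", "cat"), (2, "Woofy", "dog")])
# Rows (2,2) and (3,2) share an animal ID; the composite key is still
# unique because the human ID differs.
db.executemany("INSERT INTO minions VALUES (?, ?)",
               [(1, 1), (2, 2), (3, 2)])

# The many-to-many relationship is computed at query time via the join.
minions_of_woofy = db.execute("""
    SELECT h.fname FROM humans h
    JOIN minions m ON m.hid = h.hid
    WHERE m.aid = 2 ORDER BY h.fname
""").fetchall()
print(minions_of_woofy)  # [('James',), ('Julie',)]
```

Notice that the relationship itself isn't stored anywhere; it's recomputed by the JOIN every time the query runs, which is exactly the contrast drawn with graph databases earlier in the lesson.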
So this is SQL.
This is relational database modelling.
Okay so this is the end of part one of this lesson.
It was getting a little bit on the long side and so I wanted to add a break.
It's an opportunity just to take a rest or grab a coffee.
Part two will be continuing immediately from the end of part one.
So go ahead complete the video and when you're ready join me in part two.
Welcome back and in this lesson I want to talk through how we implement DNSSEC using Route 53.
Now if you haven't already watched my DNS and DNSSEC fundamentals video series you should pause this video and watch those before continuing.
Assuming that you have let's jump in and get started.
Now you should be familiar with this architecture.
This is how Route 53 works normally.
In this example I'm using the animalsforlife.org domain, so a query against this would start with our laptop, go to a DNS resolver, then to the root servers looking for details of the .org top level domain, then to the .org top level domain name servers looking for animalsforlife.org, and then it would proceed to the four name servers which are hosting the animalsforlife.org zone using Route 53.
On the right hand side here we have an AWS VPC using the plus two address, which is the Route 53 resolver, and those instances can query the animalsforlife.org domain from inside the VPC.
Now enabling DNSSEC on a Route 53 hosted zone is done from either the Route 53 console UI or the CLI and once initiated the process starts with KMS.
This part can either be done separately or as part of enabling DNSSEC signing for the hosted zone but in either case an asymmetric key pair is created within KMS meaning a public part and a private part.
Now you can think of these conceptually as the key signing keys or KSKs but in actual fact the KSK is created from these keys.
These aren't the actual keys but this is a nuance which isn't required at this level.
So these keys are used to create the public and private key signing keys which Route 53 uses and these keys need to be in the US East 1 region that's really important so keep that in mind.
Next Route 53 creates the zone signing keys internally.
This is really important to understand both the creation and the management of the zone signing keys is handled internally within Route 53.
KMS isn't involved.
Next Route 53 adds the key signing key and the zone signing key public parts into DNSKEY records within the hosted zone.
These tell any DNSSEC resolvers which public keys to use to verify the signatures on any other records in this zone.
Next, the private key signing key is used to sign those DNSKEY records and create the RRSIG DNSKEY record, and these signatures mean that any DNSSEC resolver can verify that the DNSKEY records are valid and unchanged.
Now at this point that's signing within the zone configured which is step one.
Next Route 53 has to establish the chain of trust with the parent zone.
The parent zone needs to add a DS record or delegated signer record which is a hash of the public part of the key signing key for this zone and so we need to make this happen.
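Conceptually, the DS record holds a digest of the child zone's public KSK. Here is a heavily simplified Python sketch of that idea; the real digest covers the owner name plus the DNSKEY RDATA in wire format, and the key material below is a made-up placeholder:

```python
import hashlib

# Hypothetical public key signing key material (placeholder string,
# not a real key). In real DNSSEC the digest input is the owner name
# concatenated with the DNSKEY RDATA in wire format.
public_ksk = "mF8xExamplePublicKeyMaterialQp2=="

# The parent zone stores this digest in its DS record, which is how the
# chain of trust from .org down to the child zone is established.
ds_digest = hashlib.sha256(public_ksk.encode()).hexdigest().upper()
print(ds_digest[:16])  # the leading bytes of the digest the parent holds
```

The point of the sketch is simply that the parent never stores the child's key itself, only a fixed-length hash of it, so any change to the child's KSK requires the DS record to be updated too.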
Now how we do this depends on if the domain is registered via Route 53.
If so the registered domains area of the Route 53 console or the equivalent CLI command can be used to make this change.
Route 53 will liaise with the appropriate top level domain and add the delegated signer record.
Now if we didn't register the domain using Route 53 and are instead just using it to host the zone then we're going to need to perform this step manually.
Once done the top level domain in this case.org will trust this domain via the delegated signer record which as I mentioned is a hash of the domains public key signing key and the domain zone will sign all records within it either using the key signing or zone signing keys.
As part of enabling this you should also make sure to configure CloudWatch alarms.
Specifically, create alarms for DNSSEC internal failure and DNSSEC key signing keys needing action.
Both of these indicate a DNSSEC issue with the zone which needs to be resolved urgently.
Either an issue with the key signing key itself or a problem interacting with KMS.
Lastly you might want to consider enabling DNSSEC validation for VPCs.
This means for any DNSSEC enabled zones if any records fail validation due to a mismatch signature or otherwise not being trusted they won't be returned.
This doesn't impact non-DNSSEC enabled zones which will always return results and this is how to work with the Route 53 implementation of DNSSEC.
What I wanted to do now is step you through an actual implementation of DNSSEC for a hosted zone within Route 53 and to do that we're going to need to move across to my AWS console.
Okay so now we're at the AWS console and I'm going to step you through an example of enabling DNSSEC on a Route 53 domain and to get started I'm going to make sure I'm in an AWS account where I have admin permissions.
In this case I'm logged in as the IAM admin user, which is an IAM identity with admin permissions.
As always I'm going to make sure that I have the Northern Virginia region selected and once I've done that I'm going to go ahead and open Route 53 in a new tab.
In my case it's already inside recently visited services if it's not you can just search for it in the search box at the top but I'm going to go ahead and open Route 53.
So I'm going to go to the Route 53 console and click on hosted zones, and in my case I've got two hosted zones: animalsforlife.org and animalsforlife1337.org.
I'm going to go ahead and enable DNSSEC on animalsforlife.org, so I'm going to go inside this hosted zone.
Now if I move across to my command prompt and run this command, dig animalsforlife.org DNSKEY +dnssec, this will query the domain looking for any DNSKEY records using DNSSEC, and as you can see there are no DNSSEC results returned, which is logical because this domain is not yet enabled for DNSSEC.
So moving back to this console I'm going to click on DNSSEC signing under the domain and then click on enable DNSSEC signing.
Now if this were a production domain the order of these steps really matters and you need to make sure that you wait for certain time periods before conducting each of these steps.
Specifically, you need to make sure that you're making changes taking into consideration the TTL values within your domain.
I'll include a link attached to this video which details all of the time critical prerequisites that you need to consider before enabling DNSSEC signing.
In my case I don't need to worry about that because this is a fresh empty domain.
The first thing we're going to do is create a key signing key and as I mentioned earlier in this video this is done using a KMS key.
So the first thing I'm going to do is to specify a KSK name so a key signing key name and I'm going to call it A4L KSK for Animals for Life key signing key.
Next you'll need to decide on which key to use within KMS to create this key signing key.
Now unfortunately the user interface is a little bit inconsistent.
AWS have decided to rename CMKs to KMS keys so you might see the interface looking slightly different when you're doing this video.
Regardless you need to create a KMS key so check the box saying create customer managed CMK or create KMS key depending on what state the user interface is in and you'll need to give a name to this key and again this is creating an asymmetric KMS key.
So I'm going to call it A4L KSK and then KMS key and once I've done that I can go ahead and click on create KSK and enable signing.
Now behind the scenes this is creating an asymmetric KMS key and using this to create the key signing key pair that this hosted zone is going to use.
Now this part of the process can take a few minutes and so I'm going to skip ahead until this part has completed.
Okay so that's completed and that means that we now have an active key signing key within this hosted zone and that means it's also created a zone signing key within this hosted zone.
If I go back to my terminal and I rerun this same command and press enter you'll see that we still get the same empty results and this can be because of caching so I need to wait a few minutes before this will update.
If I run it again, now we can see that for the same query it returns DNSKEY records: one with a flags value of 256, which represents the zone signing key (the public part of that key pair), and one with 257, which represents the key signing key (again, the public part of that key pair).
Then we have the corresponding RRSIG DNSKEY record, which is a signature over these made with the private key signing key.
So now internally we've got DNSSEC signing enabled for this hosted zone, and what we need to do next is create the chain of trust with the parent zone, in this case the .org top level domain.
Now to do that because I've also registered this domain using Route 53 I can do that from the registered domains area of the console so I'll open that in a brand new tab.
I'm going to go there and then to the animalsforlife.org registered domain, and this is the area of the console where I can make changes and Route 53 will liaise with the .org top level domain and enter those changes into the .org zone.
Now the area that I'm specifically interested in is the DNSSEC status currently this is set to disabled.
What I'm going to do is click on manage keys, and it's here where I can enter the public key, specifically the public key signing key of the animalsforlife.org zone.
I'm going to enter it so that it creates the delegated signer record in the .org domain, which establishes this chain of trust.
So first I'm going to change the key type to KSK, then I'm going to go back to our hosted zone, click on view information to create DS record, and then expand establish a chain of trust.
Depending on what type of registrar you used, you either need to follow the bottom set of instructions or, if you used Route 53 as I did, the Route 53 registrar details.
Now the first thing you need to do is to make sure that you're using the correct signing algorithm, which is ECDSAP256SHA256, so I'm going to move back to the registered domains console, click on the algorithm drop down and select the matching signing algorithm, ECDSAP256SHA256.
Next I'll go back and I'll need to copy the public key into my clipboard.
Remember, a delegated signer record is just a hash of the public part of the key signing key, so what I'm copying into my clipboard is the public key of this key signing key.
I'm going to go back, paste this in, and then click on add.
Now this initiates a process where Route 53 makes changes to the animalsforlife.org entry within the .org top level domain zone.
Specifically, in the .org top level domain zone there's an entry for animalsforlife.org; by default, for normal DNS, this contains name server records which delegate through to Route 53.
What this process does is also add a DS record, a delegated signer record, containing a hash of this public key.
Now that will take a few minutes to take effect; it's not a process which only involves Route 53, it also involves the .org top level domain, so it can take anywhere up to a few hours, depending somewhat on the top level domain as well as the relationship which Route 53 has with that entity.
So I'm just going to refresh this, and we can see that the DNSSEC status has changed and we've now got this entry.
Now if I move back to my terminal, I'm going to clear the screen to make it easier to see, and then run this command: dig org NS +short.
This gives me a listing of the authoritative name servers for the .org top level domain, and I'm going to pick one of those servers, the top one.
Then I'm going to run this command: dig animalsforlife.org DS, then an at sign followed by the host name of that .org top level domain name server, so we're querying one specific name server for DS records, so delegated signer records, and then press enter.
Now if you don't see any DS record returned, again it's because of DNS caching; there is a delay between when you make the changes within Route 53 and when this takes effect within the DNS hierarchy.
So I'll clear the screen and rerun that command; again I'm not getting any DS record returned, so at this point I'm going to skip ahead to when the .org top level domain has updated with the changes that we've just made.
In my case, after about five more minutes this was added.
So note how from the same command, dig animalsforlife.org DS at one of the .org TLD name servers, querying for the delegated signer record for my domain, we can now see it in the answers section: animalsforlife.org, DS for delegated signer, and then the record itself, and this record contains a hash of the public key signing key that's used for the animalsforlife.org domain.
So now I've established the chain of trust from the .org top level domain through to my hosted zone inside AWS, and because I've enabled DNSSEC signing it means that if I create any records within this domain then they too will be signed.
To test that, I'm going to click on create record, use simple routing and define a simple record.
I'm going to call it test, so test.animalsforlife.org; it's going to be an A record type; I'm going to choose IP address or another value, enter a test IP address of 1.1.1.1 with a TTL of one minute, and then define that simple record and create the record.
Now if I refresh this inside the hosted zone UI, you won't see anything which looks different.
However, if I move back to my terminal, clear the screen to make it easy to see, and run dig test.animalsforlife.org A and press enter, that's going to do a normal DNS query for this A record, and we can see test.animalsforlife.org, it's an A record, and it points at 1.1.1.1.
If I run the same command, only now adding +dnssec, and press enter, we can see that in addition to the normal DNS query result we now have the RRSIG record, and this is a signature of the record above using the private part of the zone signing key.
This signature can be verified using the DNSKEY record, which contains the public zone signing key, so now we have an end-to-end chain of trust from the DNS root all the way through to this resource record.
Now that's everything I wanted to cover in this video; I just wanted to give you an overview of how to implement DNSSEC within Route 53, both from a theory and a practical perspective.
At this point that's the end of the video, so go ahead and complete the video, and when you're ready I'll look forward to you joining me in the next.
Welcome back and in this video I want to cover Route 53 interoperability.
What I mean by that is using Route 53 to register domains or to host zone files when the other part of that is not with Route 53.
Generally both of these things are performed together by Route 53 but it's possible for Route 53 just to do one or the other.
So let's start by stepping through exactly what I mean.
When you register a domain using Route 53 it actually does two jobs at the same time.
While these two jobs are done together conceptually they're two different things.
Route 53 acts as a domain registrar and it provides domain hosting so it can do both which is what happens initially when you register a domain or it can be a domain registrar alone or it can host domains alone.
It might only do one of them if for example you register a domain elsewhere and you want to use it with Route 53 and I want to take the time in this video to explain those edge case scenarios.
So let's quickly step through what happens when you register a domain using Route 53.
So first it accepts your money the domain registration fee.
This is a one-off fee, or more specifically a once-a-year or once-every-three-years fee, for actually registering the domain.
Next it allocates four Route 53 DNS servers called name servers, and then it creates a zone file which it hosts on the four name servers that I've just talked about.
So that's the domain hosting part allocating those servers and creating and hosting the zone file.
If you hear me mention domain hosting that's what it means.
Then once the domain hosting is sorted Route 53 communicates with the registry for the specific top level domain that you're registering your domain within so they have a relationship with the registry.
So Route 53 is acting as the domain registrar the company registering the domain on your behalf with the domain registry and the domain registry is the company or entity responsible for the specific top level domain.
So Route 53 gets the registry to add an entry for the domain say for example animalsforlife.org.
Inside this entry it adds four name server records and it points these records at the four name servers that I've just been talking about.
This is the domain registrar part.
So the registrar registers the domain on your behalf that's one duty and then another entity provides DNS or domain hosting and that's another duty.
Often these are both provided by Route 53 but they don't have to be so fix in your mind these two different parts the registrar which is the company who registers the domain on your behalf and the domain hosting which is how you add and manage records within hosted zones.
So let's step through this visually looking at a few different options.
First we have a traditional architecture where you register and host a domain using Route 53.
So on the left conceptually we have the registrar role and this is within the registered domains area of the Route 53 console.
On the right we have the DNS hosting role and this is managed in the public hosted zone part of the Route 53 console.
So step one is to register a domain within Route 53.
For now let's assume that it's the animalsforlife.org domain.
So you liaise with Route 53 and you pay the fee required to register a domain which is a per year or per three year fee.
Now assuming nobody else has registered the domain before the process continues.
First the Route 53 registrar liaises with the Route 53 DNS hosting entity and it creates a public hosted zone, which allocates four Route 53 name servers to host that zone, which are then returned to the registrar.
I want to keep driving home that conceptually the registrar and the hosting are separate functions of Route 53 because it makes everything easier to understand.
Once the registrar has these four name servers it passes all of this along through to the .org top level domain registry.
The registry is the manager of the .org top level domain zone file and it's via this entity that records are created in the top level domain zone for the animalsforlife.org domain.
So entries are added for our domain which point at the four name servers which are created and managed by Route 53 and that's how the domain becomes active on the public DNS system.
At this point we've paid once for the domain registration to the registrar which is Route 53 and we also have to pay a monthly fee to host the domain so the hosted zone and with this architecture this is also paid to Route 53.
So this is a traditional architecture and this is what you get if you register and host a domain using Route 53 and this is a default configuration.
So when you register a domain while you might see it as one step it's actually two different steps done by two different conceptual entities the registrar and the domain hoster and it's important to distinguish between these two whenever you think about DNS.
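To make the two conceptual entities concrete, here's a small sketch. The function names and data structures are my own invention, not an AWS API: one function plays the DNS hosting role and one plays the registrar role.

```python
# Hypothetical model of Route 53's two separate roles; not a real AWS API.

tld_registry = {}  # the .org registry's entries: domain -> name servers

def create_hosted_zone(domain: str) -> dict:
    # DNS hosting role: allocate four name servers and host the zone file.
    name_servers = [f"ns-{i}.awsdns-example.org" for i in range(1, 5)]
    return {"domain": domain, "name_servers": name_servers, "records": {}}

def register_domain(domain: str, name_servers: list) -> None:
    # Registrar role: liaise with the TLD registry so it adds entries for
    # the domain pointing at the hosting provider's name servers.
    tld_registry[domain] = name_servers

# "Traditional" flow: Route 53 performs both roles, one after the other.
zone = create_hosted_zone("animalsforlife.org")
register_domain("animalsforlife.org", zone["name_servers"])

print(tld_registry["animalsforlife.org"])  # the four hosting name servers
```

Swapping either function for a third-party equivalent gives you the registrar-only and hosting-only architectures.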
But now let's have a look at two different configurations where we're not using Route 53 for both of these different components.
This time Route 53 is acting as a registrar only so we still pay Route 53 for the domain they still liaise on our behalf with the registry for the top level domain but this time a different entity is hosting the domain so the zone file and the name servers and let's assume for this example it's a company called Hover.
This architecture involves more manual steps because the registrar and the DNS hosting entity are separate so as the DNS admin you would need to create a hosted zone.
The third party provider would generally charge a monthly fee to host that zone on name servers that they manage.
You would need to get the details of those servers once it's been created and pass those details on to Route 53 and Route 53 would then liaise with the .org top level domain registry and set those name server records within the domain to point at the name servers managed in this case by Hover.
With this configuration which I'll admit I don't see all that often in the wild the domain is managed by Route 53 but the zone file and any records within it are managed by the third party domain hosting provider in this case Hover.
Now the reason why I don't think we see this all that often in the wild is that the domain registrar functionality Route 53 provides is nothing special.
With this architecture you're not actually using Route 53 for domain hosting and domain hosting is the part of Route 53 which adds most of the value.
If anything this is the worst way to manage domains. Let's look at another, more popular architecture which I see fairly often in the wild, and that's using Route 53 for domain hosting only.
Now you might see this either when a business needs to register domains via a third party provider maybe they have an existing business deal or business discount or you might have domains which have already been historically registered with another provider and where you want to get the benefit that Route 53 DNS hosting provides.
With this architecture the domain is registered via a third party domain registrar, in this case Hover, so it's the registrar in this example who liaises with the top level domain registry, but we use Route 53 to host the domain.
So at some point either when the domain is being created or afterwards we have to create a public hosted zone within Route 53.
This creates the zone and the name servers to host the zone obviously for a monthly fee.
So once this has been created we pass those details through to the registrar, who liaises with the registry for the top level domain, and then those name server records are added to the top level domain, meaning the hosted zone is now active on the public internet.
Now it's possible to do this when registering the domain so you could register the domain with Hover and immediately provide Route 53 name servers or you might have a domain that's been registered years ago and you now want to use Route 53 for hosting and record management.
So you can use this architecture either while registering a domain or after the fact by creating the public hosted zone and then updating the name server records in the domain via the third party registrar and then the dot org registry.
Now I know that this might seem complex but if you just keep going back to basics and thinking about Route 53 as two things then it's much easier.
Route 53 offers a component which registers the domain so this is the registrar and it also offers a component which hosts the zone files and provides managed DNS name servers.
So understand that both of those are different things, and that when you normally register a domain using Route 53 both of them are being used.
A hosted zone is created for you, and then via the registrar part it's added to the domain record by the top level domain registry.
If you see these as two completely different components, then it's easy to understand how you can use Route 53 for only one of them and a separate third-party company for the other.
Now generally I think Route 53 is one of the better DNS providers on the market and so generally for my own domains I will use Route 53 for both the registrar and the domain hosting components but depending on your architecture, depending on any legacy configuration, you might have a requirement to use different entities for these different parts and that's especially important if you're a developer looking at writing applications that take advantage of DNS or if you're an engineer looking to implement or fault find these type of architectures.
Now with that being said that's everything I wanted to cover in this theory video I just wanted to give you a brief overview of some of the different types of scenarios that you might find in more complex Route 53 situations.
At this point go ahead and complete this video, and when you're ready I'll look forward to you joining me in the next.
-
-
-
Welcome back and in this video I want to talk about Geoproximity routing which is another routing policy available within Route 53.
So let's just jump in and get started.
Geoproximity aims to provide records which are as close to your customers as possible.
If you recall latency based routing provides the record which has the lowest estimated latency between your customer and the region that the record is in.
Geoproximity aims to calculate the distance between a customer and a record, and to answer with the record that has the lowest distance.
Now it might seem similar to latency but this routing policy works on distance and also provides a few key benefits which I'll talk about in this video.
When using Geoproximity you define rules: you define the region that a resource is created in if it's an AWS resource, or you provide the latitude and longitude coordinates if it's an external resource.
You also define a bias but more on that in a second.
Let's say that you have three resources one in America, one in the UK and one in Australia.
Well we can define rules which means that requests are routed to those resources.
If these were resources in AWS we could define the region that the resources were located in so maybe US East 1 or AP South East 2.
If the resources were external so non-AWS resources we could define their location based on coordinates but in either case Route 53 knows the location of these resources.
It also knows the location of the customers making the requests and so it will direct those requests at the closest resource.
Now we're always going to have some situations where customers in countries without any resources are using our systems.
In this case Saudi Arabia which is over 10,000 kilometers away from Australia and about 6,700 kilometers away from the UK.
Under normal circumstances this would mean that the UK resource would be returned for any users in Saudi Arabia.
What geo proximity allows us to do though is to define a bias.
So rather than just using the actual physical distance we can adjust how Route 53 handles the calculation.
We can define a plus or minus bias.
So for example with the UK we might define a plus bias meaning the effective area of service for the UK resource is increased larger than it otherwise would be.
And we could do the same for the Australian resource but maybe providing a much larger plus bias.
Now routing is distance based but it includes this bias.
So in this case we can influence Route 53 so that customers from Saudi Arabia are routed to the Australian resource rather than the UK one.
Geo proximity routing lets Route 53 route traffic to your resources based on the geographic location of your users and your resources.
But you can optionally choose to route more traffic or less traffic to a given resource by specifying a value.
The value is called a bias.
A bias expands or shrinks the size of a geographic region that is used for traffic to be routed to.
So even in the example of the UK where it's just a single relatively small country by adding a plus bias we can effectively make the size larger.
So that more surrounding countries route towards that resource.
In the case of Australia by adding an even larger bias we can make it so that countries even in the Middle East route towards Australia rather than the closer resource in the UK.
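AWS doesn't publish the exact bias arithmetic, so the scoring formula below is purely an assumption for illustration: a positive bias shrinks a resource's effective distance, which expands the area it serves.

```python
# Illustrative geoproximity selection. The effective-distance formula is an
# assumption for demonstration, not Route 53's actual implementation.

def pick_resource(distances_km: dict, biases: dict) -> str:
    def effective(name):
        # A +bias reduces effective distance; a -bias would increase it.
        return distances_km[name] * (100 - biases.get(name, 0)) / 100
    return min(distances_km, key=effective)

# Approximate distances from a user in Saudi Arabia, as in the lesson.
distances = {"uk": 6700, "australia": 10000}

print(pick_resource(distances, {}))                           # uk (closer)
print(pick_resource(distances, {"uk": 10, "australia": 50}))  # australia
```

With no bias the UK wins on raw distance; with a large plus bias on Australia (10000 × 0.5 = 5000 effective km versus the UK's 6700 × 0.9 = 6030), the Australian resource is returned instead.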
So geo proximity routing is a really flexible routing type that not only allows you to control routing decisions based on the locations of your users and resources.
It also allows you to place a bias on these rules to influence those routing decisions.
So this is a really important one to understand and it will come in really handy for a certain set of use cases.
Now thanks for watching.
That's everything that I wanted to cover in this video.
Go ahead and complete the video and when you're ready I look forward to you joining me in the next.
-
-
-
Welcome back and in this video I want to talk about geolocation routing which is another routing policy available within Route 53.
Now this is going to be a pretty brief video so let's jump in and get started.
In many ways geolocation routing is similar to latency.
Only instead of latency the location of customers and the location of resources are used to influence resolution decisions.
With geolocation routing when you create records you tag the records with the location.
Now this location is generally a country, using ISO standard country codes; it can be a continent, again using ISO continent codes, such as SA for South America in this case; or records can be tagged as default.
Now there's a fourth type which is known as a subdivision.
In America you can tag records with the state that the record belongs to.
Now when a user is making a resolution request an IP check verifies the location of the user.
Depending on the DNS system this can be the user directly or the resolver server, but in most cases the resolver is in the same location as the user.
So we have the location of the user and we have the location of the records.
What happens next is important because geolocation doesn't return the closest record it only returns relevant records.
When a resolution request happens Route 53 takes the location of the user and it starts checking for any matching records.
First if the user doing the resolution request is based in the US then it checks the state of the user and it tries to match any records which have a state allocated to them.
If any records match they're returned and the process stops.
If no state records match then it checks the country of the user.
If any records are tagged with that country then they're returned and the process stops.
Then it checks the continent.
If any records match the continent that the user is based in then they're returned and the process stops.
Now you can also define a default record which is returned if no record is relevant for that user.
If nothing matches though so there are no records that match the user's location and there's no default record then a no answer is returned.
So to stress again this type of routing policy does not return the closest record it only returns any which are applicable or the default or it returns no answer.
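The matching order just described (state, then country, then continent, then the optional default, otherwise no answer) can be sketched like this; the location tag scheme is made up for illustration.

```python
# Sketch of geolocation record matching: the most specific match wins, and
# there is no "closest record" fallback, only an optional default.

def resolve_geo(records: dict, continent: str, country: str, state: str = None):
    # records maps a location tag (e.g. "us-tx", "us", "sa", "default")
    # to a value; the tag scheme here is hypothetical.
    for tag in (state, country, continent, "default"):
        if tag is not None and tag in records:
            return records[tag]
    return None  # no relevant record and no default: no answer

records = {"us": "1.1.1.1", "sa": "2.2.2.2", "default": "3.3.3.3"}

print(resolve_geo(records, "na", "us", "us-tx"))   # 1.1.1.1 (country match)
print(resolve_geo(records, "sa", "br"))            # 2.2.2.2 (continent match)
print(resolve_geo(records, "eu", "gb"))            # 3.3.3.3 (default)
print(resolve_geo({"us": "1.1.1.1"}, "eu", "gb"))  # None (no answer)
```

Note the last case: a UK user gets no answer rather than the "closest" US record, which is exactly the behaviour stressed above.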
So geolocation is ideal if you want to restrict content.
For example providing content for the US market only.
If you want to do that then you can create a US record and only people located in the US will receive that record as a response for any queries.
You can also use this policy type to provide language specific content or to load balance across regional endpoints based on customer location.
Now one last time because this is really important for the exam and for real world usage.
This routing policy type is not about the closest record geolocation returns relevant locations only.
You will not get a Canadian record returned if you're based in the UK and no closer records exist.
The smallest type of record is a subdivision which is a US state then you have country then you have continent and finally optionally a default record.
Use the geolocation routing policy if you want to route traffic based on the location of your customers.
Now it's important that you understand which is why I've stressed this so much that geolocation isn't about proximity.
It's about location.
You only have records returned if the location is relevant.
So if you're based in the US but in a different state than the one a record is tagged with, you won't get that record.
If you're based in the US and there is a record which is tagged as the US as a country then you will get that record returned.
If there isn't a country specific record but there is one for the continent that you're in you'll get that record returned and then the default is a catchall.
It's optional if you choose to add it then it's returned if your user is in a location where you don't have a specific record tagged to that location.
Now that's everything that I wanted to cover in this video.
Thanks for watching.
Go ahead and complete the video and when you're ready I look forward to you joining me in the next.
-
-
-
Welcome back and in this video I want to talk about latency based routing which is yet another routing policy available within Route 53.
So let's jump in and get started.
Latency based routing should be used when you're trying to optimize for performance and user experience.
When you want Route 53 to return records which can provide better performance.
So how does it work?
Well it starts with a hosted zone within Route 53 and some records with the same name.
So in this case www, three of those records, they're A records and so they point at IP addresses.
In addition for each of the records you can specify a record region.
So US East 1, US West 1 and AP Southeast 2 in this example.
Latency based routing supports one record with the same name for each AWS region.
The idea is that you're specifying the region where the infrastructure for that record is located.
Now in the background AWS maintains a database of latencies between different regions of the world.
So when a user makes a resolution request it will know that that user is in Australia in this example.
It does this by using an IP lookup service and because it has a database of latencies it will know that a user in Australia will have a certain latency to US East 1, a certain latency to US West 1 and hopefully the lowest latency to a record which is tagged to be in the Asia Pacific region.
So AP Southeast 2.
So that record is selected and it's returned to the user and used to connect to resources.
Latency based routing can also be combined with health checks.
If a record is unhealthy then the next lowest latency is returned to the client making the resolution request.
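A minimal sketch of the selection logic, assuming a latency table of the kind AWS maintains (the figures here are invented): return the healthy record with the lowest estimated latency, falling back to the next lowest when a record is unhealthy.

```python
# Sketch of latency-based routing. Latency figures are made up; AWS
# maintains the real table in the background.

latency_ms = {  # user location -> {record region: estimated latency in ms}
    "australia": {"us-east-1": 200, "us-west-1": 150, "ap-southeast-2": 20},
}

def pick_record(user_location: str, healthy: set) -> str:
    # Only consider records that pass their health check, then take the
    # one with the lowest estimated latency for this user's location.
    candidates = {r: l for r, l in latency_ms[user_location].items() if r in healthy}
    return min(candidates, key=candidates.get)

print(pick_record("australia", {"us-east-1", "us-west-1", "ap-southeast-2"}))
# -> ap-southeast-2
print(pick_record("australia", {"us-east-1", "us-west-1"}))
# -> us-west-1 (next lowest once ap-southeast-2 is unhealthy)
```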
This type of routing policy is designed to improve performance for global applications by directing traffic towards infrastructure with the best, so lowest latency for users accessing that application.
It's worth noting though that the database which AWS maintain isn't real time.
It's updated in the background and doesn't really account for any local networking issues but it's better than nothing and can significantly help with performance of your applications.
Now that's all of the theory that I wanted to cover about latency based routing.
So go ahead and complete the video and when you're ready I look forward to you joining me in the next.
-
-
-
Welcome back.
In this video I want to talk about weighted routing, which is another routing policy available within Route 53.
So let's jump in and get started straight away.
Weighted routing can be used when you're looking for a simple form of load balancing or when you want to test new versions of software.
Like all other types of routing policy it starts with a hosted zone, and in this hosted zone, you guessed it, records.
In this case three www records.
Now these are all A records and so they point at IP addresses, and let's assume that these are three EC2 instances.
With weighted routing you're able to specify a weight for each record and this is called the record weight.
Let's assume 40 for the top record, 40 for the middle and 20 for the record at the bottom.
Now how this record weight works is that for a given name www in this case the total weight is calculated.
So 40 plus 40 plus 20 for a total of 100.
Each record then gets returned based on its weighting versus the total weight.
So in this example it means that the top record is returned 40% of the time, the middle also 40% of the time and the bottom record gets returned 20% of the time.
Setting a record weight to zero means that it's never returned, so you can do this if you temporarily don't want a particular record to be returned. The exception is if all of the records are set to zero, in which case they're all returned.
So any of the records with the same name are returned based on its weight versus the total weight.
Now I've kept this example simple by using record weights that total 100 so it makes it easy to view them as percentages but the same formula is used regardless.
An individual record is returned based on its weight versus the total weight.
Now you can combine weighted routing with health checks and if you do so when a record is selected based on the above weight calculation if that record is unhealthy then the process repeats.
It's skipped over until a healthy record is selected and then that one's returned.
Health checks don't remove records from the calculation and so don't adjust the total weight.
The process is followed normally but if an unhealthy record is selected to be returned it's just skipped over and the process repeats until a healthy record is selected.
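The weighting and health-check behaviour described above can be sketched like this; it's an illustration of the selection logic, not Route 53's actual implementation.

```python
import random

# Sketch of weighted routing: each record is returned in proportion to its
# weight over the total; a zero-weight record is never returned (unless all
# weights are zero); an unhealthy selection is simply skipped and redrawn,
# without adjusting the total weight.

def pick_weighted(records, healthy):
    # records: list of (name, weight) pairs
    weights = [w for _, w in records]
    if all(w == 0 for w in weights):
        weights = [1] * len(records)  # all zero: every record is returnable
    while True:
        name = random.choices([n for n, _ in records], weights=weights)[0]
        if name in healthy:  # unhealthy picks are skipped, process repeats
            return name

records = [("a", 40), ("b", 40), ("c", 20), ("d", 0)]
picks = [pick_weighted(records, {"a", "c", "d"}) for _ in range(1000)]

print("d" in picks)  # False: a zero-weight record is never returned
print("b" in picks)  # False: unhealthy records are skipped over
```

Over many draws, "a" comes back roughly twice as often as "c", matching the 40-versus-20 weighting.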
Now weighted routing as I mentioned at the start is great for very simple load balancing or when you want to test new software versions.
If you want to have 5% of resolution requests go to a particular server which is running a new version of Catagram then you have that option.
So weighted routing is really useful when you have a group of records with the same name and want to control the distribution so the amount of time that each of them is returned in response to queries.
Now that's everything I wanted to cover in this video so go ahead finish the video and when you're ready I'll look forward to you joining me in the next.
-
-
-
Welcome back.
In this video, I want to talk about multivalue routing, which is another routing policy available within Route 53.
So let's jump in and get started.
Multivalue routing in many ways is like a mixture between simple and failover, taking the benefits of each and merging them into one routing policy.
With multivalue routing, we start with a hosted zone, and you can actually create many records all with the same name.
In this case, we have three www records, and each of those records in this example is an A record, which maps onto an IP address.
Each of the records when using this routing type can have an associated health check, and when queried, up to eight healthy records are returned to the client.
If you have more than eight records, then eight are selected at random.
Now at this point, the client picks one of those values and uses it to connect to the resource.
Because each of the records is health checked, any of the records which fail the check, such as the bottom record in this example, won't be returned to the client, and won't be selected by the client when connecting to resources.
So multivalue routing aims to improve availability by allowing a more active-active approach to DNS.
You can use it if you have multiple resources, which can all service requests, and you want to select one at random.
Now it's not a substitute for a load balancer, which handles the actual connection process from a network perspective, but the ability to return multiple health checkable IP addresses is a way to use DNS to improve availability of an application.
So simple routing has no health checks and is generally used for a single resource, such as a web server.
Failover is used for active-backup architectures, commonly with an S3 bucket as a backup, whereas multivalue is used when you have many resources which can all service requests, and you want them all health checked and then returned at random.
So any healthy records will be returned to the client.
If you have more than eight, then eight of them are selected at random and returned.
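A short sketch of the answer-building logic, assuming the behaviour described above: filter to healthy records, and if more than eight remain, pick eight at random.

```python
import random

# Sketch of multivalue answer routing: return up to eight healthy records,
# chosen at random when more than eight exist; the client then picks one.

def multivalue_answer(records: dict, max_returned: int = 8) -> list:
    # records: value -> whether it passed its health check
    healthy = [r for r, ok in records.items() if ok]
    if len(healthy) <= max_returned:
        return healthy
    return random.sample(healthy, max_returned)

records = {f"10.0.0.{i}": True for i in range(10)}
records["10.0.0.9"] = False  # this one failed its health check

answer = multivalue_answer(records)
print(len(answer))           # 8
print("10.0.0.9" in answer)  # False: unhealthy records are never returned
```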
OK, so that's everything for this type of routing policy.
Go ahead and complete the video when you're ready, and I'll look forward to you joining me in the next video.
-
-
-
Welcome back.
In this video I want to quickly step through a topic which confuses people who are new to DNS and Route 53, and that's the difference between CNAMEs and alias records.
Now I've seen exam questions which test your understanding of when to use one versus the other, so let's quickly go through the key things which you need to know.
Now let's start by describing the problem that we have if we only use CNAMEs.
So in DNS an A record maps a name to an IP address, for example the name Categor.io to the IP address 1.3.3.7.
By now that should make sense.
A CNAME on the other hand maps a name to another name, so if you had the above A record for Categor.io then you could create a CNAME record for www.categor.io, pointing at Categor.io.
It's a way to create another alternative name for something within DNS.
The problem is that you can't use a CNAME for the apex of a domain, also known as the naked domain.
So you couldn't have a CNAME record for Categor.io pointing at something else; it just isn't supported within the DNS standard.
Now this is a problem because many AWS services such as Elastic Load Balancers, they don't give you an IP address to use, they give you a DNS name.
And this means that if you only use CNAMEs, pointing the naked Categor.io at an Elastic Load Balancer wouldn't be supported.
You could point www.categor.io at an Elastic Load Balancer, because using a CNAME for a normal DNS record is fine, but you can't use a CNAME for the domain apex, also known as the naked domain.
Now this is a problem which alias records fix.
So for anything that's not the naked domain, where you want to point a name at another name, CNAME records are fine.
They might not be optimal as I'll talk about in a second, but they will work.
For the naked domain, known as the apex of a domain, if you need to point at another name such as an Elastic Load Balancer's DNS name, you can't use CNAMEs.
But let's go through the solution, alias records.
An alias record generally maps a name onto an AWS resource.
Now it has other functions, but at this level let's focus on the AWS resource part.
Alias records can be used for both the naked domain known as the domain Apex or for normal records.
For normal records such as www.categor.io, you could use CNAMEs or alias records in most cases.
But for naked domains known as the domain Apex, you have to use alias records if you want to point at AWS resources.
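The apex rule can be expressed as a tiny validation sketch; the "ALIAS" type here is shorthand for Route 53's alias subtype, and the check itself is simplified.

```python
# Sketch of the apex rule: a CNAME can't exist at the naked/apex domain,
# which is exactly the gap that alias records fill. Simplified logic.

def validate_record(zone_apex: str, name: str, rtype: str) -> None:
    is_apex = (name == zone_apex)
    if rtype == "CNAME" and is_apex:
        raise ValueError("CNAME not allowed at the zone apex")
    # "ALIAS" stands in for Route 53's alias subtype, which is permitted
    # anywhere in the zone, including the apex.

validate_record("categor.io", "www.categor.io", "CNAME")  # fine
validate_record("categor.io", "categor.io", "ALIAS")      # fine

try:
    validate_record("categor.io", "categor.io", "CNAME")
except ValueError as e:
    print(e)  # CNAME not allowed at the zone apex
```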
For AWS resources, AWS try to encourage you to use alias records and they do this by making it free for requests made where an alias record points at an AWS resource.
So generally in most production situations and for the exam default to picking alias records for anything in a domain where you're pointing at AWS resources.
Now an alias is actually a subtype.
You can have an A record alias and a CNAME record alias, and this is confusing at first.
But the way I think about this is both of them are alias records, but you need to match the record type with the type of the record you're pointing at.
So take the example of an elastic load balancer.
With an ELB, you're given an A record for the elastic load balancer.
It's a name which points at an IP address.
So you have to create an A record alias if you want to point at the DNS name provided by the elastic load balancer.
If the record that the resource provides is an A record, then you need to use an A record alias.
So you're going to use alias records when you're pointing at AWS services such as API Gateway, CloudFront, Elastic Beanstalk, Elastic Load Balancers, Global Accelerator and even S3 buckets.
And you're going to experience this last one in a demo lesson which is coming up very soon.
Now it's going to make a lot more sense when you see it in action elsewhere in the course.
For now, I just want to make sure that you understand the theory of both the limitations of CNAME records and the benefits that alias records provide.
Now the alias is a type of record that's been implemented by AWS and it's outside of the usual DNS standard.
So it's something that in this form you can only use if Route 53 is hosting your domains.
Keep that in mind as I talk about more of the features of Route 53 as we move through this section of the course.
But at this point, that's everything that I wanted to cover.
So go ahead, complete this video and when you're ready, I look forward to you joining me in the next.
-
-
-
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
Now one final thing before we finish with this demo lesson, and I want to talk about private hosted zones.
So move back to the Route 53 console.
I'm going to go to Hosted Zones, and I'm going to create a private hosted zone.
So click "Create Hosted Zone" because it's a private hosted zone, it doesn't even need to be one that I actually own.
So I'm going to call my hosted zone "IlikeDogsReally.com".
It's going to be a private hosted zone.
And for now, I'm going to associate it with the default VPC in US-East-1.
So I'm going to pick the region, US-East, and then select Northern Virginia, and then click in the VPC ID box, and we should see two VPCs listed.
One is the Animals for Life VPC, it's tagged A4L-VPC1, but I'm not going to pick this one, I'm going to pick the one without any text after it, which is the default VPC.
So once that's set, I'm going to create the hosted zone.
Then inside the hosted zone, I'm going to create a record.
The record's going to use the simple routing policy.
Click on "Next".
I'm going to define a simple record.
I'm going to call it "www".
The record type is going to be "A", which routes traffic to an IPv4 address and some AWS resources.
I'm going to click in this endpoint box and select IP address or another value, depending on record type.
And then into this box, I'm just going to put a test IP address of 1.1.1.1.
And then down at the bottom, I'm going to click "1M" to change this TTL to 60 seconds.
And I'm going to click "Define simple record".
And then finally, "Create records".
So now we have a record called "www.ilikedogsreally.com".
So copy that into the clipboard.
Move back to the EC2 console.
Click on "Dashboard".
Click on "Instances running".
Right click, "Connect".
We're going to use EC2 "Instance connect".
And then just click on "Connect".
Now once connected, I'm going to try pinging the record which I just created.
So type "ping", a space, and then paste in "www.ilikedogsreally.com" and press "Enter".
What you should see is "Name or service not found".
The reason for this is the private hosted zone which we created is currently associated with the default VPC.
And this instance is not in the default VPC.
To enable this instance to resolve records inside this private hosted zone, we need to associate it with the "Animals for Life" VPC.
So go back to the Route 53 console.
Expand "Hosted zone details" and then click "Edit hosted zone".
Scroll down and we're going to add another VPC.
In the region drop down, "US-East-1" and then in the "Choose VPC" box select "A4L-VPC-1".
Scroll down and save changes.
Now this might take a few seconds to take effect, but if we go back to the EC2 instance and try to run this ping again, we still get "Name or service not found".
So what I want you to do is go ahead and pause this video, wait for 4 or 5 minutes and then resume and try this command again.
Now in my case it took about 5 minutes, but after a while I can now ping www.ilikedogsreally.com because I've now associated this private hosted zone with the VPC that this instance is running from.
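What we just observed can be modelled in a few lines of plain Python: a private hosted zone only answers queries originating from VPCs it has been associated with. This is a minimal conceptual sketch, not the real Route 53 resolver, and the zone and VPC identifiers are invented for illustration:

```python
# Conceptual model of private hosted zone resolution: a record is only
# resolvable from VPCs the zone has been associated with.
class PrivateHostedZone:
    def __init__(self, name):
        self.name = name
        self.records = {}           # record name -> value
        self.associated_vpcs = set()

    def associate_vpc(self, vpc_id):
        self.associated_vpcs.add(vpc_id)

    def resolve(self, record_name, source_vpc):
        if source_vpc not in self.associated_vpcs:
            return None             # behaves like "Name or service not found"
        return self.records.get(record_name)

zone = PrivateHostedZone("ilikedogsreally.com")
zone.records["www.ilikedogsreally.com"] = "1.1.1.1"
zone.associate_vpc("vpc-default")   # hypothetical VPC IDs

# An instance in the A4L VPC cannot resolve the record yet...
assert zone.resolve("www.ilikedogsreally.com", "vpc-a4l") is None
# ...until the zone is associated with that VPC too.
zone.associate_vpc("vpc-a4l")
assert zone.resolve("www.ilikedogsreally.com", "vpc-a4l") == "1.1.1.1"
```

This mirrors the demo exactly: the ping failed until the zone was associated with the Animals for Life VPC, then succeeded.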
Now that's everything that I wanted to cover in this demo lesson, so all that remains is for us to clean up all of the infrastructure which we've created in this demo lesson.
So if we go back to the Route 53 console and select "Health Checks", first we're going to delete the health check.
So select "A4L Health" and click on "Delete Health Check" and confirm.
Click on "Hosted zones".
Go inside the private hosted zone that you created.
Select the www.ilikedogsreally.com record and then click on "Delete Record".
Confirm that deletion.
Go back to "Hosted zones".
Select the entire private hosted zone and click on "Delete".
Type "Delete" and then click to confirm.
And that will delete the entire private hosted zone.
Then go inside the public hosted zone that you have.
Select the two www records that you created earlier in this lesson.
Click on "Delete Records".
Click "Delete" to confirm.
Then go to the S3 console.
Click on the bucket that you created earlier in this lesson.
Click "Empty".
Copy and paste or type "Permanently Delete" and click on "Empty".
Once that bucket is emptied click on "Exit".
With it still selected click on "Delete".
Copy and paste or type the full name into the box and click on "Delete Bucket".
Then go to the EC2 console.
Click the hamburger menu.
Scroll down.
Click "Elastic IPs".
Select the elastic IP that you associated with the EC2 instance.
Click on the actions drop down.
Disassociate and then click to disassociate.
With it still selected click on "Actions".
Release elastic IP addresses and click on "Release".
At that point all of the manually created infrastructure has been removed.
Go back to the cloud formation console.
Go to "Stacks".
Select the stack that you created at the start of this lesson using the one click deployment.
It should be called DNS and failover demo.
Select it.
Click on "Delete".
Then click on "Delete Stack" to confirm that deletion.
Once that's deleted the account will be back in the same state as it was at the start of the lesson.
At this point that's everything I wanted to cover in this demo.
I hope it's been enjoyable and it's given you some good practical experience of how to use failover routing and private hosted zones.
That will be useful both for the exam and real world usage.
At this point that's everything so go ahead and complete this video.
When you're ready I'll look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome to this demo lesson where you're going to get experience configuring fail-over routing as well as private hosted zones.
Now with this demo lesson you have the choice of either following along in your own environment or watching me perform the steps.
If you do wish to follow along in your own environment you will need a domain name that's registered within Route 53.
Remember that was an optional step at the start of this course so if you did register a domain of your own then you can do this demo lesson.
In my case I registered animalsforlife1337.org.
If you registered a domain it will be different, and so wherever you see me use animalsforlife1337.org you need to replace it with your registered domain.
If you didn't register one then you'll have to watch me perform all of these steps because you can't do this lesson without your own registered domain.
In order to get started, you need to make sure that you're logged in as the IAM admin user of the general AWS account, which is the management account of the organization, and you'll need to have the Northern Virginia region selected.
Now we're going to need to create some infrastructure in order to perform this demo lesson so attached to this lesson is a one-click deployment link and you should go ahead and click that link now.
That's going to take you to a quick create stack screen.
Everything should be pre-populated; the stack name is DNS and failover demo.
All you'll need to do is scroll down to the bottom, check this capabilities box, and then click on "Create stack".
That's going to take a few minutes, and it's going to create infrastructure that we're going to need to continue with the demo lesson.
So go ahead and pause the video, wait for your stack to move into a create complete state, and then we're good to continue.
Okay, so the stack's now in a create complete state and it's created a number of resources, the most important one being a public EC2 instance.
We just need to test this first, so click in the search box, type EC2, and then right click to open that in a new tab.
Once you're there, click on "Instances running" and you should see a4l-web; select it.
Under "Public IPv4 address", click on this symbol to copy the IP address into your clipboard.
Make sure you don't click "open address", because that will try to use HTTPS, which we don't want.
So copy this IP address into your clipboard and open it in a new tab, and you should see the Animals for Life super minimal homepage.
If you see that, it means everything's working as intended, so go ahead and close down that tab.
Now we also need to give this instance an Elastic IP address so that it has a static public IP version 4 address.
To give it an Elastic IP, on the menu on the left scroll down to the bottom and, under "Network & Security", select "Elastic IPs".
We need to allocate an Elastic IP, so make sure us-east-1 is in this box, scroll down and click on "Allocate".
Once the Elastic IP address is allocated to this account, select it, click on "Actions" and then "Associate Elastic IP address".
Once we're at this screen, make sure "Instance" is selected, click in this search box and then select a4l-web.
Once selected, click in the private IP address box and select the private IP address of this instance, then check the box to allow this Elastic IP address to be re-associated.
Once all that's complete, click on "Associate", and that means our EC2 instance now has a static IP version 4 address.
Now we're configuring failover DNS, and the EC2 instance is going to be our primary record.
We're going to assume that this is the Animals for Life main website, and we want to configure an S3 bucket running as the backup in case this EC2 instance fails.
So the next thing we need to do is create the S3 bucket.
Click in the search box, type S3, open that in a new tab and go to the S3 console.
At this point we're going to create an S3 bucket and configure it as a static website.
Now the naming of the S3 bucket is important.
Earlier in the course you should have registered a domain name; in my case I registered animalsforlife1337.org, so I'm going to create a bucket with the name www.animalsforlife1337.org.
You need to create one called "www." followed by the domain name that you registered.
So I'm going to click "Create bucket"; the bucket name is www.animalsforlife1337.org and it's going to be in the US East (Northern Virginia) region, which is us-east-1.
Then we're going to scroll down and uncheck "Block all public access", because this bucket is going to be used to host a static website.
I'll need to acknowledge that I'm okay with that, so I'll do that, then scroll all the way down to the bottom and click on "Create bucket".
Then I'm going to go inside the bucket, click on "Upload" and then "Add files".
Now attached to this lesson is an assets file; go ahead and download that file and extract it.
Once extracted, it should create a folder called R53_zones_and_failover.
Go inside that folder and there'll be two more folders, one called 01_A4L_website and another called 02_A4L_failover.
We're interested in the A4L failover folder, so go into that folder, select both files (index.html and minimal.jpeg), click on "Open" and then upload those files.
So we'll scroll down and click on "Upload"; once that's completed, click on "Close".
Next we're going to enable static website hosting, so click on "Properties"; the option is all the way down towards the bottom.
Click on "Edit" next to "Static website hosting" and enable it.
Make sure that "Host a static website" is selected, and then for both the index document and the error document type index.html.
Once both of those are entered, scroll down to the bottom and save changes.
Now we've one final thing to do on this bucket: we need to add a bucket policy so that this bucket is public.
Click on "Permissions", scroll down, and under "Bucket policy" click on "Edit"; this bucket currently does not have a bucket policy.
Also inside the assets folder that you extracted earlier in this lesson there's a file called bucket_policy.json.
Copy the contents of that file into your clipboard and paste it into this policy box.
Then click on the icon next to the bucket ARN to copy that into your clipboard, because we need to replace the placeholder with the actual ARN.
Select from just to the right of the first quotation mark all the way through to just before the forward slash, so you should have selected arn:aws:s3::: followed by examplebucket, then paste the text from your clipboard, which will overwrite the placeholder with the actual bucket ARN.
It should look like this; once you've got that, scroll down and save the changes.
So now we have the failover website configured: the static website running from the S3 bucket.
Next we need to move to the Route 53 console, where we're going to create a health check and configure the failover record.
Click in the search box, type Route 53, right click and open that in a new tab, then click on "Health checks".
We're going to create a health check; for the health check name type A4L Health, and it's going to be an endpoint health check.
Scroll down; we're going to specify the endpoint by IP address, the protocol is going to be HTTP, and we need the IP address of the EC2 instance.
If we go back to the EC2 console, the EC2 instance is now using the Elastic IP, so scroll down, click on "Elastic IPs" and copy the Elastic IP into your clipboard, then go back to the Route 53 console and paste that in.
The health check is going to check the index.html document, so in "Path" click and type index.html.
Then expand "Advanced configuration".
By default a health check runs every 30 seconds, which is a standard health check; we need to change this to fast because we want our health check to react as quickly as possible if our primary website fails.
So select "Fast", scroll down to the bottom and click on "Next".
We don't want to create an alarm, because we don't want to take any action if this health check fails; we're just going to use it as part of our failover routing.
So make sure "No" is selected and then click "Create health check".
Now the health check is going to start off with an unknown status because it hasn't gathered enough information about the health of the primary website.
It's going to take a few minutes to move from this status to either healthy or unhealthy.
What we can do though, if we select this check, is click on the "Health checkers" tab and start to see the results of the globally distributed set of health check endpoints.
We can see that we're already getting success (HTTP status code 200), which tells us our primary website is already passing these individual checks.
After a couple of minutes, if we hit refresh, we should see the status change from unknown to healthy.
Next we need to create the failover record, so click on "Hosted zones", locate the hosted zone for the domain that you registered at the start of the course, click it, then click on "Create record".
You can switch between two different modes: the quick create record mode or the wizard mode.
We're going to keep this demo simple, so click on "Switch to wizard".
We're going to choose a failover record, so select "Failover" and click "Next".
We're going to call the record www and set a TTL of one minute, so click "1m" and that will change the TTL seconds to 60.
Scroll down; we're going to define some failover records, so click "Define failover record".
First we need to create the primary record.
Click in the first drop-down and pick "IP address or another value, depending on record type".
Then we need the Elastic IP address, so go back to the EC2 console, copy the Elastic IP into your clipboard and paste it into this box.
For the failover record type, this is the primary record, so click on "Primary".
We need to associate it with a health check, so click in that drop-down and choose A4L Health.
Once we do that, this primary record will only be returned if the health check is healthy; otherwise the secondary record, which we're going to define in a second, will be returned.
Under "Record ID" type EC2; this needs to be unique within the set of records with the same name, so we're going to call one EC2 and the other S3.
This one's EC2, so define that failover record.
Then we're going to define a new failover record, so click that box again.
This time in the drop-down, scroll down and select "Alias to S3 website endpoint".
Choose the region, which needs to be us-east-1; once selected you should be able to click in this box and see the S3 bucket that you just created, so click on it to select that S3 bucket.
We're going to set this as the secondary record, so click on "Secondary".
We won't be associating this with a health check and we won't be evaluating the target health.
This record will only ever be used if the primary fails its health check, and so we want it to take effect whenever the health check associated with the primary fails.
We're going to test that by shutting down the EC2 instance, at which point this record should take over.
Finally, enter S3 in the "Record ID" and click on "Define failover record".
Once we've done both of those, we can go ahead and click on "Create records".
So now we have both of those records in place: the primary pointing at EC2 and the secondary at S3.
If we copy this full DNS name into our clipboard and open it in a new tab, it should direct us towards the Animals for Life super minimal homepage; remember, this is the website running on EC2.
Now what we need to do is simulate a failure.
Go back to the EC2 console, scroll to the top, click on "EC2 Dashboard", then "Instances running".
Right click on this instance, select "Stop instance" and confirm by clicking "Stop".
Now that we've stopped this instance, it should begin failing the health check.
Let's go back to the Route 53 console, click on "Health checks", select the A4L Health health check, click on the "Health checkers" tab and then click "Refresh".
Over the coming seconds we should start to see some failure responses in the status column; there we go, we're getting "connection timed out".
Over the next minute or so we should see the overall status of the health check move from healthy to unhealthy.
Let's click on "Refresh"; it might take a minute or so for that to take effect, so let's just give it a minute.
And now we can see that it's moved into an unhealthy state.
This means our failover record will detect this and start returning the secondary record rather than the primary.
Now DNS does have a cache; remember, we set the TTL value to 60 seconds, so one minute.
What we should find after that cache expires is that if we go back to the tab which we have open to the www website and hit refresh, it changes to the Animals for Life super minimal failover page, which is the website running on S3.
So the failover record has used a health check, detected the failure of the EC2 instance and redirected us towards the backup S3 site.
Now we can go ahead and reverse that process.
If we go back to the EC2 console, we can right click on this instance and start the instance.
That will take a few minutes to move from the stopped state through the pending state and finally to running.
Once it's in a running state, if we go back to the Route 53 console, select the health check and refresh the health checkers, initially we'll see a number of different messages.
If we keep hitting refresh over the next few minutes, we should see this change to an okay message; there we can see the first HTTP status code 200.
If we keep refreshing, we'll see more of those, more 200 statuses, which means okay.
Now that all of these are coming back okay, let's click refresh on the health check itself.
It's still showing as unhealthy; let's give it a few more seconds.
Now it's reporting as healthy again.
If we go back to the tab that we have open to the website and click refresh, it should change back to the original EC2 based website, and it does.
That means our failover record has worked in both directions: it's failed over to S3 and failed back to EC2.
Okay, so this is the end of part one of this lesson.
It was getting a little bit on the long side, so I wanted to add a break; it's an opportunity to take a rest or grab a coffee.
Part two will continue immediately from the end of part one, so go ahead, complete the video, and when you're ready, join me in part two.
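The delay we saw between stopping the instance and the browser showing the S3 failover page comes down to DNS caching: resolvers hold an answer for the record's TTL (60 seconds here) before asking the authoritative servers again. A minimal Python sketch of that caching behaviour, using a fake clock rather than real time:

```python
# Conceptual model of a caching DNS resolver honouring a record's TTL.
# Not a real resolver; the names and values are invented.
class CachingResolver:
    def __init__(self, ttl):
        self.ttl = ttl
        self.cache = {}            # name -> (answer, expiry_time)

    def resolve(self, name, now, authoritative_lookup):
        answer, expiry = self.cache.get(name, (None, -1))
        if now < expiry:
            return answer          # served from cache, possibly stale
        answer = authoritative_lookup(name)
        self.cache[name] = (answer, now + self.ttl)
        return answer

resolver = CachingResolver(ttl=60)
live = {"www": "ec2-ip"}           # authoritative answer (hypothetical)
lookup = lambda name: live[name]

assert resolver.resolve("www", 0, lookup) == "ec2-ip"
live["www"] = "s3-endpoint"        # failover flips the answer at t=10
# At t=30 the cached (now stale) EC2 answer is still returned...
assert resolver.resolve("www", 30, lookup) == "ec2-ip"
# ...and only after the 60-second TTL expires do we see the S3 endpoint.
assert resolver.resolve("www", 61, lookup) == "s3-endpoint"
```

This is why a low TTL (60 seconds rather than hours) matters for failover records: it bounds how long clients can keep connecting to the failed primary.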
-
Welcome back.
In this video I want to cover the Health Check feature within Route 53.
Health checks support many of the advanced architectures of Route 53 and so it's essential that you understand how they work as an architect, developer or engineer.
So let's jump in and get started.
First let's quickly step through some high level concepts of Health checks.
Health checks are separate from but are used by records inside Route 53.
You don't create the checks within records.
Health checks exist separately.
You configure them separately.
They evaluate something's health and they can be used by records within Route 53.
Health checks are performed by a fleet of health checkers which are distributed globally.
This means that if you're checking the health of systems which are hosted on the public internet then you need to allow these checks to occur from the health checkers.
If you think they're bots or exploit attempts and block them then it will cause false alarms.
Health checks, as I just indicated, are not limited to just AWS targets.
You can check anything which is accessible over the public internet.
It just needs an IP address.
The checks occur every 30 seconds by default or this can be increased to every 10 seconds at an additional cost.
The checks can be TCP checks where Route 53 tries to establish a TCP connection with the endpoint and this needs to be successful within 10 seconds.
You can have HTTP checks where Route 53 must be able to establish a TCP connection with the endpoint within 4 seconds and in addition the endpoint must respond with a HTTP status code in the 200 range or 300 range within 2 seconds after connecting.
And this is more accurate for web applications than a simple TCP check.
And finally with HTTP and HTTPS checks you can also perform string matching.
Route 53 must be able to establish a TCP connection with the endpoint within 4 seconds and the endpoint must respond with a HTTP status code in the 200 or 300 range within 2 seconds and Route 53 health checker when it receives the status code it must also receive the response body from the endpoint within the next 2 seconds.
Route 53 searches the response body for the string that you specify.
The string must appear entirely in the first 5,120 bytes of the response body or the endpoint fails the health check.
This is the most accurate because not only do you check that the application is responding using HTTP or HTTPS but you can also check the content of that response versus what the application should do in normal circumstances.
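These HTTP/HTTPS rules can be summarised in a short Python sketch; the function below is a conceptual model of the pass/fail logic described above, not AWS code:

```python
# Conceptual model of Route 53's HTTP/HTTPS health check evaluation:
# the status code must be in the 200 or 300 range, and for string
# matching the search string must appear entirely within the first
# 5,120 bytes of the response body.
def http_check_passes(status: int, body: bytes, search: bytes = None) -> bool:
    if not (200 <= status <= 399):
        return False               # not a 2xx/3xx status: fail
    if search is not None and search not in body[:5120]:
        return False               # string absent from first 5,120 bytes: fail
    return True

assert http_check_passes(200, b"<html>all healthy</html>", b"healthy")
assert not http_check_passes(500, b"healthy")
# A string that only begins after byte 5,120 fails the check.
assert not http_check_passes(200, b"x" * 5120 + b"healthy", b"healthy")
```

The timing constraints (TCP connection within 4 seconds, status code within 2 seconds, body within a further 2 seconds) are omitted here; the sketch only models the content rules.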
Based on these health checks an endpoint is either healthy or unhealthy.
It moves between those states based on its health based on the checks conducted.
Now lastly the checks themselves can be one of 3 types.
You can have endpoint checks and these are checks which assess the health of an actual endpoint that you specify.
You can use CloudWatch alarm checks, which react to CloudWatch alarms; these can be configured separately and can involve detailed in-OS or in-app tests if you use the CloudWatch agent, which we cover elsewhere in the course.
Finally checks can be what's known as calculated checks so checks of other checks.
So you can create health checks which check application wide health with lots of individual components.
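A calculated check can be modelled as a simple function over the states of its child checks. This sketch assumes an "at least N healthy children" rule; the component names in the comments are invented for illustration:

```python
# Conceptual model of a calculated (parent) health check: it derives
# its state from child checks rather than probing an endpoint itself.
# Here the parent is healthy when at least `threshold` children are.
def calculated_check(child_states, threshold):
    healthy_children = sum(1 for healthy in child_states if healthy)
    return healthy_children >= threshold

# Hypothetical application with web, app and db component checks:
assert calculated_check([True, True, True], threshold=3)
assert not calculated_check([True, False, True], threshold=3)
# A more tolerant parent that only needs any 2 of the 3 healthy:
assert calculated_check([True, False, True], threshold=2)
```

This is how a single "application healthy" signal can be built from many individual component checks.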
Now you're going to get the opportunity to actually implement a health check in a demo lesson which is coming up very shortly in this section of the course.
But what I want to do before that is to just give you an overview of exactly how the console looks when you're creating a health check.
So let's move across to the console.
Okay so we're at the AWS console logged in to the general account in the northern Virginia region.
So to create a health check we need to move to the route 53 console so I'm going to go ahead and do that.
Remember how earlier in the theory component of this lesson I mentioned how health checks are created externally from records.
So rather than going into a hosted zone selecting a record and configuring a health check there to create a health check we go to the menu on the left and click on health checks.
Then we'll click on create health check and this is where we enter the information required to create the health check.
First we need to give it a name so let's just say that we use the example of test health check.
I mentioned that there are three different types of health checks.
We've got an endpoint health check and this checks the health of the particular endpoint.
We can use status of other health checks so this is a calculated health check and as I mentioned this allows you to create a health check which monitors the application as a whole and involves the health status of individual application components and then finally we can use the status of a cloud watch alarm to form the basis of this health check.
If we select endpoint for now then you're able to pick either IP address or domain name.
So you can specify the domain name of an application endpoint or you can use IP address.
If you pick domain name then what this configures is that all of the Route 53 health checkers will resolve this domain name first and then perform a health check on the resulting IP address.
Now in either case you've got the option of either picking TCP which does a simple TCP check in which case you need to specify either the IP version 4 or IP version 6 address together with a port number.
If you choose to use the more extensive HTTP or HTTPS health check then you're asked to specify the same IP address and port number so that will be used to establish the TCP connection.
You can also specify the host name and if you specify that it will pass this value to the endpoint as a host header so if you've got lots of different virtual hosts configured then this is how you can specify a particular host that the website should deliver.
You're also able because this is HTTP you can specify a path to use for this health check.
You can either specify the root path or a particular path to check.
If you change this to HTTPS then all of this information is the same only this time it will use secure HTTP rather than normal HTTP.
Now if we scroll down and expand advanced configuration it's here where you can select the request interval so the default is every 30 seconds or you can specify fast and have the checks occur every 10 seconds.
Now this is a check every 10 seconds from every health checker involved within this health check so the actual frequency of the health checks occurring on the endpoint will be much more frequent.
This is one check every 10 seconds from every health checker.
You can specify the failure threshold: this is the number of consecutive health checks that an endpoint must pass or fail for Route 53 to change the current status.
So if you want to allocate a buffer and allow for the opportunity of the odd fail check not to influence the health state then you can specify a suitable value in this box.
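That buffering behaviour can be sketched as a tiny state machine in Python: the reported status only flips after the configured number of consecutive contrary observations. This is a conceptual model, not the actual Route 53 implementation:

```python
# Conceptual model of the failure threshold: the reported status only
# changes after `threshold` consecutive checks disagree with it, so
# the odd failed check doesn't flap the status.
class ThresholdedStatus:
    def __init__(self, threshold, healthy=True):
        self.threshold = threshold
        self.healthy = healthy
        self.streak = 0            # consecutive contrary observations

    def observe(self, check_passed):
        if check_passed == self.healthy:
            self.streak = 0        # observation agrees: reset the buffer
        else:
            self.streak += 1
            if self.streak >= self.threshold:
                self.healthy = check_passed
                self.streak = 0
        return self.healthy

status = ThresholdedStatus(threshold=3)
assert status.observe(False)       # 1 failure: still reported healthy
assert status.observe(False)       # 2 failures: still reported healthy
assert not status.observe(False)   # 3rd consecutive failure: unhealthy
```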
It's here where you can specify a simple check so HTTP or HTTPS or you can elect to use string matching to do more rich checks of application health.
So if you know that your application should deliver a certain string in the response body, then you can specify that here.
Now you can also configure a number of advanced options one of them is latency graph so you can show the latency of the checks against this endpoint.
You can invert the health check status so if the health check of an application is unhealthy you can invert it to healthy and vice versa.
So this is a fairly situational option that I haven't found much use for.
You also have the option of disabling the health check this might be useful if you're performing temporary maintenance on an application and if you check this box then even if the application endpoint reports as unhealthy it's considered healthy.
You also get the option of specifying the health checker regions you can use the recommended suggestion and the health checkers will come from these locations or you can select customize and pick the particular regions that you want to use.
In most cases you would use the recommended options.
Now if we just go ahead and enter some sample values here so I'm going to use 1.1.1.1.
I'm going to leave the host name blank, set the port number to 80, and then scroll down and enter a search string.
Again, we're not going to create this, so just enter a placeholder.
Click on "Next", and it's here where you can configure what happens when the health check fails.
Now this is completely optional; we can use health checks within resource records only, without configuring any notification.
But if we do want to configure a notification, we can create an alarm and send it to either an existing or a new SNS topic.
This is a method of integrating with other systems: we can have other AWS services configured to respond to notifications on this topic, or we can integrate external systems so that when a health check fails, external action is taken.
But this is what I wanted to show you; I just wanted to give you an overview of how it looks creating a health check within the console UI.
Now don't worry, you're actually going to be doing this in a demo lesson which is coming up elsewhere in this section, but I wanted to give you that initial exposure to how the console looks when creating a health check.
At this point, let's go ahead and finish up the theory component of this lesson by returning to the architecture.
Now you've seen how a health check is created; architecturally, health checks look something like this.
Let's assume that somewhere near the UK we have an application, Catergram, and we point a Route 53 record at this application; let's assume that this is catergram.io.
What we can do is associate a health check with this resource record, and doing so means that our application will be health checked by a globally distributed set of health checkers.
Each of these health checkers performs a periodic check of our application, and based on this check they report the resource as healthy or unhealthy.
If more than 18 percent of the health checkers report the endpoint as healthy, then the health check overall is healthy; otherwise it's reported as unhealthy.
In most cases, records which are unhealthy are not returned in response to queries.
Now you're going to see, throughout this section of the course and the wider course itself, how health checks can be used to influence how DNS responds to queries and how applications can react to component failure.
Route 53 is an essential design and operational tool that you can use to influence how resolution requests occur and how they're routed through to your various application components.
Understanding health checks is therefore essential to being able to design Route 53 infrastructure, integrate it with your applications, and then manage it day to day as an operational engineer.
It's really important that you understand this topic end to end, no matter which stream of the AWS certifications you're currently studying for.
Now, that's everything that I wanted to cover in this video, so go ahead and complete the video, and when you're ready, I'll look forward to you joining me in the next.
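The aggregation rule just described (the check is healthy overall when more than 18 percent of the distributed checkers report healthy) reduces to a one-line Python function; this is a conceptual sketch of the quorum logic only:

```python
# Conceptual model of how Route 53 aggregates distributed checker
# reports: the overall check is healthy when more than 18% of the
# health checkers report the endpoint as healthy.
def overall_healthy(checker_reports):
    healthy = sum(1 for ok in checker_reports if ok)
    return healthy / len(checker_reports) > 0.18

# 4 of 16 checkers healthy is 25%, above the 18% threshold: healthy.
assert overall_healthy([True] * 4 + [False] * 12)
# 2 of 16 checkers healthy is 12.5%, below the threshold: unhealthy.
assert not overall_healthy([True] * 2 + [False] * 14)
```

The low threshold reflects the design intent: a handful of checkers with a bad network path to the endpoint should not mark a working application as unhealthy.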
-
Welcome back and in this video I want to talk about the second Route 53 routing policy that I'm going to be covering in this series of videos and that's fail over routing.
Now let's just jump in and get started straight away.
With fail over routing we start with a hosted zone and, inside this hosted zone, a www record.
However with fail over routing we can add multiple records of the same name, a primary and a secondary.
Each of these records points at a resource and a common example is an out of band failure architecture where you have a primary application endpoint such as an EC2 instance and a backup or fail over resource using a different service such as an S3 bucket.
The key element to fail over routing is the inclusion of a health check.
The health check generally occurs on the primary record.
If the primary record is healthy, then any queries to www in this case resolve to the value of the primary record, which is the EC2 instance running Catergram in this example.
If the primary record fails its health check then the secondary value of the same name is returned in this case the S3 bucket.
The use case for fail over routing is simple.
Use it when you need to configure active passive fail over where you want to route traffic to a resource when that resource is healthy or to a different resource when the original resource is failing its health check.
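The resolution logic of failover routing is simple enough to express in a couple of lines of Python; this is a conceptual sketch, with made-up values standing in for the EC2 and S3 endpoints:

```python
# Conceptual model of failover routing: return the primary record's
# value while its associated health check passes, otherwise the
# secondary record's value.
def failover_resolve(primary_value, secondary_value, primary_healthy):
    return primary_value if primary_healthy else secondary_value

# Primary (EC2) healthy: queries resolve to it.
assert failover_resolve("ec2-ip", "s3-endpoint", True) == "ec2-ip"
# Health check fails: queries fail over to the secondary (S3).
assert failover_resolve("ec2-ip", "s3-endpoint", False) == "s3-endpoint"
```

This is the whole active-passive pattern: one decision, driven entirely by the primary's health check.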
Now this is a fairly simple concept that you'll be experiencing yourself in a demo video which is coming up very soon but at this point that's everything that I wanted to cover in this video.
So go ahead complete the video and when you're ready I look forward to you joining me in the next.
-
Welcome back and in this video I want to cover the first of a range of routing policies available within Route 53.
We're going to start with the default and as the name suggests it's the simplest.
This video is going to be pretty quick so let's jump in and get started straight away.
Simple routing starts with a hosted zone.
Let's assume it's a public hosted zone called animalsforlife.org.
With simple routing you can create one record per name.
In this example, WWW which is an A record type.
Each record using simple routing can have multiple values which are part of that same record.
When a client makes a request to resolve WWW and simple routing is used all of the values are returned in the same query in a random order.
The client chooses one of the values and then connects to that server based on the value in this case 1.2.3.4.
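That behaviour can be sketched in a few lines of Python: all of the record's values come back, shuffled, and the client picks one. A conceptual model only:

```python
import random

# Conceptual model of simple routing: every value of the record is
# returned in a random order, and the client picks one (typically
# the first) to connect to.
def simple_resolve(values):
    shuffled = list(values)        # copy so the record isn't mutated
    random.shuffle(shuffled)
    return shuffled

record = ["1.2.3.4", "5.6.7.8", "9.10.11.12"]   # hypothetical A record values
answer = simple_resolve(record)
assert sorted(answer) == sorted(record)          # all values always returned
chosen = answer[0]                               # client connects to one
assert chosen in record
```

The random ordering spreads clients roughly evenly across the values, but note there is no health awareness: a dead server's IP is returned just as often as a live one's.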
Simple routing is simple and you should use it when you want to route requests towards one single service.
In this example a web server.
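The behaviour just described can be modelled in a few lines: one record name holding multiple values, a resolver that returns every value in random order, and a client that picks one. The record name and addresses are illustrative.

```python
import random

# Simple routing: one record name with multiple values.
www_record = {"Name": "www.animalsforlife.org", "Type": "A",
              "Values": ["1.2.3.4", "1.2.3.5", "1.2.3.6"]}

def resolve(record):
    """Return all of the record's values, in a random order."""
    values = list(record["Values"])
    random.shuffle(values)
    return values

answers = resolve(www_record)   # every value comes back in one query
chosen = answers[0]             # the client typically connects to the first
```

Because every value is returned each time, there's no per-value health awareness — an unhealthy server's address is just as likely to be chosen as a healthy one's.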
The limitation of simple routing is that it doesn't support health checks and I'll be covering what health checks are in the next video.
But just remember with simple routing there are no checks that the resource being pointed at by the record is actually operational and that's important to understand because all of the other routing types within Route 53 offer some form of health checking and routing intelligence based on those health checks.
Simple routing is the one type of routing policy which doesn't support health checks and so it is fairly limited but it is simple to implement and manage.
So that's simple routing again as the name suggests it's simple it's not all that flexible and it doesn't really offer any exciting features but don't worry I'll be covering some advanced routing types over the coming videos.
For now just go ahead and complete this video and then when you're ready I look forward to you joining me in the next.
Welcome back and in this video I want to talk about the other type of hosted zone available within Route 53, and that's private hosted zones.
So let's jump in and get started straight away.
A private hosted zone is just like a public hosted zone in terms of how it operates only it's not public.
Instead of being public it's associated with VPCs within AWS and it's only accessible within VPCs that it's associated with.
You can associate a private hosted zone with VPCs in your account using the console UI, CLI or API, and with VPCs in different accounts using the CLI and API only.
Everything else is the same you can use them to create resource records and these are resolvable within VPCs.
It's even possible to use a technique called split view or split horizon DNS which is where you have public and private hosted zones of the same name meaning that you can have a different variant of a zone for private users versus public.
You might do this if you want your company intranet to run on the same address as your website and have your users be presented with your intranet when internal but the public website when anyone accesses from outside of your corporate network or if you wanted certain systems to be accessible via your business's DNS but only within your environment.
Now let's quickly step through how private hosted zones work visually so that you have more of an idea of the end-to-end architecture.
So we start with a private hosted zone and as with public zones we can create records within this zone.
Now from the public internet our users can do normal DNS queries, so for things like netflix.com, but the private hosted zone is inaccessible from the public internet.
It can be made accessible though from VPCs.
Let's assume all three of these VPCs have services inside them and use the Route 53 resolver, so the VPC+2 address.
Well any VPCs which we associate with the private hosted zone will be able to access that zone via the resolver.
Any VPCs which aren't associated will face the same problem as the user on the public internet on the left.
Access isn't available so private hosted zones are great when you need to provide records using DNS but maybe they're sensitive and need to be accessible only from internal VPCs.
Just remember to be able to access a private hosted zone the service needs to be running inside a VPC and that VPC needs to be associated with the private hosted zone.
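If you were doing this programmatically, the request parameters might look something like the following boto3-style sketch. The zone name, region, VPC IDs and zone ID are all placeholders.

```python
# Sketch: creating a private hosted zone associated with one VPC, then
# associating a second VPC with it. All IDs here are placeholders.
create_params = {
    "Name": "animalsforlife.org",
    "CallerReference": "unique-string-1",        # must be unique per request
    "HostedZoneConfig": {"PrivateZone": True},   # makes the zone private
    "VPC": {"VPCRegion": "us-east-1", "VPCId": "vpc-11111111"},
}
associate_params = {
    "HostedZoneId": "Z-PLACEHOLDER",
    "VPC": {"VPCRegion": "us-east-1", "VPCId": "vpc-22222222"},
}
# With boto3:
# r53 = boto3.client("route53")
# zone = r53.create_hosted_zone(**create_params)
# r53.associate_vpc_with_hosted_zone(**associate_params)
```

Only instances in the two associated VPCs, resolving via the VPC+2 address, would be able to see records in this zone.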
Now before I finish up this short lesson let's talk about split view or split horizon DNS.
Consider this scenario: you have a VPC running an Amazon WorkSpace, and to support some business applications, a private hosted zone with some records inside it.
The private hosted zone is associated with VPC 1 on the right meaning the workspace could use the Route 53 resolver to access the private hosted zone.
For example to access the accounting records stored within the private hosted zone.
Now the private hosted zone is not accessible from the public internet but what split view allows us to do is to create a public hosted zone with the same name.
This public hosted zone might only have a subset of the records that the private hosted zone has. From the public internet, access to the public hosted zone works in the way you would expect: via the ISP resolver server, then through to the DNS root servers, from there to the .org TLD servers, and from there to the animalsforlife.org name servers provided by Route 53.
Any records inside the public hosted zone would be accessible but records in the private hosted zone which are not in the public hosted zone so accounting in this example would be inaccessible from the public internet and this is a common architecture where you want to use the same domain name for public access and internal access but with a different set of records available to each.
It's something that you'll need to be comfortable with as an architect designing solutions, a developer integrating DNS into your applications or an engineer implementing this within AWS.
Now that's everything I want to cover on the theory of private hosted zone so go ahead and complete this video and when you're ready I look forward to you joining me in the next.
Welcome back.
In this video, I want to talk about Route 53 public hosted zones.
There are two types of DNS zones in Route 53, public and private.
To start with, let's cover off some general facts and then we can talk specifically about public hosted zones.
A hosted zone is a DNS database for a given section of the global DNS database, specifically for a domain such as AnimalsForLife.org.
Route 53 is a globally resilient service.
Its name servers are distributed globally and have the same dataset, so whole regions can be affected by outages and Route 53 will still function.
Hosted zones are created automatically when you register a domain using Route 53 and you saw that earlier in the course when I registered the AnimalsForLife.org domain.
They can also be created separately if you want to register a domain elsewhere and use Route 53 to host it.
There's a monthly fee to host each hosted zone and a fee for queries made against that hosted zone.
A hosted zone, whether public or private, hosts DNS records.
Examples of these are A records (or AAAA, the IP version 6 equivalent), MX records, NS records and TXT records, and I've covered these at an introductory level earlier in the course.
In summary, hosted zones are databases which are referenced via delegation using name server records.
A hosted zone when referenced in this way is authoritative for a domain such as AnimalsForLife.org.
When you register a domain, name server records for that domain are entered into the top level domain zone.
These point at your name servers and then your name servers and the zone that they host become authoritative for that domain.
A public hosted zone is a DNS database, so a zone file which is hosted by Route 53 on public name servers.
This means it's accessible from the public internet and within VPCs using the Route 53 resolver.
Architecturally, when you create a public hosted zone, Route 53 allocates four public name servers.
It's on those name servers that the zone file is hosted.
To integrate it with the public DNS system, you change the name server records for that domain to point at those four Route 53 name servers.
Inside a public hosted zone, you create resource records which are the actual items of data which DNS uses.
You can, and I'll cover this in an upcoming video, use Route 53 to host zone files for externally registered domains.
So for example, you can use Hover or GoDaddy to register a domain.
You can create the public hosted zone in Route 53, get the four name servers which are allocated to that hosted zone, and then via the Hover or GoDaddy interface, you can add those name servers into the DNS system for your domain.
And I'll cover how this works in detail in a future video.
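For illustration, this is roughly the shape of the response you get back when creating a public hosted zone, including the delegation set with its four name servers. The exact server names below are made up.

```python
# Sketch: the illustrative shape of a create-hosted-zone response.
# With boto3 this would come from:
#   boto3.client("route53").create_hosted_zone(
#       Name="animalsforlife.org", CallerReference="unique-string-1")
sample_response = {
    "HostedZone": {"Id": "/hostedzone/Z-PLACEHOLDER",
                   "Name": "animalsforlife.org."},
    "DelegationSet": {"NameServers": [
        "ns-1.awsdns-01.org", "ns-2.awsdns-02.co.uk",
        "ns-3.awsdns-03.com", "ns-4.awsdns-04.net",
    ]},
}
# These four values are what you enter into your registrar's (for example
# Hover or GoDaddy) name server settings for the domain.
name_servers = sample_response["DelegationSet"]["NameServers"]
```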
Visually, this is how a public hosted zone looks and functions.
We start by creating a public hosted zone, and for this example, it's animalsforlife.org.
Creating this allocates four Route 53 name servers for this zone, and those name servers are all accessible from the public internet.
They're also accessible from AWS VPCs using the Route 53 resolver, which assuming DNS is enabled for the VPC is directly accessible from an internal IP address of that VPC.
Inside this hosted zone, we can create some resource records, in this case a www A record, two MX records for email, and a TXT record.
Within the VPC, the access method is direct, the VPC resolver using the VPC plus two address, and this is accessible from any instances inside the VPC, which use this as their DNS resolver.
So they can query the hosted zone as they can any public DNS zone using the Route 53 resolver.
From a public DNS perspective, the architecture is the same in that the same zone file is used, but the mechanics are slightly different.
DNS starts with the DNS root servers, and these are the first servers queried by our user's resolver server.
So Bob is using a laptop talking to his ISP DNS resolver server, which queries the root servers.
The root servers have information on the .org top level domain, and so the ISP resolver server can then query the .org servers.
These servers host the .org zone file, and this zone file has an entry for AnimalsForLife.org which has four name servers, and these all point at the Route 53 public name servers for the public hosted zone for Animals For Life.
This process is called "walking the tree", and this is how any public internet host can access the records inside a public hosted zone using DNS.
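The "walking the tree" process described above can be modelled as a toy resolver that follows delegations from the root downwards. The zone contents below are illustrative, not real DNS data.

```python
# A toy model of "walking the tree": each zone either delegates to child
# name servers (NS) or answers authoritatively (A).
zones = {
    ".":                   {"org.": {"NS": "org-servers"}},
    "org.":                {"animalsforlife.org.": {"NS": "route53-servers"}},
    "animalsforlife.org.": {"www.animalsforlife.org.": {"A": "1.2.3.4"}},
}

def walk(name):
    """Start at the root and follow delegations until an A record is found."""
    zone = "."
    while True:
        for record, data in zones[zone].items():
            if name.endswith(record):
                if "A" in data:
                    return data["A"]   # authoritative answer
                zone = record          # delegation: descend into the child zone
                break
        else:
            return None                # no matching delegation or record

walk("www.animalsforlife.org.")   # root -> .org -> Route 53 -> "1.2.3.4"
```

A real resolver caches each step, so subsequent lookups usually skip straight to the Route 53 name servers.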
And that's how public hosted zones work.
They're just a zone file which is hosted on four name servers provided by Route 53.
This public hosted zone can be accessed from the public internet or any VPCs which are configured to allow DNS resolution.
There's a monthly cost for hosting this public hosted zone and a tiny charge for any queries made against it.
Almost nothing in the grand scheme of things, but for larger volume sites, it's something to keep in mind.
So that's public hosted zones, that's everything I wanted to cover in this video on the theory side of things.
So go ahead and complete this video and then when you're ready, I'll look forward to you joining me in the next.
Welcome back and in this lesson I want to cover two important EC2 optimisation topics, Enhanced Networking and EBS Optimised Instances.
Both of these are important on their own, both provide massive benefits to the way EC2 performs and they support other performance features within EC2 such as placement groups.
As a solutions architect understanding their architecture and benefits is essential.
So let's get started.
Now let's start with Enhanced Networking.
Enhanced Networking is a feature which is designed to improve the overall performance of EC2 networking.
It's a feature which is required for any high-end performance features such as cluster placement groups.
Enhanced Networking uses a technique called SR-IOV, or Single Root I/O Virtualisation.
And I've mentioned this earlier in the course.
At a high level it makes it so that a physical network interface inside an EC2 host is aware of virtualisation.
Without Enhanced Networking this is how networking looks on an EC2 host architecturally.
In this example we have two EC2 instances, each of them using one virtual network interface.
And both of these virtual network interfaces talk back to the EC2 host and each of them use the host's single physical network interface.
The crucial thing to understand here is that the physical network interface card isn't aware of virtualisation.
And so the host has to sit in the middle, controlling which instance has access to the physical card at any one time.
It's a process taking place in software so it's slower and it consumes a lot of host CPU.
When the host is under heavy load so CPU or IO it can cause drops in performance, spikes in latency and changes in bandwidth.
It's not an efficient system.
Enhanced Networking or SRIOV changes things.
Using this model the host has network interface cards which are aware of virtualisation.
Instead of presenting themselves as single physical network interface cards which the host needs to manage, it offers what you can think of as logical cards, multiple logical cards per physical card.
Each instance is given exclusive access to one of these logical cards and it sends data to this the same as it would do if it did have its own dedicated physical card.
The physical network interface card handles this process end to end without consuming mass amounts of host CPU.
And this means a few things which matter to us as solutions architects.
First in general it allows for higher IO across all instances on the host and lower host CPU as a result because the host CPU doesn't have the same level of involvement as when no enhanced networking is used.
What this translates into directly is more bandwidth.
It allows for much faster networking speeds because it can scale and it doesn't impact the host CPU.
Also because the process occurs directly between the virtual interface that the instance has and the logical interface that the physical card offers, you can achieve higher packets per second or PPS.
And this is great for applications which rely on networking performance, specifically those which need to shift lots of small packets around the network.
And lastly because the host CPU isn't really involved because it's offloaded to the physical network interface card, you get low latency and perhaps more importantly consistent low latency.
Enhanced networking is a feature which is either enabled by default or available for no charge on most modern EC2 instance types.
There's a lot of detail in making sure that you have it enabled, but for the solutions architect stream none of that is important.
As always though, I'll include some links attached to the lesson if you do want to know how to implement it operationally.
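For reference, here's a hedged boto3-style sketch of checking and enabling ENA-based enhanced networking on an instance. The instance ID is a placeholder, and operationally the instance generally needs to be stopped before the attribute can be modified.

```python
# Sketch: request parameters for checking and enabling ENA support.
# The instance ID is a placeholder.
describe_params = {
    "InstanceId": "i-0123456789abcdef0",
    "Attribute": "enaSupport",      # reports whether ENA is enabled
}
modify_params = {
    "InstanceId": "i-0123456789abcdef0",
    "EnaSupport": {"Value": True},  # enable ENA (instance must be stopped)
}
# With boto3:
# ec2 = boto3.client("ec2")
# ec2.describe_instance_attribute(**describe_params)
# ec2.modify_instance_attribute(**modify_params)
```

The AMI also has to support ENA for this to take effect, which is one of the operational details covered in the links attached to the lesson.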
Okay, so that's enhanced networking.
Let's move on to EBS optimized instances.
Whether an instance is EBS optimized or not depends on an option that's set on a per instance basis.
It's either on or it's off.
To understand what it does, it's useful to appreciate the context.
What we know already is that EBS is block storage for EC2, which is delivered over the network.
Historically, networking on EC2 instances was actually shared with the same network stack being used for both data networking and EBS storage networking.
And this resulted in contention and limited performance for both types of networking.
Simply put, an instance being EBS optimized means that some stack optimizations have taken place and dedicated capacity has been provided for that instance for EBS usage.
It means that faster speeds are possible with EBS and the storage side of things doesn't impact the data performance and vice versa.
Now, on most instances that you'll use at this point in time, it's supported and enabled by default at no extra charge.
Disabling it has no effect because the hardware now comes with the capability built in.
On some older instances, it's supported but enabling it costs extra.
EBS optimization is something that's required on instance types and sizes which offer higher levels of performance.
So things which offer high levels of throughput and IOPS, especially when using the GP2 and IO1 volume types, which promise low and consistent latency as well as high input output operations per second.
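If you did want to request the flag explicitly at launch, it's a single parameter. This boto3-style sketch uses a placeholder AMI ID, and on most modern instance types the flag is already on by default.

```python
# Sketch: explicitly requesting EBS optimisation at launch.
# The AMI ID is a placeholder.
launch_params = {
    "ImageId": "ami-placeholder",
    "InstanceType": "m5.large",
    "MinCount": 1,
    "MaxCount": 1,
    "EbsOptimized": True,   # dedicated capacity for EBS storage networking
}
# boto3.client("ec2").run_instances(**launch_params)
```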
So that's EBS optimization.
It's nothing complex.
It essentially just means adding dedicated capacity for storage networking to an EC2 instance.
And at this point in time, it's generally enabled and comes with all modern types of instances.
So it's something you don't have to worry about, but you do need to know that it exists.
Now, that's the theory that I wanted to cover.
I wanted to keep it brief.
There's a lot more involved in using both of these and understanding the effects that they can have.
But this is an architecture lesson for this stream.
You just need to know that both features exist and, at a high level, what they enable you to do.
So thanks for watching.
Go ahead, finish this video, and when you're ready you can join me in the next.
Welcome back.
In this lesson I want to cover EC2 dedicated hosts, a feature of EC2 which allows you to gain access to hosts dedicated for your use which you can then use to run EC2 instances.
Now I want to keep it brief because for the exam you just need to know that the feature exists and it tends to have a fairly narrow use case in the real world.
So let's just cover the really high level points and exactly how it works architecturally.
So let's jump in and get started.
An EC2 dedicated host as the name suggests is an EC2 host which is allocated to you in its entirety.
So allocated to your AWS account for you to use.
You pay for the host itself which is designed for a specific family of instances.
For example A1, C5, M5 and so on.
Because you're paying for the host there are no charges for any instances which are running on the host.
The host has a capacity and you're paying for that capacity in its entirety so you don't pay for instances running within that capacity.
Now you can pay for a host in a number of ways either on demand which is good for short term or uncertain requirements or once you understand long term requirements and patterns of usage you can purchase reservations with the same one or three year terms as the instances themselves.
And this uses the same payment method architecture so all upfront, partial upfront or no upfront.
The host hardware itself comes with a certain number of physical sockets and cores and this is important for two reasons.
Number one it dictates how many instances can be run on that host.
And number two software which is licensed based on physical sockets or cores can utilize this visibility of the hardware.
Some enterprise software is licensed based on the number of physical sockets or cores in the server.
Imagine if you're running some software on a small EC2 instance but you have to pay for the software licensing based on the total hardware in the host that that instance runs on.
Even though you can't use any of that extra hardware without paying for more instance fees.
With dedicated hosts you pay for the entire host so you can license based on that host which is available and dedicated to you.
And then you can use instances on that host free of charge after you've paid the dedicated host fees.
So the important thing to realize is you pay for the host.
Once you've paid for that host you don't have any extra EC2 instance charges.
You're covered for the consumption of the capacity on that host.
Now the default way that dedicated hosts work is that the hosts are designed for a specific family and size of instance.
So for example an A1 dedicated host comes with one socket and 16 cores.
All but a few types of dedicated hosts are designed to operate with one specific size at a time.
So you can get an A1 host which can run 16 A1 medium instances, or 8 large, or 4 extra large, or 2 2XL, or 1 4XL.
All of these options consume the 16 cores available.
And all but a few types of dedicated hosts require you to set that in advance.
So they require you to set in advance that one particular host can only run, say, 8 large instances, or 4 extra large, and you can't mix and match.
Newer types of dedicated hosts, so those running the Nitro virtualization platform, they offer more flexibility.
An example of this is an R5 dedicated host which offers 2 sockets and 48 cores.
Because this is Nitro based, you can use different sizes of instances at the same time up to your core limit of that dedicated host.
So one host might be running one 12XL, one 4XL and four 2XL instances, which consumes the 48 cores of that dedicated host.
Another host might use a different configuration, maybe four 4XL and four 2XL, which also consumes 48 cores.
With Nitro based dedicated hosts, there's a lot more flexibility allowing a business to maximize the value of that host, especially if they have varying requirements for different sizes of instances.
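The core arithmetic above can be sanity-checked with a few lines of Python. The per-size core counts below are illustrative, based on the rough rule of one physical core per two vCPUs.

```python
# Sketch: checking that a mix of instance sizes fits on a Nitro dedicated
# host. Core counts per size are illustrative (one physical core ~ 2 vCPUs).
CORES = {"2xlarge": 4, "4xlarge": 8, "12xlarge": 24}
HOST_CORES = 48   # e.g. an R5 dedicated host: 2 sockets, 48 cores

def cores_used(mix):
    """mix maps instance size -> instance count."""
    return sum(CORES[size] * count for size, count in mix.items())

config_a = {"12xlarge": 1, "4xlarge": 1, "2xlarge": 4}   # 24 + 8 + 16 = 48
config_b = {"4xlarge": 4, "2xlarge": 4}                  # 32 + 16 = 48

cores_used(config_a)   # both configurations fill the host exactly
cores_used(config_b)
```

This is exactly the flexibility that non-Nitro hosts lack: they'd require all 48 cores to be consumed by a single instance size set in advance.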
Now this is a great link which I've included in the lesson text which details the different dedicated host options available.
So you've got different dedicated hosts for different families of instance, for example the A1 instance family.
This offers 1 physical socket and 16 physical cores and offers different configurations for different sizes of instances.
Now if you scroll all the way down, it also gives an overview of some of the Nitro based dedicated hosts which support this mix and match capability.
So we've got the R5 dedicated host that I just talked about on the previous screen.
We've also got the C5 dedicated host and this gives 2 example scenarios.
In scenario 1 you've got one C5 9XL instance, two C5 4XL instances and one C5 XL instance.
And that's a total of 36 cores consumed.
There's also another scenario though where you've got four 4XL, one XL and two large instances.
Same core consumption but a different configuration of instances.
And again, I'll make sure this is included in the lesson description.
It also gives the on-demand pricing for all of the different types of dedicated host.
Now there are some limitations that you do need to keep in mind for dedicated host.
The first one is AMI limits.
You can't use RHEL, SUSE Linux or Windows AMIs with dedicated hosts.
They are simply not supported.
You cannot use Amazon RDS instances.
Again, they're not supported.
You can't utilize placement groups.
They're not supported on dedicated hosts.
And there's a lesson in this section which talks in depth about placement groups.
But in this context, as it relates to dedicated hosts, you cannot use placement groups with dedicated hosts.
It's not supported.
Now dedicated hosts can be shared with other accounts inside your organization using the RAM product, which is the Resource Access Manager.
It's a way that you can share certain AWS products and services between accounts.
We haven't covered it yet, but we will do later in the course.
You're able to share a dedicated host with other accounts in your organization.
And other AWS accounts in your organization can then create instances on that host.
Those other accounts which have a dedicated host shared into them can only see instances that they create on that dedicated host.
They can't see any other instances.
And you, as the person who owns the dedicated host, you can see all of the instances running on that host.
But you can't control any of the instances running on your host created by any accounts you share that host with.
So there is a separation.
You can see all of the instances on your host.
You can only control the ones that you create.
And then other accounts who get that host shared with them, they can only see instances that they create.
So there's a nice security and visibility separation.
Now that's all of the theory that I wanted to cover around the topic of dedicated hosts.
You don't need to know anything else for the exam.
And if you do utilize dedicated hosts for any production usage in the real world, it is generally going to be around software licensing.
Generally using dedicated hosts, there are restrictions.
Obviously they are specific to a family of instance.
So it gives you less customizability.
It gives you less flexibility on sizing.
And you generally do it if you've got licensing issues that you need solved by this product.
In most cases, in most situations, it's not the approach you would take if you just want to run EC2 instances.
But with that being said, go ahead, complete this video.
And when you're ready, I'll look forward to you joining me in the next one.
Welcome back and in this lesson I want to talk about an important feature of EC2 known as placement groups.
Normally when you launch an EC2 instance its physical location is selected by AWS placing it on whatever EC2 host makes the most sense within the availability zone that it's launched in.
Placement groups allow you to influence placement ensuring that instances are either physically close together or not.
As a Solutions Architect understanding how placement groups work and why you would use them is essential so let's jump in and get started.
There are currently three types of placement groups for EC2.
All of them influence how instances are arranged on physical hardware but each of them do it for different underlying reasons.
At a high level we have cluster placement groups and these are designed to ensure that any instances in a single cluster placement group are physically close together.
We've got spread placement groups which are the inverse ensuring that instances are all using different underlying hardware and then we've got partition placement groups and these are designed for distributed and replicated applications which have infrastructure awareness.
So where you want groups of instances but where each group is on different hardware.
So I'm going to cover each of them in detail in this lesson once we talk about each of them they'll all make sense.
Now cluster and spread tend to be pretty easy to understand.
Partition is less obvious if you haven't used the type of application which they support but it will be clear once I've explained it and once you've finished with this lesson.
Now let's start with cluster placement groups.
Cluster placement groups are used when you want to achieve the absolute highest level of performance possible within EC2.
With cluster placement groups you create the group and best practice is that you launch all of the instances which will be in the group all at the same time.
This ensures that AWS allocate capacity for everything that you require.
So for example if you launch with nine instances imagine that AWS place you in a location with the capacity for 12.
If you want to double the number of instances you might have issues.
Best practice is to use the same type of instance as well as launching them all at the same time because then AWS will place all of them in a suitable location with capacity for everything that you need.
Now cluster placement groups because of their performance focus have to be launched into a single availability zone.
Now how this works is that when you create the placement group you don't specify an availability zone.
Instead when you launch the first instance or instances into that placement group it will lock that placement group to whichever availability zone that instance is also launched into.
The idea with cluster placement groups is that all of the instances within the same cluster placement group generally use the same rack but often the same EC2 host.
All of the instances within a placement group have fast direct bandwidth to all other instances inside the same placement group.
And when transferring data between instances within that cluster placement group, they can achieve single stream transfer rates of 10 Gbps, versus the usual 5 Gbps which is achievable normally.
Now this is single stream transfer rates; while some instances do offer significantly faster networking overall, you're always going to be limited to the speed that a single stream of data, a single connection, can achieve.
And inside a cluster placement group this is 10 Gbps, versus the 5 Gbps which is achievable normally.
Now the connections between these instances because of the physical placement they're the lowest latency possible and the maximum packets per second possible within AWS.
Now obviously, to achieve these levels of performance, you need to be using instances with high performance networking, i.e. more bandwidth than the 10 Gbps single stream, and you should also use enhanced networking on all instances; to achieve the lowest latency and maximum packets per second, enhanced networking is definitely required.
So cluster placement groups are used when you really need performance.
They're needed to achieve the highest levels of throughput and the lowest consistent latencies within AWS, but the trade-off is that, because of the physical placement, if the hardware they're running on fails, it could take down all of the instances within that cluster placement group.
So cluster placement groups offer little to no resilience.
Now some key points which you need to be aware of for the exam you cannot span availability zones with cluster placement groups this is locked when launching the first instance.
You can span VPC peers but this does significantly impact performance in a negative way.
Cluster placement groups are not supported on every type of instance; they require a supported instance type.
And while it's not mandatory, it's strongly recommended that you launch all of the instances as the same type and at the same time, because that gives the best results.
Now cluster placement groups offer 10 Gbps of single stream performance, and the type of use cases where you would use them are any workloads which demand performance, so fast speeds and low latency.
So this might be things like high performance compute or other scientific analysis which demand fast node-to-node speed and low consistent latency.
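Following the best practice described above, here's a hedged boto3-style sketch of creating a cluster placement group and launching all of the instances into it at the same time. The group name, instance type and AMI ID are placeholders.

```python
# Sketch: a cluster placement group with all instances launched at once.
# Names and the AMI ID are placeholders.
group_params = {"GroupName": "hpc-cluster", "Strategy": "cluster"}
launch_params = {
    "ImageId": "ami-placeholder",
    "InstanceType": "c5n.18xlarge",   # a type with high-performance networking
    "MinCount": 9,                    # launch everything in one request so
    "MaxCount": 9,                    # AWS allocates capacity for it all
    "Placement": {"GroupName": "hpc-cluster"},
}
# With boto3:
# ec2 = boto3.client("ec2")
# ec2.create_placement_group(**group_params)
# ec2.run_instances(**launch_params)
```

Note there's no availability zone in the group definition — the zone gets locked when the first instances launch into the group.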
Now the next type of placement group I want to talk about is spread placement groups and these are designed to ensure the maximum amount of availability and resilience for an application.
So spread placement groups can span multiple availability zones in this case availability zone A and availability zone B.
Instances which are placed into a spread placement group are located on separate isolated infrastructure racks within each availability zone so each instance has its own isolated networking and power supply separate from any of the other instances also within that same spread placement group.
This means if a single rack fails either from a networking or power perspective the fault can be isolated to one of those racks.
Now with spread placement groups there is a limit of seven instances per availability zone. Because each instance is on a completely separate infrastructure rack, and because there are limits on the number of these within each availability zone, you have that hard limit of seven instances per availability zone for spread placement groups.
The more availability zones in a region, the more instances can logically be part of each spread placement group, but remember, it's seven instances per availability zone in that region.
Now again just some points that you should know for the exam spread placement groups provides infrastructure isolation so you're guaranteed that every instance launched into a spread placement group will be entirely separated from every other instance that's also in that spread placement group.
Each instance runs from a different rack each rack has its own network and power source and then just to stress again there is this hard limit of seven instances per availability zone.
Now with spread placement groups you can't use dedicated instances or hosts; they're not supported.
In terms of use cases, spread placement groups are used when you have a small number of critical instances that need to be kept separated from each other, so maybe mirrors of a file server, or different domain controllers within an organization.
Use them anywhere you've got a specific application and need to ensure the highest possible availability for each member of that application, where you want a separate blast radius for each of the servers, so that if one fails there is as small a chance as possible that any of the other instances also fail.
You have to keep in mind these limits it's seven instances per availability zone but if you want to maximize the availability of your application this is the type of placement group to choose.
Now lastly we've got partition placement groups and these have a similar architecture to spread placement groups which is why they're often so difficult to understand fully and why it's often so difficult to pick between partition placement groups and spread placement groups.
Partition placement groups are designed for when you have infrastructure where you have more than seven instances per availability zone but you still need the ability to separate those instances into separate fault domains.
Now a partition placement group can be created across multiple availability zones in a region, in this example AZ A and AZ B, and when you're creating the partition placement group you specify a number of partitions, with a maximum of seven per availability zone in that region.
Now each partition inside the placement group has its own racks with isolated power and networking and there is a guarantee of no sharing of infrastructure between those partitions.
Now so far this sounds like spread placement groups except with partition placement groups you can launch as many instances as you need into the group and you can either select the partition explicitly or have EC2 make that decision on your behalf.
With spread placement groups remember you had a maximum of seven instances per availability zone and you knew 100% that each instance within that spread placement group was separated from every other instance in terms of hardware.
With partition placement groups each partition is isolated but you get to control which partition to launch instances into.
If you launch 10 instances into one partition and it fails you lose all 10 instances.
If you launch seven instances and put one into each separate partition then it behaves very much like a spread placement group.
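To make the "let EC2 decide" behaviour concrete, here's a small Python sketch of a round-robin spread of instances across partitions. This is an illustration only — the real placement logic is internal to AWS — but it shows why launching many instances without picking partitions still distributes them across the fault domains.

```python
# Illustration only: distribute instances across partitions round-robin,
# similar in spirit to how EC2 spreads instances when you don't pick a
# partition explicitly (the actual placement algorithm is AWS-internal).
def assign_partitions(instance_ids, partition_count):
    """Map each instance ID to a partition number (1-based), round-robin."""
    return {
        instance_id: (index % partition_count) + 1
        for index, instance_id in enumerate(instance_ids)
    }

# Ten instances over the maximum of seven partitions: partitions 1-3
# end up holding two instances each, the rest hold one.
instances = [f"i-{n:04d}" for n in range(10)]
placement = assign_partitions(instances, 7)
```

With more instances than partitions, some partitions necessarily hold several instances — which is exactly why a partition failure can take out more than one instance, unlike a spread placement group.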
Now the key to understanding the difference is that partition placement groups are designed for huge scale parallel processing systems where you need to create groupings of instances and have them separated.
You as the designer of a system can have control over which instances are in the same and different partitions so you can design your own resilient architecture.
Partition placement groups offer visibility into the partitions.
You can see which instances are in which partitions and you can share this information with topology aware applications such as HDFS, HBase and Cassandra.
Now these applications use this information to make intelligent data replication decisions.
Imagine that you had an application which used 75 EC2 instances.
Each of those instances had its own storage and that application replicated data three times across that 75 instances.
So each piece of data was replicated on three instances and so essentially you had three replication groups each with 25 instances.
If you didn't have the ability to use partition placement groups then in theory all of those 75 instances could be in the same hardware and so you wouldn't have that resiliency.
With partition placement groups, if the application is topology aware then it becomes possible to replicate data across different EC2 instances knowing that those instances are in separate partitions. This allows more complex applications to achieve the same types of resilience as you get with spread placement groups, only the application has an awareness of that topology and can cope with more than seven instances.
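As a sketch of the kind of decision a topology aware application makes, the hypothetical helper below picks three distinct partitions for each piece of data, so that no single partition failure can remove more than one replica. The hashing scheme is purely illustrative — it is not how HDFS, HBase or Cassandra actually assign replicas.

```python
# Hypothetical topology-aware replication decision: place each data key's
# replicas into distinct partitions so one partition failure can only
# ever lose one replica of any given piece of data.
def replica_partitions(data_key, partition_count, replicas=3):
    """Deterministically pick `replicas` distinct partitions for a key."""
    start = hash(data_key) % partition_count
    # Consecutive offsets modulo the partition count are guaranteed
    # distinct as long as replicas <= partition_count.
    return [((start + offset) % partition_count) + 1 for offset in range(replicas)]

parts = replica_partitions("user-42", 7)  # three distinct partitions in 1..7
```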
So the difference between spread and partition placement is that with spread placement it's all handled for you, but you have that seven instance per availability zone limit. With partition placement groups you can have more instances, but you, or your topology aware application, need to administer the partition placement.
For larger scale applications that support this type of topology awareness this can significantly improve your resilience.
Now some key points for the exam around partition placement groups: there are seven partitions per availability zone, and instances can be placed into a specific partition or you can allow EC2 to automatically control that placement.
Partition placement groups are great for topology aware applications such as HDFS, HBase and Cassandra, and they can help a topology aware application to contain the impact of a failure to a specific part of that application.
So by the application and AWS working together using partition placement groups it becomes possible for large-scale systems to achieve significant levels of resilience and effective replication between different components of the application.
Now it's essential that you understand the difference between all three for the exam so make sure before moving on in the course you are entirely comfortable about the differences between spread placement groups and partition placement groups and then the different situations where you would choose to use cluster, spread and partition.
With that being said though that's everything I wanted to cover so go ahead and complete this lesson and when you're ready I look forward to you joining me in the next.
learn.cantrill.io
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
So let's go back to the instance now.
Just press enter a few times to make sure it hasn't timed out.
That's good.
Now there's a small bug fix that we need to do before we move on.
The CloudWatch agent expects a piece of system software to be installed called CollectD.
And on Amazon Linux that is not installed.
So we need to do two things.
The first is to create a directory that the agent expects to exist.
So run that command.
And the second is to create a database file that the agent also expects to exist.
And we can do that by running this command.
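For reference, the two fixes this demo applies amount to creating a directory and an empty database file — the exact paths below are the usual CollectD locations the CloudWatch agent expects, so treat them as assumptions if your agent version differs. A Python equivalent, with the base path parameterised so it can be exercised without root, might look like this:

```python
# Equivalent, in effect, to the demo's two shell commands (assumed paths):
#   sudo mkdir -p /usr/share/collectd
#   sudo touch /usr/share/collectd/types.db
from pathlib import Path

def create_collectd_stub(base="/usr/share/collectd"):
    """Create the directory and empty types.db file the agent expects."""
    directory = Path(base)
    directory.mkdir(parents=True, exist_ok=True)  # like mkdir -p
    database = directory / "types.db"
    database.touch(exist_ok=True)                 # like touch
    return database
```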
Now at this point we're ready to move on to the final step.
So we've installed the agent and we've run the configuration wizard to generate the agent configuration.
And we're now safely stored inside the parameter store.
The final step is to start up the CloudWatch agent and provide it with the configuration that's stored inside the parameter store.
And by doing that the agent can access the configuration.
It can download it.
It can configure itself as per that configuration.
And then because we've got an attached instance role that has the permissions required, it can also inject all of the logging data for the web server and the system into CloudWatch logs.
So the final step is to run this command.
So this essentially runs amazon-cloudwatch-agent-ctl.
It specifies a command line option to fetch the configuration.
And it uses -c to specify ssm: followed by the Parameter Store parameter name.
Essentially what this command does is to start up the agent, pull the config from the parameter store, make sure the agent is running and then it will start capturing that logging data and start injecting it into CloudWatch logs.
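The start command can be sketched as an argument list like the one below. The -a fetch-config, -m ec2, -c ssm:<name> and -s options follow the documented amazon-cloudwatch-agent-ctl interface, and the parameter name shown is the default the wizard suggested in this demo.

```python
# Sketch of the agent start command as an argument list, using the
# documented amazon-cloudwatch-agent-ctl options.
def agent_start_command(parameter_name="AmazonCloudWatch-linux"):
    return [
        "sudo",
        "/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl",
        "-a", "fetch-config",            # action: fetch the configuration
        "-m", "ec2",                     # mode: running on an EC2 instance
        "-c", f"ssm:{parameter_name}",   # config source: Parameter Store
        "-s",                            # start the agent after fetching
    ]

command = agent_start_command()
```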
So at this point if it's functioning correctly, what you should be able to do is go back to the AWS console, go to services, type CloudWatch and then select CloudWatch to move to the CloudWatch console.
Then if we go to log groups, now you might see a lot of log groups here.
That's fine.
Every time you apply the animals for life VPC templates, it's actually using a Lambda function which we haven't covered yet to apply an IP version six workaround, which I'll explain later in the course when we cover Lambda.
What you should find though is if you just scroll down all the way to the bottom, you should see either one, two or three of the log groups that we've created.
In this example on screen now, you can see that I have /var/log/httpd/error_log.
Now these logs will start to appear when they start getting new entries and those entries are sent into CloudWatch.
So right now you can see that I only have the error log.
Now if you don't see access_log, what you can do is go back to the EC2 console, select the WordPress instance that you've created using the one click deployment and then copy the public IP version 4 address into your clipboard.
Don't use this link, just copy the IP address and then open that in a new tab.
Now by doing that, it will generate some activity within the Apache web server and that will put some log items into the access log and that will mean that that logging information will then be injected into CloudWatch logs using the CloudWatch agent.
So if we move back to CloudWatch logs and then refresh, scroll down to the bottom.
Now we can see the access_log file.
Open the log stream for the EC2 instance.
This log file details any accesses to the web server on the EC2 instance.
You won't have a lot of entries in this.
Likewise, if you go back to log groups and look for the error log, that will detail any errors, any accesses which weren't successfully served.
So if you try to access a web page which doesn't exist, if there's a server error or any module errors, these will show inside this log group.
Now also, because we're using the CloudWatch agent, we also have access to some metrics inside the EC2 instance that we otherwise would not have had.
If we click on metrics and just drag this up slightly so we can see it, you'll see the AWS namespaces.
So these are namespaces with metrics inside that you would have had access to before, but there'll also be the CWAgent namespace and inside here, just maybe select the image ID, instance ID, instance type name.
Inside there, you'll see all of the metrics that you now have access to because you have the CloudWatch agent installed on this EC2 instance.
So these are detailed operating system level metrics, such as disk I/O reads and writes, and you would not have had access to these before installing the agent.
If we select another one, image ID, instance ID, instance type, CPU, we'll be able to see the CPU cores that are on this instance together with the I/O wait and the user values.
Again, these are things that you would not have had access to at this level of detail without the CloudWatch agent being installed.
Now I do encourage you to explore all of the different metrics that you now have access to as well as to how the log groups and log streams look with this agent installed.
But this is the end of what I had planned for this demo lesson.
So as always, we want to clear up all of the infrastructure that we've created within this demo lesson.
So to do that, I want you to move back to the EC2 console, right click on this instance, go down to security, select modify IAM role, and then remove the CloudWatch role from this instance.
You'll need to confirm that by following the instructions.
So to detach that role, then click on services and move back to IAM, click on roles.
And I want you to remove the CloudWatch role that you created earlier in this demo.
So select it and then click delete role.
You'll need to confirm that deletion.
What we're not going to do is delete the parameter value that we've created.
So if we go to services and then move back to systems manager, go to parameter store, because this is a standard parameter.
This won't incur any charges.
And we're going to be using this later on in future lessons of this course and other courses.
So this is a standard configuration for the CloudWatch agent, which we'll be using elsewhere in the course.
So we're going to leave this in place.
The last piece of cleanup that you'll need to do is to go back to the CloudFormation console.
You should have the single CW agent stack in place that you created at the start of the demo using the one click deployment.
Go ahead and select the stack, click on delete, and then confirm that deletion.
And once that's completed, all of the infrastructure you've used in this demo will be removed and the account will be back in the same state as it was at the start of this demo.
Now that's everything that I wanted you to do in this demo.
I just wanted to give you a brief overview of how to manually install the CloudWatch agent within an EC2 instance.
Now there are other ways to perform this installation.
You can use systems manager or bake it into AMIs or you can bootstrap it in using the process that you've seen earlier in the course.
We're going to be using the CloudWatch agent during future demos of the course to get access to this rich metric and logging information.
So most of the demos which follow in the course will include the CloudWatch agent configuration.
At this point though, that is everything I wanted you to do in this demo.
Go ahead, complete this video, and when you're ready, I'll look forward to you joining me in the next.
learn.cantrill.io
Welcome back and welcome to this demo where together we'll be installing the CloudWatch agent to capture and inject logging data for three different log files into CloudWatch logs as well as giving us access to some metrics inside the OS that we wouldn't have otherwise had visibility of.
So it's going to be a really good demonstration to show you the power of CloudWatch and CloudWatch logs when combined with the CloudWatch agent.
Now in order to do this demo you're going to need to deploy some infrastructure.
To do so just make sure that you're logged in to the general AWS account, so the management account of the organization and as always make sure you've got the Northern Virginia region selected.
Now attached to this lesson is a one-click deployment URL which will deploy the infrastructure that you'll be using during this demo.
So go ahead and click on that link.
This will take you to a quick create stack screen.
The stack name should be pre-populated with CW agent.
You just need to scroll all the way down to the bottom, acknowledge the capabilities and click on create stack.
Also attached to this lesson is a lesson commands document which will contain all of the commands you'll be using during this demo lesson.
So go ahead and open that in a new tab.
Now you're going to need to let this cloud formation stack move into a create complete state before you continue the demo.
So go ahead and pause the video, wait for the status to change to create complete and then you're good to continue.
Okay, so now this stack is in a create complete state, we're good to continue the demo.
Now during this demo lesson you're going to be installing the cloud watch agent on an EC2 instance and this EC2 instance has been provisioned by this one-click deployment.
So the first thing that we need to do is to move across to the EC2 console and connect to this instance.
Once you're at the EC2 console click on instances running.
You should see one single EC2 instance called A4L WordPress.
Just go ahead and select this, right-click on it, select connect.
We're going to connect into this instance using EC2 instance connect so make sure that's selected.
Make sure also that the username is set to ec2-user and then connect into the instance.
Now if everything's working as it should be you should see the animals for life custom login banner when you log into the instance.
In my case I do see that and that means everything's working as expected.
So this demonstration is going to have a number of steps.
First we need to download the CloudWatch agent, then we need to install the agent, then we need to generate the configuration file that this install of the agent, as well as any future installs, can use, and then we need to get the CloudWatch agent to read this config and start capturing and injecting those logs into CloudWatch logs.
So step one is to download and install the agent, and the command to do this is inside the lesson commands document which is attached to this lesson; that will install the agent but, crucially, it won't start it.
What we need to do before we can start the agent is to generate the config file that we'll use to configure this and any future agents. But because we also want to store that config file inside the Parameter Store, and because we also want to give this instance permissions to interact with CloudWatch logs, before we continue we need to attach an IAM role to this instance, an EC2 instance role, and that's the next step.
So we need to move back to the EC2 console, click on services and then open the IAM console, because we'll be creating an IAM role to attach to this instance. You'll need to go to roles, then create role; it'll be an AWS service role using EC2, so select EC2 and then click on next. We'll need to attach two managed policies to this role, and I've included the names of those managed policies in the lesson commands document. The first is CloudWatchAgentServerPolicy, so make sure you type that in the filter policies box and then check that box, and the second is AmazonSSMFullAccess, so type that in the box, select that policy and then scroll down and click on next. We'll call this role CloudWatchRole, so enter CloudWatchRole and click on create role.

Once we've done that, we can attach this role to our EC2 instance. We need to move back to the EC2 console, go to instances, right click on the instance, go to security and then modify IAM role, then click on the drop down and select CloudWatchRole, which is the role that you've just created, then click update IAM role. Now that we've given that instance the permissions that it needs to perform the next set of steps, go ahead and connect to that instance again. The tab that you previously had open may have timed out, so if it doesn't respond you'll need to close it down and reopen it; if it does respond, that's fine, keep the existing tab.

Once we're back in the terminal for the instance, we need to start the CloudWatch agent configuration wizard, and the command to do that is also in the lesson commands document attached to this lesson, so go ahead and paste that in and press enter, and that will start the configuration wizard for the CloudWatch agent. Now for most of these values we can accept the defaults, but we need to be careful because there are a number of them that we can't. So press enter and accept the default for the operating system (it should automatically detect Linux), press enter (it should automatically detect that it's running on an EC2 instance), press enter to use the root user (again, that should be the default), press enter for StatsD, press enter for the StatsD port, press enter for the interval, press enter for the aggregation interval, press enter to monitor metrics from CollectD (again, that's the default), press enter to monitor host metrics (so CPU and memory), press enter to monitor CPU metrics per core, press enter for the additional dimensions, press enter to aggregate EC2 dimensions, and press enter for the default resolution (so 60 seconds). For the default metric config, the default will be basic; go ahead and enter 3 for advanced. This captures additional operating system metrics that we might actually want, so use 3 for this value. Press enter to indicate that we're satisfied with the above config.

Next we'll move to the log configuration part of this wizard. Press enter for the default of no, we don't have an existing CloudWatch log agent config to import, and press enter, which is the default, for yes, we do want to monitor log files. You'll be asked for the log file path to monitor, and again these are in the lesson commands document. The first log path is /var/log/secure, so enter that and press enter. You'll be asked for the log group name; the default is just the log name itself, so secure, but we're going to enter the full path. I always prefer using the full path for the log group names for any system logs, so enter /var/log/secure again. You'll be asked for the log stream name; remembering the theory part of this lesson, I talked about how a log stream will be named after the instance which is injecting those logs, so the default choice is to do that, to use the instance ID, so press enter. It's here where you can specify a log group retention in days; we're just going to accept the default for the log group retention value. The default will be yes, we do want to specify additional log files, so press enter.

The log file path for this one will be /var/log/httpd/access_log, so enter that. The log group name will again default to the name of the actual log; we want the full path, so enter the full path again and press enter. The log stream name default is again the instance ID, which is fine, so just press enter, and go ahead and accept the default for the log group retention in days. Press enter again; we've got one more log file that we want to enter. This time the log file path is /var/log/httpd/error_log. Again, the log group name will default to the name of the actual log and we want to use the full path, so enter the same thing again. The default choice for the log stream name will again be the instance ID; that's fine, press enter, and go ahead and accept the default for the log group retention in days. Now we've finished adding log files and we don't want to log any additional files, so press 2 and that will complete the logging section of this wizard.

It's asking us to confirm that we're happy with this configuration file, and it's telling us that the configuration file is stored at /opt/aws/amazon-cloudwatch-agent/bin/config.json. Now that's where it stores it on the local file system, but we can also elect to store this JSON configuration inside the Parameter Store, and I thought that since we've previously talked about the theory of the Parameter Store and done a little bit of interaction, it would be useful for you to see exactly how it can be used in a more production-like setting. The default is to store the configuration in the Parameter Store, so we're going to allow that; press enter. It'll ask us for the parameter name to use, and the default is AmazonCloudWatch-linux, which is fine, so press enter. It'll ask us for the region to use, because the Parameter Store is, like many other services, a regional service, and the default region is the one the instance is in, so it automatically detects that we're in us-east-1, which is Northern Virginia; go ahead and accept that default choice. It'll ask us for the credentials that it can use to send that configuration into the Parameter Store. Now these credentials will be obtained from the role that we attached to this instance in the previous step, so you can accept the default choice, and it'll use those credentials to store that configuration inside the Parameter Store.

If we move back to the EC2 console and switch across to the Parameter Store, so just type SSM to move to Systems Manager, which is the parent product of the Parameter Store, and go down to the Parameter Store item on the menu on the left, we'll be able to see this single parameter, AmazonCloudWatch-linux. If we open that up and just scroll down, we can see that the value is a JSON document with the full configuration of the CloudWatch agent. So we can now use this parameter to configure the agent on this EC2 instance as well as any other EC2 instances we want to deploy. If you create the CloudWatch configuration once and then store it into the Parameter Store, then when you create EC2 instances at scale, as you'll see how to do later in the course when we talk about auto scaling groups, you can use the Parameter Store to deploy this type of configuration at scale in a secure way.

Okay, so this is the end of part one of this lesson. It was getting a little bit on the long side, so I wanted to add a break; it's an opportunity just to take a rest or grab a coffee. Part two will be continuing immediately from the end of part one, so go ahead, complete the video, and when you're ready, join me in part two.
learn.cantrill.io
Welcome back.
So far in the course you've had a brief exposure to CloudWatch and CloudWatch logs and you know that CloudWatch monitors certain performance and reliability aspects of EC2 but crucially only those metrics that are available on the external face of an EC2 instance.
There are situations when you need to enable monitoring inside an instance, so you have access to certain performance counters of the operating system itself.
Being able to look at the processes running on an instance, or the memory consumption of those processes, means having access to certain operating system level performance metrics that you cannot see from outside the instance.
You also might want to allow access to system and application logging from within the EC2 instance.
So application logs and system logs also from within the operating system of an EC2 instance.
So in this lesson I want to step through exactly how this works and what you need to use to achieve it.
So let's get started.
Now a quick summary of where we're at so far in the course relevant to this topic.
So I just mentioned that you know now that CloudWatch is the product responsible for storing and managing metrics within AWS, and you also know that CloudWatch logs is a subset of that product aimed at storing, managing and visualizing any logging data, but neither of those products can natively capture any data or logs from inside an EC2 instance.
The products aren't capable of getting visibility inside of an EC2 instance natively.
The inside of an instance is opaque to CloudWatch and CloudWatch logs by default.
To provide this visibility the CloudWatch agent is required and this is a piece of software which runs inside an EC2 instance.
So running on the operating system it captures OS visible data and sends it into CloudWatch or CloudWatch logs so that you can then use it and visualize it within the console of both of those products.
And logically for the CloudWatch agent to function it needs to have the configuration and permissions to be able to send that data into both of those products.
So in summary in order for CloudWatch and CloudWatch logs to have access inside of an EC2 instance then there's some configuration and security work required in addition to having to install the CloudWatch agent and that's what I want to cover over the remainder of this lesson and the upcoming demo lesson.
Architecturally the CloudWatch agent is pretty simple to understand.
We've got an EC2 instance on its own.
For example the animals for life WordPress instance from the previous demos.
It's incapable of injecting any logging into CloudWatch logs without the agent being installed.
So to fix that we need to install the CloudWatch agent within the EC2 instance and the agent will need some configuration.
So it will need to know exactly what information to inject into CloudWatch and CloudWatch logs.
So we need to configure the agent.
We need to supply the configuration information so that the agent knows what to do.
The agent also needs some way of interacting with AWS, some permissions.
We know now that it's bad practice to add long-term credentials to an instance so we don't want to do that but that aside it's also difficult to manage that at scale.
So best practice for using this type of architecture is to create an IAM role with permissions to interact with CloudWatch logs and then we can attach this IAM role to the EC2 instance providing the instance or more specifically anything running on the instance with access to the CloudWatch and CloudWatch logs service.
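Under the hood, an IAM role for EC2 like this rests on a trust policy that lets the EC2 service assume the role, so software running on the instance (such as the CloudWatch agent) can use the role's permissions. A minimal sketch of that trust policy document, built here as a Python dictionary:

```python
# Minimal sketch of the trust policy behind an EC2 instance role: it
# allows the EC2 service to assume the role on the instance's behalf.
import json

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

trust_policy_document = json.dumps(trust_policy)
```

The permissions themselves (access to CloudWatch and CloudWatch logs) come from the policies attached to the role, not from this trust policy; the trust policy only controls who may assume it.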
We'll also need to set up the agent configuration, which defines the metrics and the logs that we want to capture, and these are all injected into CloudWatch using log groups.
We'll configure one log group for every log file that we want to inject into the product and then within each log group there'll be a log stream for each instance performing this logging.
So that's the architecture.
One log group for each individual log that we want to capture and then one log stream inside that log group for every EC2 instance that's injecting that logging data.
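That naming scheme can be sketched as a tiny data structure: one log group per log file path, and within each group one stream per instance ID. The helper below is illustrative only — it simply mirrors the convention described above.

```python
# Illustrative only: one log group per log file (named with the full
# file path, as this course does), and within each group one log stream
# per instance that injects data.
def build_log_layout(instance_ids, log_file_paths):
    """Map each log file path (the group) to its per-instance streams."""
    return {
        path: {instance_id: [] for instance_id in instance_ids}
        for path in log_file_paths
    }

layout = build_log_layout(
    ["i-0aaa111", "i-0bbb222"],                       # hypothetical instance IDs
    ["/var/log/secure", "/var/log/httpd/access_log"],
)
```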
Now to get this up and running for a single instance you can do it manually.
You can log into the instance, install the agent, configure it, attach a role and start injecting the data.
At scale you'll need to automate the process and potentially you can use cloud formation to include that agent configuration for every single instance that you provision.
Now the CloudWatch agent comes with a number of ways to obtain and store the configuration that it will use to send this data into CloudWatch logs, and one of those ways is to use the parameter store and store the agent configuration as a parameter. Because we've just learned about the parameter store, I thought it would be beneficial, as well as demonstrating how to install and configure the CloudWatch agent, to also utilize the parameter store to store that configuration, and that's what we're going to do together in the next demo lesson.
We're going to install and configure the CloudWatch agent and set it up to collect logging information for three different log files.
We're going to set it up to collect and inject logging for /var/log/secure, which shows any events relating to secure logins to the EC2 instance, and we're also going to collect logging information for the access log and the error log, which are both log files generated by the Apache web server that's installed on the EC2 instance. Using these three different log files should give you some great practical experience of how to configure the CloudWatch agent and how to use the parameter store to store configuration at scale.
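For reference, the logs section of the agent configuration that captures those three files looks roughly like the JSON built below. The field names (file_path, log_group_name, log_stream_name) follow the CloudWatch agent's documented configuration schema; using the full file path as the group name is this course's convention, and {instance_id} is a placeholder the agent substitutes at runtime.

```python
# Rough sketch of the "logs" section of the CloudWatch agent config for
# the three log files used in the demo, following the agent's documented
# JSON schema.
import json

LOG_FILES = [
    "/var/log/secure",
    "/var/log/httpd/access_log",
    "/var/log/httpd/error_log",
]

def build_logs_config(log_files=LOG_FILES):
    collect_list = [
        {
            "file_path": path,
            "log_group_name": path,              # full path used as group name
            "log_stream_name": "{instance_id}",  # agent substitutes the instance ID
        }
        for path in log_files
    ]
    return {"logs": {"logs_collected": {"files": {"collect_list": collect_list}}}}

config_json = json.dumps(build_logs_config(), indent=2)
```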
So that's it for the theory for now you can go ahead and finish off this video and then when you're ready you can join me in the next demo lesson where we'll be installing and configuring the CloudWatch agent.
learn.cantrill.io
Welcome back and in this demo lesson, I'm just wanting to give you some practical experience with interacting with the parameter store inside AWS.
So to do that, make sure you're logged into the IAM admin user of the management account of the organization and you'll need to have the Northern Virginia region selected.
Now there's also a lesson commands document linked to this lesson, which contains all of the commands that you'll need for this lesson's demonstration.
So before we start interacting with the parameter store from the command line, we need to create some parameters.
And the way that we do that is first move to systems manager.
So the parameter store is actually a sub product of systems manager.
So move over to the systems manager console.
And once you're there, you'll need to select the parameter store from the menu on the left.
So it should be about halfway down on the left and it's under application management.
So go ahead and select parameter store.
Now once you're in parameter store, the first thing that you'll need to do to remove this default welcome screen logically is to create a parameter.
So go ahead and click on create parameter.
Now when you create a parameter, you're able to pick between standard or advanced.
Standard is the default and that meets most of the needs that most people have for the product.
And you can create up to 10,000 parameters using the standard tier.
With the advanced tier, you can create more than 10,000 parameters.
The parameter value can be longer at eight kilobytes versus the four kilobytes of standard.
And you do gain access to some additional features.
But in most cases, most parameters are fine using the default, which is the standard tier.
With the standard tier, there's no additional charge to use this up to the limit of 10,000 parameters.
The only point at which parameter store costs any extra is if you use the faster throughput options or make use of this advanced tier.
And we won't be doing that at any point throughout the course.
We'll only be using standard.
And so there won't be any extra parameter store related charges on your bill.
Now I mentioned that a parameter is essentially a parameter name and a parameter value.
And it's here where you set both of those.
There's an optional description that you can use and you can set the type of the parameter.
The options being string, string list, which is a comma separated list of individual strings and then secure string, which utilizes encryption.
So we're gonna go ahead at this point and create some parameters that we're then going to interact with from the command line.
So the first one we'll create is one that's called /my-cat-app/dbstring.
So this is the name of a parameter and it will also establish a hierarchy.
So anytime we use forward slashes, we're establishing a hierarchy inside the parameter store.
So imagine this being a directory structure.
Imagine this being the root of the structure.
Imagine my-cat-app being the top level folder and inside there, imagine that we've got a file called DB string.
So we're going to store this hierarchy, and we need to set its value.
So we'll keep this for now as a string and this is going to be the database connection string for my-cat-app.
So we'll just enter the value that's in the lesson commands document.
So db.all-the-cats.com:3306.
And 3306 of course is the standard MySQL port number.
At this point, we could enter an optional description.
So let's go ahead and do that.
Connection string for cat application.
So just type in a description here.
It doesn't matter really what you type and then scroll down and hit create parameter.
So that's created our first parameter, /my-cat-app/dbstring.
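As a side note, the same parameter could also be created from the command line with `aws ssm put-parameter`. The sketch below is a dry run: `AWS` is set to `echo aws` so the command is printed rather than executed, and the name casing and value are assumptions based on this lesson rather than copied verbatim from the lesson commands document.

```shell
# Dry-run sketch: change AWS to plain "aws" to execute for real.
# The parameter name casing and value are assumptions from this lesson.
AWS="echo aws"

CMD=$($AWS ssm put-parameter \
  --name "/my-cat-app/dbstring" \
  --type String \
  --description "Connection string for cat application" \
  --value "db.all-the-cats.com:3306")
echo "$CMD"
```

Running it prints the full command so you can review it before pointing it at a real account.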
Now we're going to go ahead and do the same thing but for DB user.
So click on create parameter and then just click in this name box and notice how it presents you with this hierarchy.
So now we've got two levels of this hierarchical structure.
We've got the my-cat-app at the top and then we've got the actual parameter that we created at the bottom here.
So this has already established this structure.
So let's go ahead and create a new parameter.
This time it's going to be /my-cat-app/dbuser.
We'll not bother with the description for this one.
We'll keep it at the default of standard and it will also be a string.
And then for the value, it'll be boss cat.
So enter all that and click on create parameter.
Next let's create a parameter again.
If we click in this name this time, we've got this hierarchy that's ever expanding.
So we've got the top level at the top and then below it two additional parameters, DB string and DB user.
And we're going to create a third one at this level.
So this time it's going to be called /my-cat-app/dbpassword.
This time though, instead of type string, it's going to be a secure string so that it encrypts this parameter.
And it's going to use KMS to encrypt the parameter.
And because it's using KMS, we'll need to select the key to use to perform the cryptographic operations.
We can either select a key from the current account, so the account that we're in, or we can select another AWS account.
And in either case, we'll need to pick the key ID to use and by default, it uses the product default key for SSM.
So that's using alias/aws/ssm.
And you always have the option of clicking on this dropdown and changing it if you want to benefit from the extra functionality that you get by using a customer managed KMS key.
This is an AWS managed one.
So you won't be able to configure rotation and you won't be able to set these advanced key policies.
But in most cases, you can use this default key.
So at this point, we'll leave it as the default and we'll enter our super secret password, amazing secret password, 1337, and then click create parameter.
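For completeness, here's the same dry-run pattern for a secure string parameter; `--type SecureString` plus `--key-id` selects the KMS key, here the default alias/aws/ssm. The value shown is a placeholder, not the lesson's exact password.

```shell
# Dry-run sketch: change AWS to plain "aws" to execute for real.
# The value is a placeholder, not the lesson's actual password.
AWS="echo aws"

CMD=$($AWS ssm put-parameter \
  --name "/my-cat-app/dbpassword" \
  --type SecureString \
  --key-id "alias/aws/ssm" \
  --value "example-secret-password")
echo "$CMD"
```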
We're not finished yet though, click on create parameter again.
And I like to be inclusive, so not everything in my course is going to be about cats.
We're going to create another parameter, /my-dog-app/dbstring.
We'll keep standard, we'll keep the type as string and then the value for connecting to the my-dog application.
So the DB string is going to be db.if-we-really-must-have-dogs.com:3306.
So type that in and then click on create parameter.
And then lastly, we're going to create one more parameter.
This time the name is going to be /rate-my-lizard/dbstring.
The tier is going to be standard again.
The type is going to be string.
And for the value, it will be db.this-is-pretty-random.com:3306.
So type that in and then click on create parameter.
So now we've created a total of five parameters.
We've created the DB string, the DB user, and the DB password for the cat application.
And then the DB string for the dog application as well as the rate my lizard application.
So a total of five parameters and one of them is using encryption.
So that's the DB password for the my cat application.
So now let's switch over to the command line and interact with these parameters.
And to keep things simple, we're going to use the cloud shell.
So this is a relatively new feature made available by AWS.
And this means that we don't have to interact with AWS using our local machine.
We can do it directly from the AWS console.
So click on the cloud shell icon on the menu on the top.
This will take a few moments to provision because this is creating a dedicated environment for you to interact with AWS using the command line interface.
So you'll need to wait for this process to complete.
So go ahead and pause the video and wait until this logs you into the cloud shell environment at which point you can resume the video and we're good to continue.
It'll say preparing your terminal and then you'll see a familiar looking shell much like you would if you were connected to a Linux instance.
And now you'll be able to interact with AWS using the command line interface, using the credentials that you're currently logged in with.
Now to interact with parameter store using the command line, we start by using AWS and then a space, SSM, and then a space, and then the command that we're going to use is get-parameters.
Now by default, what we need to provide the get-parameters command with is the path to a parameter.
So in this case, if we wanted to retrieve the database connection string for the rate my lizard application, then we could provide it with this name.
So /rate-my-lizard/dbstring.
And this directly maps back through to the parameter that we've just created inside the parameter store.
So this parameter.
So if you go ahead and type that and press enter, it's going to return a JSON object.
Inside that JSON object is going to be a list of parameters and then for each parameter, so everything inside these inner curly braces, we're going to see the name of the parameter that we wanted to retrieve, the type of the parameter, the value of the parameter.
In this case, db.this-is-pretty-random.com:3306, the version number of the parameter because we can have different version numbers, the last modified date, the data type, and then the unique ARN of this specific parameter.
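To illustrate that structure locally without an AWS account, the sketch below extracts the Value field from a hand-written sample shaped like the response just described; a real script would use jq or a proper JSON parser rather than sed.

```shell
# Hand-written sample shaped like the output of:
#   aws ssm get-parameters --names /rate-my-lizard/dbstring
RESPONSE='{
    "Parameters": [
        {
            "Name": "/rate-my-lizard/dbstring",
            "Type": "String",
            "Value": "db.this-is-pretty-random.com:3306",
            "Version": 1
        }
    ]
}'

# Extract the Value field with sed.
VALUE=$(printf '%s\n' "$RESPONSE" | sed -n 's/.*"Value": "\([^"]*\)".*/\1/p')
echo "$VALUE"
```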
And so this is an effective way that you can store and retrieve configuration information from AWS.
Now we can also use the same structure of command to retrieve all of those other parameters that we stored within the parameter store.
So if we wanted to get the db string for the my-dog-app, then we could use this command.
And again, it would return the same data structure.
So a JSON object containing a list of parameters and each of those parameters would contain all of this information.
I'll clear the screen to keep this easy to see.
We could do the same for the my-cat-app, retrieving its database connection string.
And again, it would return the same JSON object with the parameters list.
And then for each parameter, this familiar data structure.
Now what you can also do, and I'm going to clear the screen before I run this, is instead of providing a specific path to a parameter.
So if you remember, we had almost a hierarchy that we created with these different names.
So we have the my-cat-app hierarchy and then inside there db-password, db-string and db-user.
We have my-dog-app and inside there db-string, and then rate-my-lizard, also with db-string.
So rather than having to retrieve each of these individual parameters by specifying the exact name, we can actually use get-parameters-by-path.
So let's demonstrate exactly how that works.
So with this command, we're doing a get-parameters-by-path and we're specifying a path to a group of parameters.
So in this case, my-cat-app is actually the first part of the path of db-password, db-string and db-user.
So by creating a hierarchical structure inside the parameter store, we can retrieve multiple parameters at once.
So this time we're returning a JSON structure.
Inside this JSON structure, we have a list of parameters, and we're retrieving three different parameters: db-password, db-string and db-user.
Now note how db-password is actually of type secure string and by default, if we don't specify anything, we get back the encrypted version of this parameter.
So the ciphertext version of this parameter.
This ensures that we can interact with parameters without actually decrypting them and this offers several security advantages.
Now I've cleared the screen to make this next part easy to see because it's very important.
Because we're using KMS to encrypt parameters, the permissions to access the KMS keys to perform this decryption are separate from the permissions to access the parameter store.
So if this user, the IAM admin user in this case, has the necessary permissions to interact with KMS to use the keys to decrypt these parameters, then we can also ask the parameter store to perform that decryption whilst we retrieve the parameters.
The important thing to understand is that the permissions to interact with the parameter store are separate from the permissions to interact with KMS.
So to perform a decryption whilst we're retrieving the parameters, we would use this command.
So it's the same command as before, aws, SSM, get-parameters-by-path, and then we're specifying the my-cat-app part of the hierarchy.
So remember, this represents these three parameters.
Now, if we ran just this part on its own, which was the command we previously ran, this would retrieve the parameters without performing decryption.
But by adding this last part, this is the part that performs the decryption on any parameter types which are encrypted.
And if you recall, one of the parameters that we created was this DB password, which is encrypted.
So if we run this command, this time it's going to retrieve the /my-cat-app/db-password parameter, but it's going to decrypt it as part of that retrieval operation and return the plain text version of this parameter.
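Put together, the command being described looks like this, again as a dry run, with the path assumed from this lesson's hierarchy.

```shell
# Dry-run sketch: change AWS to plain "aws" to execute for real.
# Without --with-decryption, SecureString values come back as ciphertext;
# adding the flag asks parameter store to decrypt, which also needs KMS permissions.
AWS="echo aws"

CMD=$($AWS ssm get-parameters-by-path \
  --path "/my-cat-app" \
  --with-decryption)
echo "$CMD"
```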
And just to reiterate, that requires both the permissions to interact with the parameter store, as well as the permissions to interact with the KMS key that we chose when creating this parameter.
Now, we're logged in as the IAM admin user, which has admin permissions, and so we do have permissions on both of those, on SSM and on KMS, so we can perform this decryption operation.
Now, you're going to be using the parameter store extensively for the rest of the course and my other courses.
It's a great way of providing configuration information to applications, both AWS and bespoke applications within the AWS platform.
It's a much better way to inject configuration information when you're automatically building applications or you need applications to retrieve their own configuration information.
It's much better to retrieve it from the parameter store than to pass it in using other methods.
So we're going to use it in various different lessons as we move throughout the course.
In this demo lesson, I just wanted to give you a brief experience of working with the product and the different types of parameters.
But at this point, let's go ahead and clear up all of the things that we've created inside this demo lesson.
So close down this tab opened to CloudShell.
Back at the parameter store console, just go ahead and check the box at the top to select all of these existing parameters.
If you do have any other parameters, apart from the ones that you've created within this demo lesson, then do make sure that you uncheck them.
You should be using an account dedicated for this training, so you shouldn't have any others at this point.
But if you do, make sure you uncheck them.
You should only be deleting the ones for rate-my-lizard, my-dog-app and my-cat-app.
So make sure that all of those are selected and then click on delete to delete those parameters, and you'll need to confirm that deletion process.
And at this point, that's everything that I wanted you to do in this lesson.
You've cleared up the account back to the point it was at the start of this demo lesson.
So go ahead, complete this video, and when you're ready, I'll look forward to you joining me in the next.
learn.cantrill.io
Welcome back.
In this lesson, I want to cover the Systems Manager parameter store, a service from AWS which makes it easy to store various bits of system configuration.
So strings, documents, and secrets, and store those in a resilient, secure, and scalable way.
So let's step through the product's architecture, including how best to make use of it.
If you remember earlier in this section of the course, I mentioned that passing secrets into an EC2 instance using user data was bad practice because anyone with access to the instance could access all that data.
Well, parameter store is a way that this can be improved.
Parameter store lets you create parameters, and these have a parameter name and a parameter value, and the value is the part that stores the actual configuration.
Many AWS services integrate with the parameter store natively.
CloudFormation offers integrations that you've already used, which I'll explain in a second and in the upcoming demo lessons, and you can also use the CLI tooling on an EC2 instance to get access to the service.
So when I'm talking about configuration and secrets, parameter store offers the ability to store three different types of parameters.
We've got strings, string lists, and secure strings.
And using these three different types of parameters, you can store things inside the product such as license codes and database connection strings, so host names and ports.
You can even store full configs and passwords.
Now parameter store also allows you to store parameters using a hierarchical structure.
Parameter store also stores different versions of parameters.
So just like we've got object versioning in S3 inside parameter store, we can also have different versions of parameters.
Parameter store can also store plain text parameters, and this is suitable for things like DB connection strings or DB users, but we can also use cipher text, and this integrates with KMS to allow you to encrypt parameters.
So this is useful if you're storing passwords or other sensitive information that you want to keep secret.
So when you encrypt using cipher text, you use KMS, and that means you need permissions on KMS as well.
So there's that extra layer of security.
The parameter store has also got the concept of public parameters, so these are parameters publicly available and created by AWS.
You've used these earlier in the course.
An example is when you've used CloudFormation to create EC2 instances: you haven't had to specify a particular AMI to use, because you've consulted a public parameter made available by AWS, which is the latest AMI ID for a particular operating system in that particular region, and I'll be demonstrating exactly how that works in the upcoming demo lessons.
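As an illustration, the public parameter commonly used for this is the latest Amazon Linux 2 AMI ID. The dry-run sketch below prints the command rather than calling AWS; the parameter path shown is the documented public one, not something from this lesson.

```shell
# Dry-run sketch: change AWS to plain "aws" to execute for real.
# Public parameter published by AWS: latest Amazon Linux 2 AMI in the region.
AWS="echo aws"

CMD=$($AWS ssm get-parameters \
  --names "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2")
echo "$CMD"
```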
Now, the architecture of the parameter store is simple enough to understand.
It's a public service, so anything using it needs to either be an AWS service or have access to the AWS public space endpoints.
Different types of things can use the parameter store, so this might be things like applications, EC2 instances, all the things running on those instances, and even Lambda functions.
And they can all request access to parameters inside the parameter store.
As parameter store is an AWS service, it's tightly integrated with IAM for permissions, so any accesses will need to be authenticated and authorized, and that might use long-term credentials, so access keys or those credentials might be passed in via an IAM role.
And if parameters are encrypted, then KMS will be involved and the appropriate permissions to the CMK inside KMS will also be required.
Now, parameter store allows you to create simple or complex sets of parameters.
For example, you might have something simple like myDB password, which stores your database password in an encrypted form, but you can also create hierarchical structures.
So something like /wordpress/ and inside there, we might have something called dbUser, which could be accessed either by using its full name or requesting the WordPress part of the tree.
We could also have dbPassword, which again, because it's under the WordPress branch of the tree, could be accessed along with the dbUser by pulling the whole WordPress tree or accessed individually by using its full name, so /wordpress/dbPassword.
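To make those two access patterns concrete, here's a dry-run sketch contrasting pulling the whole /wordpress branch with addressing one parameter by its full name; the names are the illustrative ones from above, not real parameters.

```shell
# Dry-run sketch: change AWS to plain "aws" to execute for real.
AWS="echo aws"

# Pull the whole branch: returns dbUser and dbPassword together.
TREE=$($AWS ssm get-parameters-by-path --path "/wordpress/")

# Or address a single parameter by its full name.
ONE=$($AWS ssm get-parameter --name "/wordpress/dbPassword")

printf '%s\n%s\n' "$TREE" "$ONE"
```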
Now, we might also have applications which have their own part of the tree, for example, my-cat-app, or you might have functional division in your organization, so giving your dev team a branch of the tree to store their passwords.
Now, permissions are flexible and they can be set either on individual parameters or whole trees.
Everything supports versioning and any changes that occur to any parameters can also spawn events.
And these events can start off processes in other AWS services.
And I'll introduce this later.
I just want to mention it now so you understand that parameter store parameter changes can initiate events that occur in other AWS products.
Now, parameter store isn't a hugely complex product to understand.
And so at this point, I've covered all of the theory that you'll need for the associate level exam.
What I want to do now is to finish off this theory lesson.
And immediately following this is a demo where I want to show you how you can interact with the parameter store via the console UI and the AWS command line tools.
Now, it will be a relatively brief demo and so you're welcome to just watch me perform the steps.
Or of course, as always, you can follow along with your own environment and I'll be providing all the resources that you need to do that inside that demo lessons folder in the course GitHub repository.
So at this point, go ahead, finish this video.
And when you're ready, you can join me in the next demo lesson.
learn.cantrill.io
Welcome to this demo lesson where you're going to get the experience of working with EC2 and EC2 instance roles.
Now as you learned in the previous theory lesson, an instance role is essentially a specific type of IAM role designed so that it can be assumed by an EC2 instance.
When an instance assumes a role which happens automatically when the two of them are linked, that instance and any applications running on that instance can gain access to the temporary security credentials that that role provides.
And in this demo lesson you're going to get the experience of working through that process.
Now to get started you're going to need some infrastructure.
Make sure that you're logged in to the general AWS account, so that's the management account of the organization and as always you'll need to be within the Northern Virginia region.
Assuming you are, there's a one click deployment link which is attached to this lesson so go ahead and click that link.
That will take you to a quick create stack page.
The stack name will be pre-populated with IAM role demo and all you need to do is to scroll down to the bottom, check this capabilities box and then click on create stack.
This one click deployment will create the Animals4Life VPC, an EC2 instance and an S3 bucket.
Now in order to continue with this demo we're going to need this stack to be in a create complete state.
So go ahead and pause the video and then when the stack moves into a create complete status then we're good to continue.
Okay so this stacks now in a create complete state and we're good to continue.
So to do so go ahead and click on the services drop down and then type EC2, locate it, right click and then open that in a new tab.
Once you're at the EC2 console click on instances running and you should be able to see that we only have the one single EC2 instance.
Now we're going to connect to this to perform all the tasks as part of this demo.
So right click on this instance, select connect, we're going to use EC2 instance connect.
Just verify that the username does say EC2-user and then click on connect.
Now the AMI that we use to launch this instance is just the standard Amazon Linux 2 AMI.
And so if we type AWS and press enter it comes with the standard install of the AWS CLI version 2.
Now it's important to understand that right now this instance has no attached instance role and it's not been configured in any way.
It's the native Amazon Linux 2 AMI that's been used to launch this instance.
And so if we attempt to interact with AWS using the command line utilities, for example by running aws s3 ls, the CLI tools will tell us that there are no credentials configured on this instance and we'll be prompted to provide long term credentials using aws configure.
Now this is the method that you've used to provide credentials to your own installed copy of the CLI tools running on your local machine.
So you've used AWS Configure and set up two named configuration profiles.
And the way that you provide these with authentication information is using access keys.
Now this instance has no access keys configured on it and so it has no method of interacting with AWS.
We could use AWS Configure and provide these credentials but that's not best practice for an EC2 instance.
What we're going to do instead is use an instance role.
So to do that you're going to need to move back to the AWS console.
And once you're there click on services and in the search box type IAM.
We're going to move to the IAM console so right click and open that in a new tab.
As I mentioned earlier an instance role is just a specific type of IAM role.
So we're going to go ahead and create an IAM role which our instance can assume.
So click on roles and then we're going to go ahead and click on create role.
Now the create role process presents us with a few common scenarios.
We can create a role that's used by an AWS service, another AWS account, a web identity or a role designed for SAML 2.0 Federation.
In our case we want a role which can be assumed by an AWS service specifically EC2.
So we'll select the type of trusted entity to be an AWS service then we'll click on EC2 and then we'll click on next.
Now for the permissions in this search box just go ahead and type S3 and we're looking for the Amazon S3 read only access.
So there's a managed policy that we're going to associate with this role.
So check the box next to Amazon S3 read only access and then we'll click on next.
And then under role name we're going to call this role A4L instance role.
So it's easy to distinguish from any other roles we have in the account.
So go ahead and enter that and click on create role.
Now as I mentioned in the theory lesson about instance roles, when we do this from the user interface, it actually creates both a role and an instance profile of the same name, and it's the instance profile that we're going to be attaching to the EC2 instance.
Now from a UI perspective both of these are the same thing.
You're not exposed to the role and the instance profile as separate entities but they do exist.
So now we're going to move back to the EC2 console and remember currently this instance has no attached instance role and we're unable to interact with AWS using this EC2 instance.
To attach an instance role using the console UI right click, go down to security and then modify IAM role.
Select that and we'll need to choose a new IAM role.
You have the option of creating one directly from this screen but we've already created the one that we want to apply.
So click in the drop down and select the role that you've just created.
In this case A4L instance role.
So select that and then click on save.
Now if we select the instance and then click on security you'll be able to confirm that it does have an IAM role attached to this instance.
So this is the instance role that this EC2 instance can now utilize.
So now we're going to interact with this instance again from the operating system.
Now if it's been a few minutes since you've last used instance connect you might find when you go back it appears to have frozen up.
If that's the case that's no problem just close down these tabs that you've got connected to that instance.
Right click on the instance again, select connect, make sure the username is EC2-user and then click on connect.
And this will reconnect you to that instance.
Now if you recall, last time we were connected we attempted to run aws s3 ls and the command line tools informed us that we had no credentials configured.
Let's attempt that process again.
So type aws s3 ls and press enter.
And now because we have the instance role associated with this EC2 instance the command line tools can use the temporary credentials that that role generates.
Now the way that this works, and I'm going to demonstrate this using the curl utility, is that these credentials are actually provided to the command line tools via the instance metadata.
So this is actually the metadata path that the command line tools use in order to get the security credentials.
So the temporary credentials that a role provides when it's assumed.
So if I use this command and press enter you'll see that it's actually using this role name.
So you'll see a list of any roles which are associated with this instance.
If we use the curl command again, but this time on the end of security-credentials we specify the name of the role that's attached to this instance, and press enter.
Now we can see the credentials that command line tools are using.
So we have the access key ID, the secret access key and the token, and all of these have been generated by this EC2 instance assuming this role, because these are temporary credentials.
They also have an expiry date.
So in my case here we can see that these credentials expire on the 7th of May 2022 at 05:52:47 UTC.
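The two metadata calls being described are sketched below as comments (IMDSv1 style, with an assumed spelling of the role name), followed by a hand-written sample of the credentials JSON and a sed extraction of the expiry, so the parsing part runs locally without an instance.

```shell
# On the instance, the CLI fetches credentials from the metadata service:
#   curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
#   curl http://169.254.169.254/latest/meta-data/iam/security-credentials/A4LInstanceRole
# (A4LInstanceRole is an assumed spelling of the role name from this demo.)
# Hand-written sample of what the second call returns:
CREDS='{
    "Code": "Success",
    "Type": "AWS-HMAC",
    "AccessKeyId": "ASIAEXAMPLEEXAMPLE",
    "SecretAccessKey": "exampleSecretKey",
    "Token": "exampleSessionToken",
    "Expiration": "2022-05-07T05:52:47Z"
}'

# Extract the expiry timestamp from the sample.
EXPIRY=$(printf '%s\n' "$CREDS" | sed -n 's/.*"Expiration": "\([^"]*\)".*/\1/p')
echo "$EXPIRY"
```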
And that really is all I wanted to show you in this demo lesson about instance roles.
Essentially you just need to create an instance role and then attach it to an instance.
And once you do, that instance is capable of assuming that role and gaining access to temporary credentials, and then any applications installed on that instance, including the command line utilities, are capable of interacting with AWS using those credentials.
Now the process of renewing these credentials is automatic.
So as long as the application that's running on the instance periodically checks the metadata service, it will always have access to up to date and valid credentials.
As this expiry date closes in, the EC2 service will renew these credentials, and a new valid set of credentials will automatically be presented via the metadata service to any applications running on this EC2 instance.
Now just one more thing that I do want to show you before we finish up with this demo lesson.
And I have made sure that I've attached this link to the lesson.
This link shows the configuration settings and precedence that the command line utilities use in order to interact with AWS.
So whenever you use the command line interface, each of these is checked in order.
First, it looks at command line options.
Then it looks at environment variables to check whether any credentials are stored within environment variables.
Then it checks the command line interface credentials file.
So this is stored within the .aws folder within your home folder, in a file called credentials.
Next, it checks the CLI configuration file.
Next, it checks container credentials.
And then finally, it checks instance profile credentials.
And these are what we've just demonstrated.
Now, this does mean that if you manually configure any long term credentials for the CLI tools as part of using aws configure, then they will take priority over an instance profile.
But you can use an instance profile and attach this to many different instances as a best practice way of providing them with access into AWS products and services.
So that's really critical to understand.
But at this point, that is everything that I wanted to cover in this demo lesson.
And all that remains is for us to tidy up the infrastructure that we've used as part of this demo.
So to tidy up this infrastructure, I want you to go back to the IAM console.
I want you to click on roles and I want you to delete the A4L instance role that you've just created.
So select it and then click on delete role.
Once you've deleted that role, go back to the EC2 console, click on instances, right click on public EC2, go to security, modify IAM role.
Now, even though you've deleted the IAM role, note how it's still listed.
That's because this is an instance profile.
This is showing the instance profile that gets created with the role, not the role itself.
So what we're going to do, and I just wanted to do this to demonstrate how this works, we're just going to select no IAM role and then click on save.
We'll need to confirm that.
So to do that, we need to type detach into this box and then confirm it by clicking detach.
That removes the instance role entirely from the instance.
And then we can finish up the tidy process by moving back to the cloud formation console.
Selecting the IAM role demo stack and then clicking on delete and confirming that deletion.
And that will put the account back in the same state as it was at the start of this demo lesson.
So this has been a very brief demo.
I just wanted to give you a little bit of experience of working with instance roles.
So that's EC2 instances combined with IAM roles in order to give an instance and any applications running on that instance, the ability to interact with AWS products and services.
And this is something that you're going to be using fairly often throughout the course, specifically when you're configuring any AWS services to interact with any other services on your behalf.
That's a common use case for using IAM roles and we'll be using instance roles extensively to allow our EC2 instances to interact with other AWS products and services.
But at this point, that is everything that I wanted to cover in this demo lesson.
So go ahead, complete the video and when you're ready, I'll look forward to you joining me in the next.
learn.cantrill.io
Welcome back, and I've mentioned a few times now within the course that IAM roles are the best practice way that AWS services can be granted permissions to other AWS services on your behalf.
Allowing a service to assume a role grants the service the permissions that that role has.
EC2 instance roles are roles that an instance can assume, and anything running in that instance has the permissions that that role grants. There is some detail involved which matters, so let's take a look at how this feature of EC2 works architecturally.
Instance role architecture isn't really all that complicated. It starts off with an IAM role, and that role has a permissions policy attached to it, so whoever assumes the role gets temporary credentials generated, and those temporary credentials grant the permissions that that permissions policy defines.
Now an EC2 instance role allows the EC2 service to assume that role, which means the EC2 instance itself can assume it and gain access to those credentials. But we need some way of delivering those credentials into the EC2 instance so that applications running inside the instance can use the permissions that the role provides. So there's an intermediate piece of architecture: the instance profile. This is a wrapper around an IAM role, and the instance profile is the thing that allows the permissions to get inside the instance. When you create an instance role in the console, an instance profile is created with the same name, but if you use the command line or CloudFormation you need to create these two things separately. When you use the UI and you think you're attaching an instance role directly to an instance, you're not; you're attaching an instance profile of the same name. It's the instance profile that's attached to an EC2 instance.
We know by now that when IAM roles are assumed, you're provided with temporary security credentials which expire, and these credentials grant permissions based on the role's permissions policy. Well, inside an EC2 instance these credentials are delivered via the instance metadata.
An application running inside the instance can access these credentials and use them to access AWS resources such as S3.
One of the great things about this architecture is that the credentials available inside the metadata are always valid. EC2 and the Secure Token Service liaise with each other to ensure that the credentials are always renewed before they expire. As long as your application inside the EC2 instance keeps checking the metadata, it will never be in a position where it has expired credentials.
So to summarize, when you use EC2 instance roles, the credentials are delivered via the instance metadata. Specifically, inside the metadata there's an iam tree, within that a security-credentials part, and then in there is the role name. If you access that path, you'll get these temporary security credentials, and they're always rotated and always valid. As long as the instance role remains attached to the instance, anything running in the instance will always have access to valid credentials. Applications running in the instance do, of course, need to be careful about caching these credentials; they should recheck the metadata before the credentials expire, or do so periodically.
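To make that caching point concrete, the credentials document an application reads from that metadata path is JSON shaped roughly like the sample below (field names follow the AWS documentation; the values are made up). This sketch shows the kind of expiry check an application might do before reusing cached credentials:

```python
import json
from datetime import datetime, timedelta, timezone

# A sample shaped like the document an instance retrieves from
# /latest/meta-data/iam/security-credentials/<role-name>
# (field names per AWS docs; all values here are made up)
doc = json.loads("""
{
  "Code": "Success",
  "Type": "AWS-HMAC",
  "AccessKeyId": "ASIAEXAMPLE",
  "SecretAccessKey": "examplesecret",
  "Token": "examplesessiontoken",
  "Expiration": "2030-01-01T00:00:00Z"
}
""")

def needs_refresh(credentials, now=None, margin=timedelta(minutes=5)):
    """True when cached credentials are within `margin` of expiry,
    i.e. it's time to re-read the metadata rather than reuse the cache."""
    now = now or datetime.now(timezone.utc)
    expiry = datetime.strptime(
        credentials["Expiration"], "%Y-%m-%dT%H:%M:%SZ"
    ).replace(tzinfo=timezone.utc)
    return now >= expiry - margin

print(needs_refresh(doc))  # far-future expiry, so False
```

In practice the SDKs and CLI do exactly this kind of check for you, which is the point made below.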
You should always use roles where possible, and I'm going to keep stressing that throughout the course because it's important for the exam. Roles are always preferable to storing long-term credentials, such as access keys, on an EC2 instance. It's never a good idea to store long-term credentials anywhere that isn't secure, for example on your local machine.
In fact the AWS tooling such as the CLI tools will use instance role credentials automatically so as long as the instance role is attached to the EC2 instance any command line tools running inside that instance can automatically make use of those credentials.
So at this point that's everything I wanted to cover thanks for watching go ahead and complete this video and when you're ready join me in the next lesson.
-
Welcome back, and in this brief demonstration you'll have the opportunity to create an EC2 instance with WordPress bootstrapped in, ready and waiting to be configured.
But this time you'll be using an enhanced CloudFormation template which uses CFN init and creation policies rather than the simple user data that you used in the previous demonstration.
To get started, just make sure you are logged in to the general AWS account as the IAM admin user, and as always make sure you've got the Northern Virginia region selected.
Now attached to this lesson are two one click deployment links.
Go ahead and use the first one which is the VPC link.
Everything should be pre-populated.
All you'll need to do is scroll down to the bottom, check the acknowledgement box and click on create stack.
Once it's moved into a create complete status you can resume and we'll carry on with the demo.
I'll assume that that's now in a create complete status and now we're going to apply another CloudFormation template.
This is the template that we'll be using.
It's just an enhancement of the one that you used in the previous lesson.
This time, instead of using a set of procedural instructions, so a script that's passed into the user data, this uses the cfn-init system and creation policies.
So let's have a look at exactly what that means.
If I scroll down and locate the EC2 instance logical resource, then here we've got this creation policy.
This means that CloudFormation is going to create a hold point.
It's not going to allow this resource to move into a create complete status until it receives a signal.
And it's going to wait 15 minutes for this signal.
So a timeout of 15 minutes.
Now scrolling down and looking at the user data, the only thing we do in a procedural way is use the cfn-init command to begin the desired state configuration.
That will either succeed or not.
And based on that we use the CFN signal command to pass that success or failure state back to the CloudFormation stack.
And that's what uses this creation policy.
So the creation policy will wait for a signal and it's this command which provides that signal, either a success signal or a failure signal.
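Put together, the two pieces just described look roughly like this in a template (the logical resource name and configset name are illustrative, not copied from the actual template):

```yaml
EC2Instance:
  Type: AWS::EC2::Instance
  CreationPolicy:
    ResourceSignal:
      Timeout: PT15M          # hold the resource in CREATE_IN_PROGRESS for up to 15 minutes
  Properties:
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash -xe
        # apply the desired state stored in this resource's Metadata section
        /opt/aws/bin/cfn-init -v --stack ${AWS::StackId} \
          --resource EC2Instance --configsets wordpress_install \
          --region ${AWS::Region}
        # report cfn-init's exit code back to the stack as the signal
        /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackId} \
          --resource EC2Instance --region ${AWS::Region}
```

The -e $? on cfn-signal is what turns cfn-init's exit code into the success or failure signal the creation policy is waiting for.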
Now what we're interested in specifically for this demo lesson is this CFN init command.
So this is the thing that pulls the desired state configuration from the metadata of this logical resource.
I'll talk all about that in a second.
But it pulls that down by being given the stack ID and it uses this substitution command.
So instead of this being passed into the instance, what's actually passed instead of this variable name, so the stack ID variable name, is the actual stack ID.
And then likewise, instead of this variable name, AWS::Region, what's passed is the actual region that this template is being applied into.
So that's what the substitution function does.
It replaces any variable or parameter names with the values of those variables or parameters.
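As a rough illustration of what the substitution function does, here's a tiny Python sketch (the stack ID shown is made up):

```python
import re

def sub(template, values):
    """Rough illustration of CloudFormation's Fn::Sub: swap each ${Name}
    placeholder for the corresponding runtime value."""
    return re.sub(r"\$\{([^}]+)\}", lambda m: values[m.group(1)], template)

command = "cfn-init -v --stack ${AWS::StackId} --region ${AWS::Region}"
rendered = sub(command, {
    "AWS::StackId": "arn:aws:cloudformation:us-east-1:111122223333:stack/demo/abc",  # made-up ID
    "AWS::Region": "us-east-1",
})
print(rendered)
```

The rendered string, with real values in place of the placeholders, is what actually arrives inside the instance.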
So the CFN init process is then able to consult the CloudFormation stack and retrieve the configuration information.
That's all stored in the metadata section of this logical resource.
Now I just want to draw your attention to this --configsets wordpress_install option.
This tells us what set of instructions we want CFN init to run.
So if I just expand the metadata section here, we've got one or more config sets defined.
In this case, we've only got the one which is wordpress underscore install.
And this config set runs five individual items, one after the other.
And these are called config keys.
So install_cfn, software_install, configure_instance, install_wordpress and configure_wordpress.
Now these reference the config keys defined below.
So you'll see the same names: install_cfn, software_install, configure_instance, install_wordpress and configure_wordpress.
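As a sketch, the shape of that metadata section is roughly this (the configset and key names match the lesson; the package and service entries are illustrative examples rather than the template's exact contents):

```yaml
Metadata:
  AWS::CloudFormation::Init:
    configSets:
      wordpress_install:        # cfn-init runs these config keys in order
        - install_cfn
        - software_install
        - configure_instance
        - install_wordpress
        - configure_wordpress
    software_install:
      packages:
        yum:                    # cfn-init's package key, also used on dnf-based distros
          wget: []              # an empty list means "latest available version"
          mariadb105-server: []
      services:
        systemd:
          httpd:
            enabled: true       # start on boot...
            ensureRunning: true # ...and make sure it's running now
```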
You'll recognize a lot of the commands used because they're the same commands that install and configure wordpress.
So in the software install config key, we're using the DNF package manager to install various software packages that we need for this installation, such as WGet, MariaDB, the Apache web server and various other utilities.
Then another part is services and we're specifying that we want these services to be enabled and to be running.
So this means that the service will be set to start up on instance boot and it will make sure that it's running right now.
The next config key is configure instance.
The files component of this can create files with a certain content.
So we're creating a file called /etc/update-motd.d/40-cow.
This is the part that we had to do manually before and this is the thing that adds the cow say banner.
Then we're running some more procedural commands to set the database root password and to update this banner.
Then we've got install wordpress, which uses a sources option to expand whatever is specified here into this directory.
So this automatically handles the download and the unzipping and untarring of this archive into this folder, and it can even do that with authentication if needed.
We're creating another file this time to perform the configuration of wordpress and another file this time to create the database for wordpress.
Then finally we've got the configure wordpress which fixes up the permissions and creates these databases.
So this is doing the same thing as the procedural example in the previous demo.
Instead of running all of these commands one by one, this is just using desired state.
Now there is one more thing that I wanted to point out right at the top.
This is the part that configures CFN init to keep watching the logical resource configuration inside the cloud formation stack.
And if it notices that the metadata for EC2 instance inside the stack changes, then it will run CFN init again.
Remember how in the theory lesson I mentioned that this process could cope with stack updates.
So it doesn't only run once like user data does.
Well, this is how it does that.
This configures this automatic update that keeps an eye on the cloud formation stack and reruns CFN init whenever any changes occur.
This is well beyond what you need for the associate exam.
I just want you to be aware of what this is and how it works.
Essentially we're setting up a process called cfn-hup and making it watch the CloudFormation stack for any configuration changes.
And then we're setting it up so that the cfn-hup process is enabled and running, so that it can watch the resource configuration constantly.
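For reference, cfn-hup is driven by two small configuration files shaped roughly like this; the stack ID, resource name, configset and region values are placeholders, not the demo's real values:

```ini
; /etc/cfn/cfn-hup.conf -- the daemon's main configuration
[main]
stack=<stack-id>
region=us-east-1
interval=1                 ; minutes between checks

; /etc/cfn/hooks.d/cfn-auto-reloader.conf -- rerun cfn-init when metadata changes
[cfn-auto-reloader-hook]
triggers=post.update
path=Resources.EC2Instance.Metadata.AWS::CloudFormation::Init
action=/opt/aws/bin/cfn-init -v --stack <stack-id> --resource EC2Instance --configsets wordpress_install --region us-east-1
```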
So that's it for this template.
What we'll do now is apply it.
So go ahead and click on the second one click deployment link attached to this lesson.
It should be called A4LEC2CFN init.
So click that link.
All you'll need to do is scroll down to the bottom and then click on create stack.
Now this time remember we're using a creation policy.
So CloudFormation is not going to move this logical resource into a create complete status just because EC2 signals that the launch process has completed.
Instead it's going to wait until the instance itself signals the successful completion of the CFN init process.
So because we're using this creation policy, it's going to hold until the instance operating system, using cfn-signal, provides a signal to CloudFormation to say yes, everything's okay, and at that point the logical resource will move into create complete.
So that's going to take a couple of minutes.
The EC2 instance will need to actually launch and pass its status checks. Then the cfn-init process will run and perform all of the configuration required. Assuming its status code is okay, cfn-signal will take that status code and respond to the CloudFormation stack with a successful completion. CloudFormation will then mark that particular resource as complete, and the stack will move to complete.
Now that will take a few minutes so just keep hitting refresh and you should see the status update after two to three minutes but go ahead and pause the video and resume it once your stack moves into the create complete status.
And there we go at this point the stack has moved into the create complete status and I just want to draw your attention to this line.
You won't have seen this before.
This is the line where our EC2 instance has run the CFN init process successfully and then the CFN signal command has taken that success signal and delivered it to the cloud formation stack.
So this is the signal that cloud formation was waiting for before moving this resource into a create complete status and that's what's needed before the stack itself could move into a create complete status.
So now we explicitly know that the configuration of this instance has actually been completed.
So we're not relying on EC2 telling us that the instance status is now running with two out of two checks.
Instead, the cfn-init process in the operating system itself has completed successfully, and the cfn-signal process has explicitly indicated to CloudFormation that the whole process has been completed.
So if we move across to the EC2 console we should be able to connect to the instance exactly as we've done before.
Look for the running instance and select it.
Copy the public IPv4 address and open that in a new tab.
All being well you should see the familiar WordPress installation screen.
Next, right click on that instance and select connect.
Go to EC2 Instance Connect and hit connect. That will connect you into the instance, and you should be greeted by the cow-themed login banner.
If we use curl to show the contents of the user data, this time it's only a small number of lines, because the only things that run are the cfn-init process and the cfn-signal process.
Notice, though, how all of these variable names have been replaced with their values, so the stack ID and the region.
So this is how it knows to communicate with the right stack in the right region inside cloud formation.
If we do a cd /var/log and then do a listing, we've still got the original two files, cloud-init.log and cloud-init-output.log.
So these are primarily associated with the user data output.
But now we've also got new log files, such as cfn-init-cmd.log, which is an output of the cfn-init process.
So if we cat that, so sudo cat and then the name of that log file, this will show us the output of the cfn-init process itself.
So we can see each of the individual config keys running and what individual operations are being performed inside each of those keys.
So it's a more complex but a more powerful process.
And at this point that's everything I wanted to cover.
It was just to give you practical exposure to an alternative to raw user data, and that was cfn-init.
It's a much more powerful system, especially when combined with CloudFormation creation policies, which allow us to pause the progress of a CloudFormation stack, waiting for the resource itself to explicitly say yes, I've finished all of my bootstrapping, you're good to carry on, and that's done using the cfn-signal command.
Now at this point let's just clean up the account move back to cloud formation.
Once you're there, go ahead and delete the EC2 CFN init stack and wait for that process to complete. Once you've done that, go ahead and delete the A4L VPC stack, and that will return the AWS account to the state it was in at the start of this demo.
At that point thanks for doing this demo I hope it was useful.
You can go ahead and complete this video now and when you're ready you can join me in the next.
-
- Oct 2024
Welcome back and in this demo lesson you're going to get the experience of bootstrapping an EC2 instance using user data.
So this is the ability to run a script during the provisioning process for an EC2 instance and automatically add a certain configuration to that instance during the build process.
So this is an alternative to creating a custom AMI.
Earlier in the course you created an Amazon machine image with the WordPress installation and configuration baked in.
Now that's really quick and simple but it does limit your ability to make changes to that configuration.
So the configuration is baked into the AMI and so you're limited as to what you can change during launch time.
With boot strapping you have the ability to perform all the steps in the form of a script during the provisioning process and so it can be a lot more flexible.
Now to get started we need to create the Animals for Life VPC within our general AWS account.
So this is the management account of the organization.
So make sure that you're logged into the IAM admin user of this account and as always make sure you have the Northern Virginia region selected.
Now attached to this lesson is a one-click deployment link so go ahead and open that.
This is going to take you to the quick create stack page and everything should be pre-populated.
The stack name should be bootstrap, and everything else has appropriate defaults, so just scroll down to the bottom, check the capabilities acknowledgement box and then go ahead and click on create stack.
Now this will create the Animals for Life VPC which contains the public subnets that we'll be launching our instance into and so we're going to need this to be in a create complete state before we move on.
So go ahead and pause the video, and once your stack changes from create in progress to create complete, we're good to continue.
Okay, so now that the stack has moved into a create complete state, we're good to continue.
Now also attached to this lesson is another link which is the user data that we're going to use for this demo lesson so go ahead and open that link.
This is the user data that we're going to use to bootstrap the EC2 instance so what I want you to do is to download this file to your local machine and then open it in a code editor or alternatively just copy all the text on screen now and paste that into a code editor.
So I've gone ahead and opened that file in my text editor, and if you look through all of the different commands contained within this user data.txt file, then you should recognize some of them.
These are basically the commands that we ran earlier in the course when we manually installed word press and when we created the Amazon machine image.
So we're essentially installing the MariaDB database server, the Apache web server, Wget and Cowsay.
We're installing PHP and its associated libraries.
We're making sure that both the database and the web server are set to automatically start when the instance reboots and are explicitly started when this script is run.
We're setting the root password of the MariaDB database server.
We're downloading the latest copy of the WordPress installation archive.
We're extracting it and we're moving the files into the correct locations.
Then we're configuring WordPress by copying the sample configuration file into the final and proper file name so wp-config.php and then we're performing a search and replace on those placeholders and replacing them with our actual chosen values for the database name, the database user and the database password.
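That search and replace works on the well-known placeholder strings that ship in wp-config-sample.php. Here's a minimal Python sketch of the idea; the replacement values are examples, not the lesson's actual credentials:

```python
# The sample config ships with well-known placeholder strings.
sample = (
    "define( 'DB_NAME', 'database_name_here' );\n"
    "define( 'DB_USER', 'username_here' );\n"
    "define( 'DB_PASSWORD', 'password_here' );"
)

replacements = {
    "database_name_here": "wordpressdb",   # example values only
    "username_here": "wordpressuser",
    "password_here": "example-password",
}

# Swap each placeholder for its chosen value, as the sed commands do.
config = sample
for placeholder, value in replacements.items():
    config = config.replace(placeholder, value)

print(config)
```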
And then after that we're fixing up the permissions on the web root folder with the WordPress installation files inside so we're making sure that the ownership is correct and then we're fixing up the permissions with a slightly improved version of what we've used previously.
Then we're creating our DB.setup script in the same way that we did when we were manually installing WordPress.
We're logging into the database using the MySQL command line utility, authenticating as the root user with the root password, and then running this script, which creates the WordPress database and the WordPress user, sets the password, and gives that user permissions on the database.
And then finally we're configuring the Cowsay utility so we're setting up the message of the day file we're outputting our animals for life custom greeting and then we're forcing a refresh of the login banner.
So these are all of the steps that you've previously done manually so I hope it's still fresh in your memory just how annoying that manual installation was.
Okay so at this point this user data is ready to go and I want to demonstrate to you how you can use this to bootstrap an EC2 instance.
So let's go ahead and move back to the AWS console.
Once we're at the AWS console this CloudFormation 1 click deployment has created the Animals for Life VPC.
So what we're going to do is to click on the services drop down and then move to the EC2 console and go ahead and click on launch instance followed by launch instance again.
So first things first, the instance is going to be called a4l-manual-wordpress, with a4l standing for Animals for Life, so go ahead and enter that in the box at the top. Then scroll down, select Amazon Linux, make sure Amazon Linux 2023 is selected in the dropdown, and make sure that you've got 64-bit x86 selected.
For the instance type, pick whichever type is free tier eligible within your account and region; in my case it's t2.micro.
Under key pair, go ahead and pick proceed without a key pair, then scroll down to network settings and click on edit. There are a few items on this page that we need to explicitly configure.
The first is to select the Animals for Life VPC next to network, so select a4l-vpc1. Next to subnet, go ahead and pick sn-web-a, the web or public subnet within availability zone A, and make sure auto-assign public IP is set to enable. We'll be using an existing security group, so check that box, and then in the dropdown select the bootstrap-instance security group; bootstrap was the name of the CloudFormation stack that we created using the one-click deployment. We won't be making any changes to the storage configuration.

Next we need to scroll down to an option that we've not used before: we're going to enter some user data. So scroll all the way down, and under advanced details, expand this if it isn't already; you're looking for the user data box. What we're going to do is paste in the user data that you just downloaded, so in my case this is the user data.txt file. I'm going to select all of the information in that file, making sure I get everything including the last line, copy it into my clipboard, and then back at the AWS console paste it into the user data box.

Now, by default EC2 accepts user data as base64-encoded data, and we're not giving it that; we're just giving it a normal text file. In this case the user interface can do the conversion for us. If what you're pasting in is not base64 encoded, and what we're pasting in isn't, then we don't need to do anything else. If we were pasting in data which was already base64 encoded, we'd need to check the box below the user data box. We don't need to worry about that, so we can just paste our user data directly into this box, and it will be run during the instance launch process. This is where our automatic configuration comes from; this is what will bootstrap the EC2 instance.

Okay, so that's everything we need to configure, so go ahead and click on launch instance. While this is launching, keep in mind that in the previous demo examples in this course we manually launched an instance, and then once it was in a running state we had to connect into it, download WordPress, install WordPress and then configure WordPress, along with all of the other associated dependencies that WordPress requires. That was a fairly time-intensive process that was open to errors. In the AMI example we followed that same process, but at the end we created the Amazon Machine Image. So keep that in mind and compare it to your experience in this demo lesson.

So now we've launched the instance, it's in a running state, and we've provided some user data to it. I want you to leave it a couple of minutes after it's showing as running; just give it a brief while to perform that additional configuration. After a few minutes, go ahead and right click on the instance and select connect. We're going to be using EC2 Instance Connect, so make sure that's selected, make sure the user is set to ec2-user, and then just click connect.

What you should see, if we've given this enough time, is our custom Animals for Life login banner, and that means the bootstrapping process has completed. Think about this for a minute: as part of the launch process, EC2 has provisioned us an instance and also run a relatively complex installation and configuration script that we supplied in the form of user data, and that has downloaded and installed WordPress and configured our custom login banner.

If we go back to EC2, select instances, and copy the public IP address into our clipboard (copy the actual IP address; do not click on the link, because that will open it using HTTPS, which we haven't configured) and open that IP address in a new tab, you'll see the installation dialogue for WordPress. That's because the bootstrapping process, using the user data, has done all of the configuration that previously we had to do manually.

Now if we go back to the instance, I want to demonstrate architecturally and operationally exactly how this works. What we can do is use the curl utility to review the instance metadata. Because we're using Amazon Linux 2023, we need to do this slightly differently: we need to use version 2 of the metadata service. First we run a command to get a token which we can use to authenticate to the metadata service. Next we run a command which gets us the metadata of the instance, and this uses the 169.254.169.254 address, or as I like to call it, 169.254 repeating. If we use this with meta-data on the end, then we get the metadata service, but as we know, user data is a component of the metadata service, so instead of using /latest/meta-data we can replace meta-data with user-data, and this will allow us to see the user data supplied to the instance. Don't worry, all of these commands will be attached to the lesson.

You should recognize this output: it's the user data that we passed into the instance, and it has performed a download, configuration and installation of Apache, the database server and WordPress, as well as our custom login banner. So that's how the user data gets into the EC2 instance, and there's a service running on the instance which takes this data and automatically performs these configuration steps; essentially it's run as a script on the operating system.

Something else we can do is move into the /var/log folder, which contains many of the system logs. If we do an ls -la we'll see a collection of logs within this folder, and there are two in particular that are really useful for diagnosing bootstrapping-related problems: cloud-init.log and cloud-init-output.log, and both of these are used for slightly different reasons.

So what I want to do is output one of these logs and show you the content. We'll use sudo first to get admin permissions, then cat, and we're going to use cloud-init-output.log. Press enter, and that shows the contents of the file. Using this log you'll be able to see exactly what's been executed on this EC2 instance: all of the actual commands and the output from those commands as they ran. So you'll be able to see all of the WordPress-related downloads and copies, the replacements of the database usernames and passwords, the permissions fix section, the database creation, user creation and the permissions on that database, as well as the commands that actually executed those steps, and right at the bottom is where we configure our custom login banner. This is how you can see exactly what's been run on this EC2 instance, and if you ever encounter any issues with any of the demo lessons within this course, or any of my courses, then you can use this file to determine exactly what happened as part of the bootstrapping process.

Okay, so this is the end of part one of this lesson. It was getting a little on the long side, so I wanted to add a break; it's an opportunity to take a rest or grab a coffee. Part two will continue immediately from the end of part one. So go ahead, complete the video, and when you're ready, join me in part two.
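One detail from the launch step is easy to verify locally: EC2 ultimately receives user data base64 encoded, and the console encodes plain text on your behalf, which is why we left the "already encoded" checkbox unchecked. A quick Python sketch of that round trip (the script content is a trivial stand-in, not the lesson's real user data):

```python
import base64

# A trivial stand-in for a bootstrap script
script = "#!/bin/bash\ndnf -y install httpd\n"

# What the console does on your behalf before handing the data to EC2
encoded = base64.b64encode(script.encode()).decode()

# What the instance-side tooling effectively does when it runs your script
decoded = base64.b64decode(encoded).decode()

print(decoded == script)
```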
-
Welcome back, and in this demo lesson you're going to be creating an ECS cluster using the Fargate cluster mode, and then, using the container of cats image that we created together earlier in this section of the course, you're going to deploy that container into your Fargate cluster.
So you're going to get some practical experience of how to deploy a real container into a Fargate cluster.
Now you won't need any cloud formation templates applied to perform this demo because we're going to use the default VPC.
All that you'll need is to be logged in as the IAM admin user inside the management account of the organization and just make sure that you're in the Northern Virginia region.
Once you've confirmed that then just click in Find Services and type ECS and then click to move to the ECS console.
Once you're at the ECS console, step one is to create a Fargate cluster.
So that's the cluster that our container is going to run inside.
So click on clusters, then create cluster.
You'll need to give the cluster a name.
You can put anything you want here, but I recommend using the same as me; I'll be putting all the cats.
Now Fargate mode requires a VPC.
I'm going to be suggesting that we use the default VPC because that's already configured, remember, to give public IP addresses to anything deployed into the public subnets.
So just to keep it simple and avoid any extra configuration, we'll use the default VPC.
Now it should automatically select all of the subnets within the default VPC, in my case all six.
If yours doesn't, just make sure you select all of the available subnets from this dropdown, but it should do this by default.
Then scroll down and just note how AWS Fargate is already selected and that's the default.
If you wanted to, you could check to use Amazon EC2 instances or external instances using ECS anywhere, but for this demo, we won't be doing that.
Instead, we'll leave everything else as default, scroll down to the bottom and click create.
If this is the first time you're doing this in an AWS account, it's possible that you'll get the error that's shown on screen now.
If you do get this error, then what I would suggest is to wait a few minutes, then go back to the main ECS console, go to cluster again and then create the all the cats cluster again.
So follow exactly the same steps: call the cluster all the cats, make sure that the default VPC is selected and all of its subnets are present, and then click on create.
You should find that the second time that you run this creation process, it works okay.
Now this generally happens because there's an approval process that needs to happen behind the scenes.
So if this is the first time that you're using ECS within this AWS account, then you might get this error.
It's nothing to worry about, just rerun the process and it should create fine the second time.
Once you've followed that process through again, or if it works the first time, then just go ahead and click on the all the cats cluster.
So this is the Fargate based cluster.
It's in an active state, so we're good to deploy things into this cluster.
And we can see that we've got no active services.
If I click on tasks, we can see we've got no active tasks.
There's a tab here, metrics where you can see cloud watch metrics about this cluster.
And again, because this is newly created and it doesn't have any activity, all of this is going to be blank.
For now, that's fine.
What we need to do for this demonstration is create a task definition that will deploy our container, our container of cats container into this Fargate cluster.
To do that, click on task definitions and create a new task definition.
You'll need to pick a name for your task definition.
Go ahead and put container of cats.
And then inside this task definition, the first thing to do is set the details of the container for this task.
So under container details under name, go ahead and put container of cats web.
So this is going to be the web container for the container of cats task.
Then next to the name under image URI, you need to point this at the docker image that's going to be used for this container.
So I'm going to go ahead and paste in the URI for my docker image.
So this is the docker image that I created earlier in the course within the EC2 docker demo.
You might have also created your own container image.
You can feel free to use my container image or you can use yours.
If you want to keep things simple, you should go ahead and use mine.
Yours should be the same anyway.
Now just to be careful, this isn't a URL.
This is a URI to point at my docker image.
So it consists of three parts.
First we have docker.io, which is the docker hub.
Then we have my username, so acantril.
And then we have the repository name, which is container of cats.
So if you want to use your own docker image, you need to change both the username and the repository name.
Again, to keep things simple, feel free to use my docker image.
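To make those three parts concrete, here's a minimal shell sketch composing the image URI. The username and repository values are the ones from this demo, so swap in your own if you're using your own image:

```shell
# Composing the three parts of a Docker Hub image URI
REGISTRY="docker.io"          # the Docker Hub registry
USERNAME="acantril"           # Docker Hub username (use your own if pushing your image)
REPOSITORY="containerofcats"  # the repository name
echo "${REGISTRY}/${USERNAME}/${REPOSITORY}"
```

Running this prints the full URI that you'd paste into the image URI field.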
Then scrolling down, we need to make sure that the port mappings are correct.
It should show what's on screen now, so container port 80, TCP.
And then the port name should be the same or similar to what's on screen now.
Don't worry if it's slightly different and the application protocol should be HTTP.
This is controlling the port mapping from the container through to the Fargate IP address.
And I'll talk more about this IP address later on in this demo.
Everything else looks good, so scroll down to the bottom and click on next.
We need to specify some environment details.
So under operating system/architecture, it needs to be linux/x86_64.
Under task size for memory, go ahead and select 1GB and then under CPU, 0.5 vCPU.
That should be enough resources for this simple docker application.
Scroll down and under monitoring and logging, uncheck use log collection.
We won't be needing it for this demo lesson.
That's everything we need to do.
Go ahead and click on next.
This is just an overview of everything that we've configured, so you can scroll down to the bottom and click on create.
And at this point, the task definition has been created successfully.
And this is where you can see all of the details of the task definition.
If you want to see the raw JSON for the task definition itself, you don't need this for the exam, but this is actually what a task definition looks like.
So it contains all of this different information.
What it has got is one or more container definitions.
So this is just JSON.
This is a list of container definitions.
We've only got the one.
And if you're looking at this, you can see where we set the port mapping.
So we're mapping port 80.
You can see where it's got the image URI, which is where it pulls the docker image from.
This is exactly what a normal task and container definition look like.
They can be significantly more complex, but this format is consistent across all task definitions.
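As a rough illustration, a trimmed Fargate task definition with a single container definition might look like the following. Every value here is illustrative rather than copied from the console, but the shape matches what's described above: a family name, task-level CPU and memory, and a list of container definitions with an image URI and port mapping.

```json
{
  "family": "containerofcats",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "containerDefinitions": [
    {
      "name": "containerofcatsweb",
      "image": "docker.io/acantril/containerofcats",
      "portMappings": [
        { "containerPort": 80, "protocol": "tcp", "appProtocol": "http" }
      ]
    }
  ]
}
```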
Okay, so now it's time to launch a task.
It's time to take the container and task definitions that we've defined and actually run up a container inside ECS using those definitions.
So to do that, click on clusters and then select the all the cats cluster.
Click on tasks and then click on run a new task.
Now, first we need to pick the compute options and we're going to select launch type.
So check that box.
Where appropriate for the certification that you're studying for, I'll talk about the differences between these two launch types in a different lesson.
Once you've clicked on launch type, make sure Fargate is selected in the launch type drop down and latest is selected under platform version.
Then scroll down and we're going to be creating a task.
So make sure that task is selected.
Scroll down again and under family, make sure container of cats is selected.
And then under revision, select latest.
We want to make sure the latest version is used and we'll leave desired tasks at one and task group blank.
Scroll down and expand networking.
Make sure the default VPC is selected and then make sure again that all of the subnets inside the default VPC are present under subnets.
The default is that all of them should be selected, in my case six.
Now the way that this task is going to work is that when the task is run within Fargate, an elastic network interface is going to be created within the default VPC.
And that elastic network interface is going to have a security group.
So we need to make sure that the security group is appropriate and allows us to access our containerized application.
So check the box to say create a new security group, and then for the security group name and description, use containerofcats-SG.
We need to make sure that the rule on this security group is appropriate.
So under type select HTTP and then under source change this to anywhere.
And this will mean that anyone can access this containerized application.
Finally make sure that public IP is turned on.
This is really important because this is how we'll access our containerized application.
Everything else looks good.
We can scroll down to the bottom and click on create.
Now give that a couple of seconds.
Initially the last status should be set to provisioning and the desired status should be set to running.
So we need to wait for this task provisioning to complete.
So just keep hitting refresh.
You'll see it first change into pending.
Now at this point we need this task to be in a running state before we can continue.
So go ahead and pause the video and wait for both of these states.
So last status and desired status both of those need to be running before we continue.
So pause the video, wait for both of those to change and then once they have you can resume and will continue.
After another refresh the last status should now be running and in green and the desired state should also be running.
So at that point we're good to go.
We can click on the task link below.
We can scroll down and our task has been allocated a private IP version 4 address in the default VPC, and also a public IP version 4 address.
So if we copy this public IP into our clipboard and then open a new tab and browse to this IP we'll see our very corporate professional web application.
If it fits, I sits in a container in a container.
So we've taken a Docker image that we created earlier in this section of the course.
We've created a Fargate cluster, created a task definition with a container definition inside and deployed our container image as a container to this Fargate cluster.
So it's a very simple example, but again this scales.
So you could deploy Docker containers which are a lot more complex in what functionality they offer.
In this case it's just an Apache web server loading up a web page but we could deploy any type of web application using the same steps that you've performed in this demo lesson.
So congratulations, you've learned all of the theory that you'll need for the exam and you've taken the steps to implement this theory in practice by deploying a Docker image as a container on an ECS Fargate cluster.
So great job.
At this point all that remains is to tidy up.
So go back to the AWS console.
Just stop this container.
Click on stop.
Click on task definitions and then go into this task definition.
Select this.
Click on actions, deregister and then click on deregister.
Click back on task definitions and make sure there's no results there.
That's good.
Click on clusters.
Click on all the cats.
Delete the cluster.
You'll need to type delete space all the cats and then click on delete to confirm.
And at that point the Fargate cluster has been deleted.
The running container has been stopped.
The task definition has been deregistered and our account is back in the same state as when we started.
So at this point you've completed the demo.
You've done great and you've implemented some pretty complex theory.
So you should already have a head start on any exam questions which involve ECS.
We're going to be using ECS a lot more as we move through the course and we're going to be using it in some of the Animals for Life demos as we implement progressively more complex architectures later on in the course.
For now I just wanted to give you the basics but you've done really well if you've implemented this successfully without any issues.
So at this point go ahead, complete this video and when you're ready join me in the next.
Welcome back and in this demo lesson you're going to learn how to install the Docker engine inside an EC2 instance and then use that to create a Docker image.
Now this Docker image is going to be running a simple application and we'll be using this Docker image later in this section of the course to demonstrate the Elastic Container service.
So this is going to be a really useful demo where you're going to gain the experience of how to create a Docker image.
Now there are a few things that you need to do before we get started.
First as always make sure that you're logged in to the I am admin user of the general AWS account and you'll also need the Northern Virginia region selected.
Now attached to this lesson is a one-click deployment link so go ahead and click that now.
This is going to deploy an EC2 instance with some files pre downloaded that you'll use during the demo lesson.
Now everything's pre-configured you just need to check this box at the bottom and click on create stack.
Now that's going to take a few minutes to create and we need this to be in a create complete state.
So go ahead and pause the video wait for your stack to move into create complete and then we're good to continue.
So now this stack is in a create complete state and we're good to continue.
Now if you're following along with this demo within your own environment there's another link attached to this lesson called the lesson commands document and that will include all of the commands that you'll need to type as you move through the demo.
Now I'm a fan of typing all commands in manually because I personally think that it helps you learn but if you are the type of person who has a habit of making mistakes when typing along commands out then you can copy and paste from this document to avoid any typos.
Now one final thing before we finish at the end of this demo lesson you'll have the opportunity to upload the Docker image that you create to Docker Hub.
If you're going to do that then you should pre sign up for a Docker Hub account if you don't already have one and the link for this is included attached to this lesson.
If you already have a Docker Hub account then you're good to continue.
Now at this point what we need to do is to click on the resources tab of this stack and locate the public EC2 resource.
Now this is a normal EC2 instance that's been provisioned on your behalf and it has some files which have been pre downloaded to it.
So just go ahead and click on the physical ID next to public EC2 and that will move you to the EC2 console.
Now this machine is set up and ready to connect to and I've configured it so that we can connect to it using Session Manager and this avoids the need to use SSH keys.
So to do that just right-click and then select connect.
You need to pick Session Manager from the tabs across the top here and then just click on connect.
Now that will take a few minutes but once connected you should see this prompt.
So it should say SH- and then a version number and then dollar.
Now the first thing that we need to do as part of this demo lesson is to install the Docker engine.
The Docker engine is the thing that allows Docker containers to run on this EC2 instance.
So we need to install the Docker engine package and we'll do that using this command.
So we're using sudo to get admin permissions, then the package manager DNF, then install, then docker.
So go ahead and run that and that will begin the installation of Docker.
It might take a few moments to complete it might have to download some prerequisites and you might have to answer that you're okay with the install.
So press Y for yes and then press enter.
Now we need to wait a few moments for this install process to complete and once it has completed then we need to start the Docker service and we do that using this command.
So sudo again to get admin permissions, and then service, and then the docker service, and then start.
So type that and press enter and that starts the Docker service.
Now I'm going to type clear and then press enter to make this easier to see and now we need to test that we can interact with the Docker engine.
So the most simple way to do that is to type Docker space and then PS and press enter.
Now you're going to get an error.
This error is because not every user of this EC2 instance has the permissions to interact with the Docker engine.
We need to grant permissions for this user or any other users of this EC2 instance to be able to interact with the Docker engine and we're going to do that by adding these users to a group and we do that using this command.
So sudo for admin permissions, then usermod, then -a and -G for group, then the docker group, and then ec2-user.
Now that will allow a local user of this system, specifically ec2-user, to be able to interact with the Docker engine.
Okay, so I've cleared the screen to make it slightly easier to see, now that we've given ec2-user the ability to interact with Docker.
So the next thing is we need to log out and log back in of this instance.
So I'm going to go ahead and type exit just to disconnect from session manager and then click on close and then I'm going to reconnect to this instance and you need to do the same.
So connect back in to this EC2 instance.
Now once you're connected back into this EC2 instance, we need to run another command which switches us to ec2-user, so it essentially logs us in as ec2-user.
So that's this command, and the result of this is the same as if you directly logged in as ec2-user.
Now the reason we're doing it this way is because we're using session manager so that we don't need a local SSH client or to worry about SSH keys.
We can directly log in via the console UI, we just then need to switch to ec2-user.
So run this command and press enter, and we're now logged into the instance as ec2-user. To test everything's okay, we need to use a command with the Docker engine, and that command is docker space ps. If everything's okay, you shouldn't see any output beyond this list of headers.
What we've essentially done is told the Docker engine to give us a list of any running containers and even though we don't have any it's not erred it's simply displayed this empty list and that means everything's okay.
So good job.
Now, to speed things up, if you just run ls and press enter, you'll see the instance has been configured to download the sample application that we're going to be using, and that's the file container.zip within this folder.
I've configured the instance to automatically extract that zip file which has created the folder container.
So at this point I want you to go ahead and type cd space container and press enter and that's going to move you inside this container folder.
Then I want you to clear the screen by typing clear and press enter and then type ls space -l and press enter.
Now this is the web application which I've configured to be automatically downloaded to the EC2 instance.
It's a simple web page: we've got index.html, which is the index, a number of images which index.html references, and then we have a Dockerfile.
Now this docker file is the thing that the docker engine will use to create our docker image.
I want to spend a couple of moments just stepping you through exactly what's within this docker file.
So I'm going to move across to my text editor and this is the docker file that's been automatically downloaded to your EC2 instance.
Each of these lines is a directive to the docker engine to perform a specific task and remember we're using this to create a docker image.
This first line tells the docker engine that we want to use version 8 of the Red Hat Universal base image as the base component for our docker image.
This next line sets the maintainer label it's essentially a brief description of what the image is and who's maintaining it in this case it's just a placeholder of animals for life.
This next line runs a command specifically the yum command to install some software specifically the Apache web server.
This next command, COPY, copies files from the local directory when you use the docker command to create an image. So it's copying that index.html file from the local folder that I've just been talking about, and it's going to put it inside the docker image at this path, /var/www/html, which is where an Apache web server expects index.html to be located.
This next command is going to do the same process for all of the jpegs in this folder so we've got a total of six jpegs and they're going to be copied into this folder inside the docker image.
This line sets the entry point and this essentially determines what is first run when this docker image is used to create a docker container.
In this example it's going to run the Apache web server and finally this expose command can be used for a docker image to tell the docker engine which services should be exposed.
Now this doesn't actually perform any configuration it simply tells the docker engine what port is exposed in this case port 80 which is HTTP.
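Putting those directives together, the Dockerfile being described looks roughly like this. The exact base image tag, label value, file names and entry point arguments are assumptions rather than copied from the course files:

```dockerfile
# Base: version 8 of the Red Hat Universal Base Image
FROM redhat/ubi8
# Maintainer label (placeholder, as described)
LABEL maintainer="Animals4Life"
# Install the Apache web server
RUN yum -y install httpd
# Copy the web page and images into Apache's document root
COPY index.html /var/www/html/
COPY *.jpg /var/www/html/
# Run Apache in the foreground when a container starts
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]
# Document that the container serves HTTP on port 80
EXPOSE 80
```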
Now this docker file is going to be used when we run the next command which is to create a docker image.
So essentially this file is the same docker file that's been downloaded to your EC2 instance and that's what we're going to run next.
So this is the next command within the lesson commands document and this command builds a container image.
What we're essentially doing is giving it the location of the docker file.
This dot at the end indicates the working directory, so it's here that the build will find the Dockerfile and any associated files that the Dockerfile uses.
So we're going to run this command and this is going to create our docker image.
So let's go ahead and run this command.
It's going to download version 8 of UBI which it will use as a starting point and then it's going to run through every line in the docker file performing each of the directives and each of those directives is going to create another layer within the docker image.
Remember from the theory lesson each line within the docker file generally creates a new file system layer so a new layer of a docker image and that's how docker images are efficient because you can reuse those layers.
Now in this case this has been successful.
We've successfully built a docker image with this ID, so it's given it a unique ID, and it's tagged this docker image with the tag :latest.
So this means that we have a docker image that's now stored on this EC2 instance.
Now I'll go ahead and clear the screen to make it easier to see and let's go ahead and run the next command which is within the lesson commands document and this is going to show us a list of images that are on this EC2 instance but we're going to filter based on the name container of cats and this will show us the docker image which we've just created.
So the next thing that we need to do is to use the docker run command which is going to take the image that we've just created and use it to create a running container and it's that container that we're going to be able to interact with.
So this is the command that we're going to use it's the next one within the lesson commands document.
It's docker run and then it's telling it to map port 80 on the container with port 80 on the EC2 instance and it's telling it to use the container of cats image and if we run that command docker is going to take the docker image that we've got on this EC2 instance run it to create a running container and we should be able to interact with that container.
So go back to the AWS console, click on Instances, and look for the a4l-public EC2 instance that's in the running state.
I'm just going to go ahead and select this instance so that we can see the information and we need the public IP address of this instance.
Go ahead and click on this icon to copy the public IP address into your clipboard and then open that in a new tab.
Now be sure not to use this link to the right because that's got a tendency to open the HTTPS version.
We just need to use the IP address directly.
So copy that into your clipboard, open a new tab, and then open that IP address. Now we can see the amazing application: if it fits, i sits, in a container, in a container. This amazing looking enterprise application is what's contained in the docker image that you just created, and it's now running inside a container based off that image.
So that's great everything's working as expected and that's running locally on the EC2 instance.
Now in the demo lesson for the elastic container service that's coming up later in this section of the course you have two options.
You can either use my docker image which is this image that I've just created or you can use your own docker image.
If you're going to use my docker image then you can skip this next step.
You don't need a docker hub account and you don't need to upload your image.
If you want to use your own image then you do need to follow these next few steps and I need to follow them anyway because I need to upload this image to docker hub so that you can potentially use it rather than your own image.
So I'm going to move back to the session manager tab and I'm going to control C to exit out of this running container and I'm going to type clear to clear the screen and make it easier to see.
Now to upload this to docker hub first you need to log in to docker hub using your credentials and you can do that using this command.
So it's docker space login space double hyphen username equals and then your username.
So if you're doing this in your own environment you need to delete this placeholder and type your username.
I'm going to type my username because I'll be uploading this image to my docker hub.
So this is my docker hub username and then press enter and it's going to ask for the corresponding password to this username.
So I'm going to paste in my password if you're logging into your docker hub you should use your password.
Once you've pasted in the password go ahead and press enter and that will log you in to docker hub.
Now you don't have to worry about the security message because whilst your docker hub password is going to be stored on the EC2 instance shortly we're going to terminate this instance which will remove all traces of this password from this machine.
Okay so again we're going to upload our docker image to docker hub so let's run this command again and you'll see because we're just using the docker images command we can see the base image as well as our image.
So we can see red hat UBI 8.
We want the container of cats latest though so what you need to do is copy down the image ID of the container of cats image.
So this is the top line in my case container of cats latest and then the image ID.
So then we need to run this command so docker space tag and then the image ID that you've just copied into your clipboard and then a space and then your docker hub username.
In my case it's acantril, with one L. If you're following along you need to use your own username, and then forward slash, and then the name of the image that you want this to be stored as on Docker Hub, so I'm going to use container of cats.
So that's the command you need to use so docker tag and then your image ID for container of cats and then your username forward slash container of cats and press enter and that's everything we need to do to prepare to upload this image to docker hub.
So the last command that we need to run is the command to actually upload the image to Docker Hub, and that command is docker space push. So we're going to push the image to Docker Hub. Then we need to specify the Docker Hub username. Again, this is my username, but if you're doing this in your environment it needs to be your username. Then forward slash, and then the image name, in my case container of cats, and then colon latest. Once you've got all that, go ahead and press enter, and that's going to push the docker image that you've just created up to your Docker Hub account. Once it's up there, it means that we can deploy from that docker image to other EC2 instances and even ECS, and we're going to do that in a later demo in this section of the course.
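As a dry-run sketch of that tag-and-push sequence, the commands take this shape. The image ID is a placeholder and the username is mine, so substitute your own values; this sketch only prints the commands rather than invoking Docker:

```shell
# Dry-run sketch of the tag-and-push sequence (placeholder values)
IMAGE_ID="abc123def456"        # placeholder: the image ID copied from 'docker images'
DOCKERHUB_USER="acantril"      # replace with your Docker Hub username
REPO="containerofcats"
TAG_CMD="docker tag ${IMAGE_ID} ${DOCKERHUB_USER}/${REPO}"
PUSH_CMD="docker push ${DOCKERHUB_USER}/${REPO}:latest"
echo "$TAG_CMD"
echo "$PUSH_CMD"
```

On the instance you'd run the two printed commands directly, after docker login.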
Now that's everything that you need to do in this demo lesson. You've essentially installed and configured the Docker engine, used a Dockerfile to create a docker image from some local assets, tested that image by running a container from it, and then uploaded that image to Docker Hub. As I mentioned before, we're going to use that in a future demo lesson in this section of the course.
Now the only thing that remains is to clear up the infrastructure that we've used in this demo lesson. Go ahead and close down all of these extra tabs and go back to the CloudFormation console. This is the stack that's been created by the one-click deployment link. All you need to do is select this stack, which should be called EC2 docker, then click on delete and confirm that deletion, and that will return the account to the same state as it was at the start of this demo lesson.
Now that is everything you need to do in this demo lesson. I hope it's been useful and I hope you've enjoyed it. So go ahead and complete the video, and when you're ready, I look forward to you joining me in the next.
Welcome back and in this very brief demo lesson, I just want to demonstrate a very specific feature of EC2 known as termination protection.
Now you don't have to follow along with this in your own environment, but if you are, you should still have the infrastructure created from the previous demo lesson.
And also if you are following along, you need to be logged in as the I am admin user to the general AWS account.
So the management account of the organization and have the Northern Virginia region selected.
Now again, this is going to be very brief.
So it's probably not worth doing in your own environment unless you really want to.
Now what I want to demonstrate is termination protection.
So I'm going to go ahead and move to the EC2 console where I still have an EC2 instance running created in the previous demo lesson.
Now normally if I right click on this instance, I'm given the ability to stop the instance, to reboot the instance or to terminate the instance.
And this is assuming that the instance is currently in a running state.
Now if I go to terminate instance, straight away I'm presented with a dialogue where I need to confirm that I want to terminate this instance.
But it's easy to imagine that somebody who's less experienced with AWS can go ahead and terminate that and then click on terminate to confirm the process without giving it much thought.
And that can result in data loss, which isn't ideal.
What you can do to add another layer of protection is to right click on the instance, go to instance settings, and then change termination protection.
If you click that option, you get this dialogue where you can enable termination protection.
So I'm going to do that, I'm going to enable termination protection because this is an essential website for animals for life.
So I'm going to enable it and click on save.
And now that instance is protected against termination.
If I right click on this instance now and go to terminate instance and then click on terminate, I get a dialogue that I'm unable to terminate the instance.
It says the instance, and then the instance ID, may not be terminated; modify its disableApiTermination instance attribute and then try again.
So this instance is now protected against accidental termination.
Now this presents a number of advantages.
One, it protects against accidental termination, but it also adds a specific permission that is required in order to terminate an instance.
So you need the permission to disable this termination protection in addition to the permissions to be able to terminate an instance.
So you have the option of role separation.
You can either require people to have both the permissions to disable termination protection and permissions to terminate, or you can give those permissions to separate groups of people.
So you might have senior administrators who are the only ones allowed to remove this protection, and junior or normal administrators who have the ability to terminate instances, and that essentially establishes a process where a senior administrator is required to disable the protection before instances can be terminated.
It adds another approval step to this process, and it can be really useful in environments which contain business critical EC2 instances.
So you might not have this for development and test environments, but for anything in production, this might be a standard feature.
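As a sketch of that role separation, a hypothetical junior-administrator IAM policy might allow termination while explicitly denying changes to instance attributes. Note that for simplicity the Deny here covers all of ec2:ModifyInstanceAttribute, which is broader than just the disableApiTermination attribute:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTerminate",
      "Effect": "Allow",
      "Action": "ec2:TerminateInstances",
      "Resource": "*"
    },
    {
      "Sid": "DenyDisablingTerminationProtection",
      "Effect": "Deny",
      "Action": "ec2:ModifyInstanceAttribute",
      "Resource": "*"
    }
  ]
}
```

Senior administrators would have a policy without the Deny statement, so only they can lift the protection before a termination.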
If you're provisioning instances automatically using cloud formation or other forms of automation, this is something that you can enable in an automated way as instances are launching.
So this is a really useful feature to be aware of.
And for the SysOps exam, it's essential that you understand when and where you'd use this feature.
And for both the SysOps and the developer exams, you should pay attention to this disableApiTermination attribute.
You might be required to know which attribute needs to be modified in order to allow terminations.
So really for both of the exams, just make sure that you're aware of exactly how this process works end to end, specifically the error message that you might get if this attribute is enabled and you attempt to terminate an instance.
At this point though, that is everything that I wanted to cover about this feature.
So right click on the instance, go to instance settings, change the termination protection and disable it, and then click on save.
One other feature which I want to introduce quickly, if we right click on the instance, go to instance settings, and then change shutdown behavior, you're able to specify whether an instance should move into a stop state when shut down, or whether you want it to move into a terminate state.
Now logically, the default is stop, but if you're running an environment where you don't consider the state of an instance to be valuable, then potentially you might want it to terminate when it shuts down.
You might not want to have an account with lots of stopped instances.
You might want the default behavior to be terminate, but this is a relatively niche feature, and in most cases, you do want the shutdown behavior to be stop rather than terminate, but it's here where you can change that default behavior.
Now at this point, that is everything I wanted to cover.
If you were following along with this in your own environment, you do need to clear up the infrastructure.
So click on the services dropdown, move to cloud formation, select the status checks and protect stack, and then click on delete and confirm that by clicking delete stack.
And once this stack finishes deleting all of the infrastructure that's been used during this demo and the previous one will be cleared from the AWS account.
If you've just been watching, you don't need to worry about any of this process, but at this point, we're done with this demo lesson.
So go ahead, complete the video, and once you're ready, I'll look forward to you joining me in the next.
Welcome back, and in this demo lesson you're going to get some experience interacting with an Amazon Machine Image, or you can simply watch me do it.
So we created an Amazon machine image or AMI in a previous demo lesson and if you recall it was customized for animals for life.
It had an install of WordPress, it had the Cowsay application installed, and a custom login banner.
Now this is a really simple example of an AMI but I want to step you through some of the options that you have when dealing with AMIs.
So if we go to the EC2 console and if you are following along with this in your own environment do make sure that you're logged in as the IAM admin user of the general AWS account, so the management account of the organization and you have the Northern Virginia region selected.
The reason for being so specific about the region is that AMIs are regional entities so you create an AMI in a particular region.
So if I go and select AMIs under images within the EC2 console I'll see the animals for life AMI that I created in a previous demo lesson.
Now if I go ahead and change the region from Northern Virginia, which is us-east-1, to Ohio, which is us-east-2, what we'll see is we'll go back to the same area of the console, only now we won't see any AMIs.
That's because an AMI is tied to the region in which it's created.
Every AMI belongs in one region and it has a unique AMI ID.
So let's move back to Northern Virginia.
Now we are able to copy AMIs between regions, which allows us to make one AMI and use it for a global infrastructure platform.
So we can right click and select Copy AMI, then select the destination region.
For this example, let's say that I did want to copy it to Ohio; I would select that in the drop-down.
It would allow me to change the name if I wanted, or I could keep it the same.
For the description, it would show that it's been copied from this AMI ID in this region, and then it would have the existing description at the end.
So at this point I'm going to go ahead and click Copy AMI, and that process has now started.
If I close down this dialogue and then change the region from us-east-1 to us-east-2, we now have a pending AMI, and this is the AMI that's being copied from the us-east-1 region into this region.
If we go ahead and click on Snapshots under Elastic Block Store, then we're going to see the snapshot or snapshots which belong to this AMI.
Now depending on how busy AWS is it can take a few minutes for the snapshots to appear on this screen just go ahead and keep refreshing until they appear.
In our case we only have the one which is the boot volume that's used for our custom AMI.
Now the time taken to copy a snapshot between regions depends on many factors: what the source and destination regions are, the distance between the two, the size of the snapshot, and the amount of data it contains.
It can take anywhere from a few minutes to much, much longer, so this is not an immediate process.
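The console copy described above has a CLI equivalent, ec2 copy-image. A minimal sketch with a hypothetical source AMI ID; the command is printed rather than executed, since running it needs real credentials:

```shell
# Hypothetical source AMI ID, for illustration only.
SOURCE_AMI="ami-0123456789abcdef0"

# Copy the AMI from us-east-1 into us-east-2:
copy_cmd="aws ec2 copy-image --source-region us-east-1 --source-image-id $SOURCE_AMI --region us-east-2 --name a4l-wordpress-copy"

# copy-image returns a NEW AMI ID in the destination region,
# reinforcing that the copy is a completely separate AMI.
echo "$copy_cmd"
```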
Once the snapshot copy completes then the AMI copy process will complete and that AMI is then available in the destination region but an important thing that I want to keep stressing throughout this course is that this copied AMI is a completely different AMI.
AMIs are regional don't fall for any exam questions which attempt to have you use one AMI for several regions.
If we're copying this animals for life AMI from one region to another region in effect we're creating two different AMIs.
So take note of this AMI ID in this region, and if we switch back to the original source region, us-east-1, note how this AMI has a different ID.
So they are different AMIs, completely different AMIs; you're creating a new one as part of the copy process.
So while the data is going to be the same conceptually they are completely separate objects and that's critical for you to understand both for production usage and when answering any exam questions.
Now while that's copying I want to demonstrate the other important thing which I wanted to show you in this demo lesson and that's permissions of AMIs.
So if I right-click on this AMI and edit AMI permissions by default an AMI is private.
Being private means that it's only accessible within the AWS account which has created the AMI and so only identities within that account that you grant permissions are able to access it and use it.
Now you can change the permission of the AMI you could set it to be public and if you set it to public it means that any AWS account can access this AMI and so you need to be really careful if you select this option because you don't want any sensitive information contained in that snapshot to be leaked to external AWS accounts.
A much safer way is if you do want to share the AMI with anyone else then you can select private but explicitly add other AWS accounts to be able to interact with this AMI.
So I could click in this box and then for example if I clicked on services and I just moved to the AWS organization service I'll open that in a new tab and let's say that I chose to share this AMI with my production account so I selected my production account ID and then I could add this into this box which would grant my production AWS account the ability to access this AMI.
Now note that there's also this checkbox, which adds create volume permissions to the snapshots associated with this AMI, so this is something that you need to keep in mind.
Generally, if you are sharing an AMI to another account inside your organization, you can afford to be relatively liberal with permissions.
So if you're sharing this internally, I would definitely check this box, and that gives full permissions on the AMI as well as the snapshots, so that anyone can create volumes from those snapshots as well as accessing the AMI.
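The two permission grants just described (launch permission on the AMI, plus create volume permission on its backing snapshot) can be sketched with the CLI. All IDs below are hypothetical, and the commands are printed rather than executed:

```shell
# Hypothetical IDs, for illustration only.
AMI_ID="ami-0123456789abcdef0"
SNAP_ID="snap-0123456789abcdef0"
TARGET_ACCOUNT="111122223333"

# Grant one specific account launch permission on the AMI - the safer
# alternative to making the AMI public:
share_ami="aws ec2 modify-image-attribute --image-id $AMI_ID --launch-permission Add=[{UserId=$TARGET_ACCOUNT}]"

# The CLI equivalent of the 'create volume permissions' checkbox - grant the
# same account permission on the AMI's backing snapshot:
share_snap="aws ec2 modify-snapshot-attribute --snapshot-id $SNAP_ID --attribute createVolumePermission --operation-type add --user-ids $TARGET_ACCOUNT"

# Printed, not executed:
echo "$share_ami"
echo "$share_snap"
```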
So these are all things that you need to consider.
Generally it's much preferred to explicitly grant an AWS account permissions on an AMI rather than making that AMI public.
If you do make it public you need to be really sure that you haven't leaked any sensitive information, specifically access keys.
While you do need to be careful of that as well if you're explicitly sharing it with accounts, generally if you're sharing it with accounts then you're going to be sharing it with trusted entities.
You need to be very very careful if ever you're using this public option and I'll make sure I include a link attached to this lesson which steps through all of the best practice steps that you need to follow if you're sharing an AMI publicly.
There are a number of really common steps that you can use to minimize lots of common security issues and that's something you should definitely do if you're sharing an AMI.
Now if you want to, you could also share an AMI with an organizational unit or an organization, and you can do that using this option.
This makes it easier if you want to share an AMI with all AWS accounts within your organization.
At this point though I'm not going to do that we don't need to do that in this demo.
What we're going to do now though is move back to us-east-2 and tidy up.
Now that this AMI is available, we can right click, select Deregister, and then move back to us-east-1.
And now that we've finished this demo lesson, we can do the same process with this AMI.
So we can right click, select Deregister, and that will remove that AMI.
Click on Snapshots; this is the snapshot created by this AMI, so we need to delete this as well.
Right click, select Delete Snapshot, and confirm that.
We'll need to do the same process in the region that we copied the AMI and the snapshot to.
So select us-east-2; it should be the only snapshot in the region, but make sure it is the correct one, then right click, select Delete, and confirm the deletion.
And now you've cleared up all of the extra things created within this demo lesson.
Now that's everything that I wanted to cover I just wanted to give you an overview of how to work with AMIs from the console UI from a copying and sharing perspective.
Go ahead and complete this video and when you're ready I look forward to you joining me in the next.
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
So the first step is to shut down this instance.
So we don't want to create an AMI from a running instance because that can cause consistency issues.
So we're going to close down this tab.
We're going to return to instances, right-click, and we're going to stop the instance.
We need to acknowledge this and then we need to wait for the instance to change into the stopped state.
It will start off in the stopping state.
We'll need to refresh it a few times.
There we can see it's now in a stopped state and to create the AMI, we need to right-click on that instance, go down to Image and Templates, and select Create Image.
So this is going to create an AMI.
And first we need to give the AMI a name.
So let's go ahead and use Animals for Life template WordPress.
And we'll use the same for Description.
Now what this process is going to do is it's going to create a snapshot of any of the EBS volumes, which this instance is using.
It's going to create a block device mapping, which maps those snapshots onto a particular device ID.
And it's going to use the same device ID as this instance is using.
So it's going to set up the storage in the same way.
It's going to record that storage inside the AMI so that it's identical to the instance we're creating the AMI from.
So you'll see here that it's using EBS.
It's got the original device ID.
The volume type is set to the same as the volume that our instance is using, and the size is set to 8.
Now you can adjust the size during this process as well as being able to add volumes.
But generally when you're creating an AMI, you're creating the AMI in the same configuration as this original instance.
Now I don't recommend creating an AMI from a running instance because it can cause consistency issues.
If you create an AMI from a running instance, it's possible that it will need to perform an instance reboot.
You can force that not to occur, so create an AMI without rebooting.
But again, that's even less ideal.
The most optimal way for creating an AMI is to stop the instance and then create the AMI from that stopped instance, which will have fully consistent storage.
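The console flow described above also has a CLI equivalent, ec2 create-image. A minimal sketch with a hypothetical instance ID; the commands are printed rather than executed:

```shell
# Hypothetical instance ID, for illustration only.
INSTANCE_ID="i-0123456789abcdef0"

# Create an AMI from the (ideally stopped) instance, which snapshots its EBS
# volumes and records the block device mapping:
create_cmd="aws ec2 create-image --instance-id $INSTANCE_ID --name a4l-wordpress --description a4l-wordpress"

# If the instance must stay running, --no-reboot skips the reboot at the cost
# of filesystem consistency - the less ideal option described in the lesson:
create_noreboot_cmd="$create_cmd --no-reboot"

# Printed, not executed:
echo "$create_cmd"
echo "$create_noreboot_cmd"
```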
So now that that's set, just scroll down to the bottom and go ahead and click on Create Image.
Now that process will take some time.
If we just scroll down, look under Elastic Block Store and click on Snapshots.
You'll see that initially it's creating a snapshot of the boot volume of our original EC2 instance.
So that's the first step.
So in creating the AMI, what needs to happen is a snapshot of any of the EBS volumes attached to that EC2 instance.
So that needs to complete first.
Initially it's going to be in a pending state.
We'll need to give that a few moments to complete.
If we move to AMIs, we'll see that the AMI is also being created.
It is in a pending state, and it's waiting for that snapshot to complete.
Now creating a snapshot is storing a full copy of any of the data on the original EBS volume.
And the time taken to create a snapshot can vary.
The initial snapshot always takes much longer because it has to take that full copy of data.
And obviously the size of the original volume, and how much data is being used, will influence how long a snapshot takes to create.
So the more data, the larger the volume, the longer the snapshot will take.
After a few more refreshes, the snapshot moves into a completed status, and if we move across to AMIs under Images, after a few moments this too will change away from the pending status.
So let's just refresh it.
After a few moments, the AMI is now also in an available state and we're good to be able to use this to launch additional EC2 instances.
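Instead of manually refreshing the console, the CLI's built-in waiters can poll until the pending states clear. A sketch with hypothetical IDs; the commands are printed rather than executed:

```shell
# Hypothetical IDs, for illustration only.
AMI_ID="ami-0123456789abcdef0"
SNAP_ID="snap-0123456789abcdef0"

# Block until the snapshot copy completes, then until the AMI is usable:
wait_snap="aws ec2 wait snapshot-completed --snapshot-ids $SNAP_ID"
wait_ami="aws ec2 wait image-available --image-ids $AMI_ID"

# Printed, not executed:
echo "$wait_snap"
echo "$wait_ami"
```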
So just to summarize, we've launched the original EC2 instance, we've downloaded, installed and configured WordPress, configured that custom banner.
We've shut down the EC2 instance and generated an AMI from that instance.
And now we have this AMI in a state where we can use it to create additional instances.
So we're going to do that.
We're going to launch an additional instance using this AMI.
While we're doing this, I want you to consider exactly how much quicker this process now is.
So what I'm going to do is to launch an EC2 instance from this AMI and note that this instance will have all of the configuration that we had to do manually, automatically included.
So right click on this AMI and select launch.
Now this will step you through the launch process for an EC2 instance.
You won't have to select an AMI because obviously you are now explicitly using the one that you've just created.
You'll be asked to select all of the normal configuration options.
So first let's put a name for this instance.
So we'll use the name "Instance from AMI".
Then we'll scroll down.
As I mentioned moments ago, we don't have to specify an AMI because we're explicitly launching this instance from an AMI.
Scroll down.
You'll need to specify an instance type just as normal.
We'll use a free tier eligible instance.
This is likely to be t2.micro or t3.micro.
Below that, go ahead and click and select Proceed without a key pair not recommended.
Scroll down.
We'll need to enter some networking settings.
So click on Edit next to Network Settings.
Click in VPC and select A4L-VPC1.
Click in Subnet and make sure that SN-Web-A is selected.
Make sure the boxes below are both set to Enable for the auto-assign IP settings.
Under Firewall, click on Select Existing Security Group.
Click in the Security Groups drop down and select AMI-Demo-Instance Security Group.
And that will have some random characters at the end.
That's absolutely fine.
Select that.
Scroll down.
And notice that the storage is configured exactly the same as the instance which you generated this AMI from.
Everything else looks good.
So we can go ahead and click on Launch Instance.
So this is launching an instance using our custom created AMI.
So let's close down this dialog and we'll see the instance initially in a pending state.
Remember, this is launching from our custom AMI.
So it won't just have the base Amazon Linux 2 operating system.
Now it's going to have that base operating system plus all of the custom configuration that we did before creating the AMI.
So rather than having to perform that same WordPress download installation configuration and the banner configuration each and every time, now we've baked that in to the AMI.
So now when we launch one instance, 10 instances, or 100 instances from this AMI, all of them are going to have this configuration baked in.
So let's give this a few minutes to launch.
Once it's launched, we'll select it, right click, select Connect, and then connect into it using EC2, Instance Connect.
Now one thing you will need to change because we're using a custom AMI, AWS can't necessarily detect the correct username to use.
And so you might see sometimes it says root.
Just go ahead and change this to EC2-user and then go ahead and click Connect.
And if everything goes well, you'll be connected into the instance and you'll see our custom Cowsay banner.
So all that configuration is now baked in and it's automatically included whenever we use that AMI to launch an instance.
If we go back to the AWS console and select Instances, make sure we still have the instance from AMI selected, and then locate its public IPv4 address.
Don't use this link, because that will use HTTPS.
Instead, copy the IP address into your clipboard and open it in a new tab.
Again, all being well, you should see the WordPress installation dialogue and that's because we've baked in the installation and the configuration into this AMI.
So we've massively reduced the ongoing efforts required to launch an animals for life standard build configuration.
If we use this AMI to launch hundreds or thousands of instances each and every time we're saving all the time and the effort required to perform this configuration and using an AMI is just one way that we can automate the build process of EC2 instances within AWS.
And over the remainder of the course, I'm going to be demonstrating the other ways that you can use as well as comparing and contrasting the advantages and disadvantages of each of those methods.
Now that's everything that I wanted to cover in this demo lesson.
You've learned how to create an AMI and how to use it to save significant effort on an ongoing basis.
So let's clear up all of the infrastructure that we've used in this lesson.
So move back to the AWS console, close down this tab, go back to instances, and we need to manually terminate the instance that we created from our custom AMI.
So right click and then go to terminate instance.
You'll need to confirm that.
That will start the process of termination.
Now we're not going to delete the AMI or snapshots because there's a demo coming up later in this section of the course where you're going to get the experience of copying and sharing an AMI between AWS regions.
So we're going to need to leave this in place.
So we're not going to delete the AMI or the snapshots created within this lesson.
Verify that that instance has been terminated and once it has, click on services, go to cloud formation, select the AMI demo stack, select delete and then confirm that deletion.
And that will remove all of the infrastructure that we've created within this demo lesson.
And at this point, that's everything that I wanted you to do in this demo.
So go ahead, complete this video.
And when you're ready, I'll look forward to you joining me in the next.
Welcome back and in this demo lesson you'll be creating an AMI from a pre-configured EC2 instance.
So you'll be provisioning an EC2 instance, configuring it with a popular web application stack and then creating an AMI of that pre-configured web application.
Now you know in the previous demo where I said that you would be implementing the WordPress manual install once?
Well I might have misled you slightly but this will be the last manual install of WordPress in the course, I promise.
What we're going to do together in this demo lesson is create an Amazon Linux AMI for the animals for life business but one which includes some custom configuration and an install of WordPress ready and waiting to be initially configured.
So this is a fairly common use case so let's jump in and get started.
Now in order to perform this demo you're going to need some infrastructure, make sure you're logged into the general AWS account, so the management account of the organization and as always make sure that you have the Northern Virginia region selected.
Now attached to this lesson is a one-click deployment link, go ahead and click that link.
This will open the quick create stack screen, it should automatically be populated with the AMI demo as the stack name, just scroll down to the bottom, check this capabilities acknowledgement box and then click on create stack.
We're going to need this stack to be in a create complete state so go ahead and pause the video and we can resume once the stack moves into create complete.
Okay so that stacks now moved into a create complete state, we're good to continue with the demo.
Now you're going to be using some command line commands within an EC2 instance as part of creating an Amazon machine image so also attached to this lesson is the lessons command document which contains all of those commands so go ahead and open that document.
Now you might recognize these as the same commands that you used when you were performing a manual WordPress installation, and that's the case; we're running the same manual installation process as part of setting up our Animals for Life AMI.
So you're going to need all of these commands, but as you've already experienced them in the previous demo lesson, I'm going to run through them a lot quicker this time.
So go back to the AWS console; we need to move to the EC2 area of the console, so click on the Services dropdown, type EC2 into the search box, and then open that in a new tab.
Once you're there, go ahead and click on Running Instances and close down any dialogues about console changes; we want to maximize the amount of screen space that we have.
We're going to connect to this A4L public EC2 instance.
This is the instance that we're going to use to create our AMI, so we're going to set the instance up manually, how we want it to be, and then we're going to use it to generate an AMI.
So we need to connect to this instance: right click, select Connect, and we're going to use EC2 Instance Connect to do the work within our browser, so make sure the username is ec2-user and then connect to this instance.
Once connected, we're going to run through the commands to install WordPress really quickly.
We're going to start again by setting the variables that we'll use throughout the installation, so you can just go ahead and copy and paste those straight in and press Enter.
Now we're going to run through all of the next set of commands really quickly because you used them in the previous demo lesson.
First we're going to install the MariaDB server, Apache and the Wget utility.
While that's installing, copy all of the commands from step 3; these are the commands which enable and start Apache and MariaDB.
Go ahead and paste all four of those in and press Enter, and now Apache and MariaDB are both set to start when the instance boots, as well as being started right now.
I'll just clear the screen to make this easier to see.
Next we're going to set the DB root password; again, that's this command, using the contents of the variable that you set at the start.
Next we download WordPress.
Once it's downloaded, we move into the web root folder, we extract the download, and we copy the files from within the WordPress folder that we've just extracted into the current folder, which is the web root.
Once we've done that, we remove the WordPress folder itself and then tidy up by deleting the download.
I'm going to clear the screen.
We copy the template configuration file into its final file name, wp-config.php.
Then we're going to replace the placeholders in that file.
We're going to start with the database name, using the variable that you set at the start; next we're going to use the database user, which you also set at the start; and finally the database password.
And then we're going to set the ownership on all of these files to be the Apache user and the Apache group, and clear the screen.
Next we need to create the DB setup script that I demonstrated in the previous demo, so we need to run a collection of commands: the first to enter the create database command, the next one to enter the create user command and set that password, the next one to grant permissions on the database to that user, and then one to flush the permissions.
Then we need to run that script using the MySQL command line interface; that runs all of those commands and performs all of those operations, and then we tidy up by deleting that file.
Now at this point we've done the exact same process that we did in the previous demo; we've installed and set up WordPress.
And if everything's working okay, we can go back to the AWS console, click on Instances, select the running A4L public EC2 instance, and copy down its IP address; again, make sure you copy that down, don't click this link, and then open that in a new tab.
If everything's working as expected, you should see the WordPress installation dialogue.
Now this time, because we're creating an AMI, we don't want to perform the installation; we want to make sure that when anyone uses this AMI, they're also greeted with this installation dialogue.
So we're going to leave this at this point; we're not going to perform the installation.
Instead, we're going to go back to the EC2 instance.
Now because this EC2 instance is for the Animals for Life business, we want to customize it and make sure that everybody knows that this is an Animals for Life EC2 instance.
To do that, we're going to install an animal themed utility called Cowsay.
I'm going to clear the screen to make it easier to see, and then, just to demonstrate exactly what Cowsay does, I'm going to run cowsay "oh hi".
If all goes well, we see a cow using ASCII art saying the "oh hi" message that we just typed.
So we're going to use this to create a message of the day welcome shown when anyone connects to this EC2 instance.
To do that, we're going to create a file inside the configuration folder of this EC2 instance, so we're going to use sudo nano to create this file: /etc/update-motd.d/40-cow.
This is the file that's going to be used to generate the output when anyone logs in to this EC2 instance.
So we're going to copy in these two lines and then press Enter; this means that when anyone logs in to the EC2 instance, they're going to get an animal themed welcome.
So use Ctrl+O to save that file and Ctrl+X to exit, and clear the screen to make it easier to see.
We're going to make sure that the file we've just edited has the correct permissions, then we're going to force an update of the message of the day; this is going to be what's displayed when anyone logs in to this instance.
And then finally, now that we've completed this configuration, we're going to reboot this EC2 instance, so we're going to use this command to reboot it.
And just to illustrate how this works, I'm going to close down that tab and return to the EC2 console and give this a few moments to restart.
That should have rebooted by now, so we're going to select it, right click, go to Connect, and again use EC2 Instance Connect.
Assuming everything's working, now when we connect to the instance we'll see an animal themed login banner.
So this is just a nice way that we can ensure that anyone logging in to this instance understands a) that it uses the Amazon Linux 2 AMI, and b) that it belongs to Animals for Life.
So we've created this instance using the Amazon Linux 2 AMI, we've performed the WordPress installation and initial configuration, we've customized the banner, and now we're going to use this as our template instance to create our AMI that can then be used to launch other instances.
Okay, so this is the end of part one of this lesson.
It was getting a little bit on the long side, and so I wanted to add a break; it's an opportunity just to take a rest or grab a coffee.
Part two will be continuing immediately from the end of part one, so go ahead, complete the video, and when you're ready, join me in part two.
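As a footnote to part one, the message of the day customization can be sketched as a small script. This is a minimal sketch: the exact banner text is an assumption (the lesson doesn't spell out the two lines), and it writes to a temporary directory instead of /etc/update-motd.d so it runs without root:

```shell
# Write the motd script to a temp directory instead of /etc/update-motd.d,
# so this sketch can run without root.
MOTD_DIR=$(mktemp -d)

# Assumed banner text - the real lesson file lives at /etc/update-motd.d/40-cow.
cat > "$MOTD_DIR/40-cow" <<'EOF'
#!/bin/sh
cowsay "Amazon Linux 2 AMI - Animals for Life"
EOF

# Ensure the script is executable, as the lesson's permissions step does.
chmod 755 "$MOTD_DIR/40-cow"

# On the real instance you would then run: sudo update-motd
# followed by a reboot to see the banner at login.
ls -l "$MOTD_DIR/40-cow"
```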
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
So this is the folder containing the WordPress installation files.
Now there's one particular file that's really important, and that's the configuration file.
So there's a file called WP-config-sample, and this is actually the file that contains a template of the configuration items for WordPress.
So what we need to do is to take this template and change the file name to be the proper file name, so wp-config.php.
So we're going to create a copy of this file with the correct name.
And to do that, we run this command.
So we're copying the template or the sample file to its real file name, so wp-config.php.
And this is the name that WordPress expects when it initially loads its configuration information.
So run that command, and that now means that we have a live config file.
Now this command isn't in the instructions, but if I just take a moment to open up this file, you don't need to do this.
I'm just demonstrating what's in this file for your benefit.
But if I run sudo nano wp-config.php, this is how the file looks.
So this has got all the configuration information in.
So it stores the database name, the database user, the database host, and lots of other information.
Now notice how it has some placeholders.
So this is where we would need to replace the placeholders with the actual configuration information.
So the database name itself, the host name, the database username, the database password, all that information would need to be replaced.
Now we're not going to type this in manually, so I'm going to control X to exit out of this, and then clear the screen again to make it easy to see.
We're going to use the Linux utility sed, or S-E-D.
And this is a utility which can perform a search and replace within a text file.
It's actually much more complex and capable than that.
It can perform many different manipulation operations.
But for this demonstration, we're going to use it as a simple search and replace.
Now we're going to do this a number of times.
First, we're going to run this command, which is going to replace this placeholder.
Remember, this is one of the placeholders inside the configuration file that I've just demonstrated, wp-config.
We're going to replace the placeholder here with the contents of the variable name, dbname, that we set at the start of this demo.
So this is going to replace the placeholder with our actual database name.
So I'm going to enter that so you can do the same.
We're going to run the sed command again, but this time it's going to replace the username placeholder with the dbuser variable that we set at the start of this demo.
So use that command as well.
And then lastly, it will do the same for the database password.
So type or copy and paste this command and press enter.
And that now means that this wp-config has the actual configuration information inside.
And just to demonstrate that, you don't need to do this part.
I'll just do it to demonstrate.
If I edit this file again, you'll see that all of these placeholders have actually been replaced with actual values.
So I'm going to control X out of that and then clear the screen.
And that concludes the configuration for the WordPress application.
So now it's ready.
Now it knows how to communicate with the database.
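The sed search-and-replace steps just described can be re-created as a self-contained example. The placeholder names match WordPress's wp-config-sample.php, but the database values below are example values, not the ones used in the lesson:

```shell
# Example values - the lesson uses variables set earlier in the demo.
DBName="a4lwordpressdb"
DBUser="a4lwordpressuser"
DBPassword="example-password"

# A tiny stand-in for wp-config.php containing the three placeholders:
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
define( 'DB_NAME', 'database_name_here' );
define( 'DB_USER', 'username_here' );
define( 'DB_PASSWORD', 'password_here' );
EOF

# Replace each placeholder with the contents of the matching variable,
# exactly as the lesson's sed commands do against the real file:
sed -i "s/'database_name_here'/'$DBName'/" "$cfg"
sed -i "s/'username_here'/'$DBUser'/" "$cfg"
sed -i "s/'password_here'/'$DBPassword'/" "$cfg"

cat "$cfg"
```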
What we need to do to finish off the configuration though is just to make sure that the web server has access to all of the files within this folder.
And to do that, we use this command.
So we're using the chown command to set the ownership of all of the files in this folder, and any subfolders, to the Apache user and the Apache group.
And the Apache user and Apache group belong to the web server.
So this just makes sure that the web server is able to access and control all of the files in the web root folder.
So run that command and press enter.
And that concludes the installation part of the WordPress application.
There's one final thing that we need to do and that's to create the database that WordPress will use.
So I'm going to clear the screen to make it easy to see.
Now what we're going to do in order to configure the database is we're going to make a database setup script.
We're going to put this script inside the forward slash TMP folder and we're going to call it DB.setup.
So what we need to do is enter the commands into this file that will create the database.
After the database is created, it needs to create a database user and then it needs to grant that user permissions on that database.
Now again, instead of manually entering this, we're going to use those variable names that were created at the start of the demo.
So we're going to run a number of commands.
These are all in the lesson commands document.
The first one is this.
So this echoes this text and because it has a variable name in it, this variable name will be replaced by the actual contents of the variable.
Then it's going to take this text with the replacement of the contents of this variable and it's going to enter that into this file.
So /tmp/DB.setup.
So run that and that command is going to create the WordPress database.
Then we're going to use this command and this is the same so it echoes this text but it replaces these variable names with the contents of the variables.
This is going to create our WordPress database user.
It's going to set its password and then it's going to append this text to the DB setup file that we're creating.
Now all of these are actually database commands that we're going to execute within the MariaDB database.
So enter that to add that line to DB.setup.
Then we have another line which uses the same architecture as the ones above it.
It echoes the text.
It replaces these variable names with the contents and then outputs that to this DB.setup file and this command grants our database user permissions to our WordPress database.
And then the last command is this one which just flushes the privileges and again we're going to add this to our DB.setup script.
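Put together, the four script-building commands follow this pattern. This is a sketch only: the variable names and values here are illustrative, and the real ones come from this lesson's commands document.

```shell
# Illustrative values; the real ones are set earlier from the lesson
# commands document.
DBName='a4lwordpress'
DBUser='a4lwordpress'
DBPassword='example-password'

# Each echo expands the variables and appends a database command to the
# DB.setup script (the first uses > to start the file fresh).
echo "CREATE DATABASE $DBName;" > /tmp/DB.setup
echo "CREATE USER '$DBUser'@'localhost' IDENTIFIED BY '$DBPassword';" >> /tmp/DB.setup
echo "GRANT ALL PRIVILEGES ON $DBName.* TO '$DBUser'@'localhost';" >> /tmp/DB.setup
echo "FLUSH PRIVILEGES;" >> /tmp/DB.setup

# The finished script is later fed to MariaDB with something like:
# mysql -u root -p"$DBRootPassword" < /tmp/DB.setup
```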
So now I'm just going to cat the contents of this file so you can just see exactly what it looks like.
So cat /tmp/DB.setup.
So as you'll see it's replaced all of these variable names with the actual contents.
So this is what the contents of this script actually looks like.
So these are commands which will be run by the MariaDB database platform.
To run those commands we use this.
So this is the MySQL command line interface.
So we're using MySQL to connect to the MariaDB database server.
We're using the username of root.
We're passing in the password and then using the contents of the DB root password variable.
And then once we authenticate to the database we're passing in the contents of our DB.setup script.
And so this means that all of the lines of our DB.setup script will be run by the MariaDB database and this will create the WordPress database, the WordPress user and configure all of the required permissions.
So go ahead and press enter.
That command is run by the MariaDB platform and that means that our WordPress database has been successfully configured.
And then lastly just to keep things secure because we don't want to leave files laying around on the file system with authentication information inside.
We're just going to run this command to delete this DB.setup file.
Okay, so that concludes the setup process for WordPress.
It's been a fairly long intensive process but that now means that we have an installation of WordPress on this EC2 instance, a database which has been installed and configured.
So now what we can do is to go back to the AWS console, click on instances.
We need to select the A4L-PublicEC2 and then we need to locate its IP address.
Now make sure that you don't use this open address link because this will attempt to open the IP address using HTTPS and we don't have that configured on this WordPress instance.
Instead, just copy the IP address into your clipboard and then open that in a new tab.
If everything's successful, you should see the WordPress installation dialog and just to verify this is working successfully, let's follow this process through.
So pick English, United States for the language.
For the blog title, just put all the cats and then admin as the username.
You can accept the default strong password.
Just copy that into your clipboard so we can use it to log in in a second and then just go ahead and enter your email.
It doesn't have to be a correct one.
So I normally use test@test.com and then go ahead and click on install WordPress.
You should see a success dialog.
Go ahead and click on login.
Username will be admin, the password that you just copied into your clipboard and then click on login.
And there you go.
We've got a working WordPress installation.
We're not going to configure it in any detail but if you want to just check out that it works properly, go ahead and click on this all the cats at the top and then visit site and you'll be able to see a generic WordPress blog.
And that means you've completed the installation of the WordPress application and the database using a monolithic architecture on a single EC2 instance.
So this has been a slow process.
It's been manual and it's a process which is wide open for mistakes to be made at every point throughout that process.
Can you imagine doing this twice?
What about 10 times?
What about a hundred times?
It gets pretty annoying pretty quickly.
In reality, this is never done manually.
We use automation or infrastructure as code systems such as cloud formation.
And as we move through the course, you're going to get experience of using all of these different methods.
Now that we're close to finishing up the basics of VPC and EC2 within the course, things will start to get much more efficient quickly because I'm going to start showing you how to use many of the automation and infrastructure as code services within AWS.
And these are really awesome to use.
And you'll see just how much power is granted to an architect, a developer, or an engineer by using these services.
For now though, that is the end of this demo lesson.
Now what we're going to do is to clear up our account.
So we need to go ahead and clear all of this infrastructure that we've used throughout this demo lesson.
To do that, just move back to the AWS console.
If you still have the cloud formation tab open and move back to that tab, otherwise click on services and then click on cloud formation.
If you don't see it anywhere, you can use this box to search for it, select the WordPress stack, select delete, and then confirm that deletion.
And that will delete the stack, clear up all of the infrastructure that we've used throughout this demo lesson and the account will now be in the same state as it was at the start of this lesson.
So from this point onward in the course, we're going to start using automation.
Now there is a lesson coming up in a little while in this section of the course, where you're going to create an Amazon machine image which is going to contain a pre-baked copy of the WordPress application.
So as part of that lesson, you are going to be required to perform one more manual installation of WordPress, but that's going to be part of automating the installation.
So you'll start to get some experience of how to actually perform automated installations and how to design architectures which have WordPress as a component.
At this point though, that's everything I wanted to cover.
So go ahead, complete this video, and when you're ready, I look forward to you joining me in the next.
-
-
learn.cantrill.io
-
Welcome back and in this lesson we're going to be doing something which I really hate doing and that's using WordPress in a course as an example.
Joking aside though WordPress is used in a lot of courses as a very simple example of an application stack.
The problem is that most courses don't take this any further.
But in this course I want to use it as one example of how an application stack can be evolved to take advantage of AWS products and services.
What we're going to be using WordPress for in this demo is to give you experience of how a manual installation of a typical application stack works in EC2.
We're going to be doing this so you can get the experience of how not to do things.
My personal belief is that to fully understand the advantages that automation features within AWS provide, you need to understand what a manual installation is like and what problems you can experience doing that manual installation.
As we move through the course we can compare this to various different automated ways of installing software within AWS.
So you're going to get the experience of bad practices, good practices and the experience to be able to compare and contrast between the two.
By the end of this demonstration you're going to have a working WordPress site but it won't have any high availability because it's running on a single EC2 instance.
It's going to be architecturally monolithic with everything running on the one single instance.
In this case that means both the application and the database.
The design is fairly straightforward.
It's just the Animals for Life VPC.
We're going to be deploying the WordPress application into a single subnet, the WebA public subnet.
So this subnet is going to have a single EC2 instance deployed into it and then you're going to be doing a manual install onto this instance and the end result is a working WordPress installation.
At this point it's time to get started and implement this architecture.
So let's go ahead and switch over to our AWS console.
To get started with this demo lesson you're going to need to do a few preparation steps.
First just make sure that you're logged in to the general AWS account, so the management account of the organization and as always make sure you have the Northern Virginia region selected.
Now attached to this lesson is a one-click deployment for the base infrastructure that we're going to use.
So go ahead and open the one-click deployment link that's attached to this lesson.
That link is going to take you to the Quick Create Stack screen.
Everything should be pre-populated.
The stack name should be WordPress.
All you need to do is scroll down towards the bottom, check this capabilities box and then click on Create Stack.
And this stack is going to need to be in a Create Complete state before we move on with the demo lesson.
So go ahead and pause this video, wait for the stack to change to Create Complete and then we're good to continue.
Also attached to this lesson is a Lessons Command document which lists all of the commands that you'll be using within the EC2 instance throughout this demo lesson.
So go ahead and open that as well.
So that should look something like this and these are all of the commands that we're going to be using.
So these are the commands that perform a manual WordPress installation.
Now that that stack's completed and we've got the Lesson Commands document open, the next step is to move across to the EC2 console because we're going to actually install WordPress manually.
So click on the Services drop-down and then locate EC2 in this All Services part of the screen.
If you've recently visited it, it should be in the Recently Visited section under Favorites or you can go ahead and type EC2 in the search box and then open that in a new tab.
And then click on Instances running and you should see one single instance which is called A4L-PublicEC2.
Go ahead and right-click on this instance.
This is the instance we'll be installing WordPress within.
So right-click, select Connect.
We're going to be using our browser to connect to this instance, so we'll be using Instance Connect. Just verify that the username is ec2-user and then go ahead and connect to this instance.
Now again, I fully understand that a manual installation of WordPress might seem like a waste of time but I genuinely believe that you need to understand all the problems that come from manually installing software in order to understand the benefits which automation provides.
It's not just about saving time and effort.
It's also about error reduction and the ability to keep things consistent.
Now I always like to start my installations or my scripts by setting variables which will store the configuration values that everything from that point forward will use.
So we're going to create four variables.
One for the database name, one for the database user, one for the database password and then one for the root or admin password of the database server.
So let's start off by using the pre-populated values from the Lesson Commands document.
So that's all of those variables set and we can confirm that those are working by typing echo and then a space and then a dollar and then the name of one of those variables.
So for example, dbname and press Enter and that will show us the value stored within that variable.
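As a sketch, that variable-setting stage looks something like this. The names and values here are illustrative only; the real ones come from the lesson commands document attached to this lesson.

```shell
# Illustrative values; use the ones from the lesson commands document.
DBName='a4lwordpress'          # name of the WordPress database
DBUser='a4lwordpress'          # database user WordPress will connect as
DBPassword='example-password'  # password for that user
DBRootPassword='example-root-password'  # root/admin password for MariaDB

# Confirm a variable is set by echoing it with a leading $
echo $DBName
```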
So now we can use these at later points of the installation.
So at this point I'm going to clear the screen to keep it easy to see, and stage two of this installation process is to install some system software.
So there are a few things that we need to install in order to allow a WordPress installation.
We'll install those using the DNF package manager.
We need to give it admin privileges, which is why we use sudo, and then the packages that we're going to install are the database server, which is mariadb-server, the Apache web server, which is httpd, and then a utility called wget which we're going to use to download further components of the installation.
So go ahead and type or copy and paste that command and press Enter and that installation process will take a few moments and it will go through installing that software and any of the prerequisites.
They're done so I'll clear the screen to keep this easy to read.
Now that all those packages are installed we need to start both the web server and the database server and ensure that both of them are started if ever the machine is restarted.
So to do that we need to enable and start those services.
So enabling and starting means that both of the services are started right now and that they'll start if the machine reboots.
So first we'll use this command.
So we're using admin privileges again, systemctl, which allows us to start and stop system processes, and then we use enable and then httpd which is the web server.
So type and press enter and that ensures that the web server is enabled.
We need to run the same command again but this time specifying MariaDB to ensure that the database server is enabled.
So type or copy and paste and press enter.
So that means both of those processes will start if ever the instance is rebooted and now we need to manually start both of those so they're running and we can interact with them.
So we need to use the same structure of command but instead of enable we need to start both of these processes.
So first the web server and then the database server.
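The whole install, enable and start sequence from this stage can be captured as a small reference script. It's written out to a file here rather than executed, because it needs root and a systemd-based system; the MariaDB package name is an assumption (mariadb105-server is what Amazon Linux 2023 uses, but check the lesson commands document for your exact command).

```shell
# Reference script only; run the commands from the lesson commands document
# on the actual EC2 instance.
cat > /tmp/install-webstack.sh <<'EOF'
#!/bin/bash
# Install the database server, web server and wget (Amazon Linux 2023 / dnf;
# package names may differ on other distributions)
sudo dnf -y install mariadb105-server httpd wget
# Enable both services so they start on every boot...
sudo systemctl enable httpd
sudo systemctl enable mariadb
# ...and start them right now so we can interact with them.
sudo systemctl start httpd
sudo systemctl start mariadb
EOF
chmod +x /tmp/install-webstack.sh
```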
So that means the EC2 instance now has a running web and database server, both of which are required for WordPress.
So I'll clear the screen to keep this easy to read.
Next we're going to move to stage 4 and stage 4 is that we need to set the root password of the database server.
So this is the username and password that will be used to perform all of the initial configuration of the database server.
Now we're going to use this command and you'll note that for password we're actually specifying one of the variables that we configured at the start of this demo.
So we're using the DB root password variable that we configured right at the start.
So go ahead and copy and paste or type that in and press enter and that sets the password for the root user of this database platform.
The next step which is step 5 is to install the WordPress application files.
Now to do that we need to install these files inside what's known as the web root.
So whenever you browse to a web server either using an IP address or a DNS name if you don't specify a path so if you just use the server name for example netflix.com then it loads those initial files from a folder known as the web root.
Now on this particular server the web root is stored in /var/www/html so we need to download WordPress into that folder.
Now we're going to use this command Wget and that's one of the packages that we installed at the start of this lesson.
So we're giving it admin privileges and we're using wget to download latest.tar.gz from wordpress.org and then we're putting it inside this web root.
So /var/www/html.
So go ahead and copy and paste or type that in and press enter.
That'll take a few moments depending on the speed of the WordPress servers and that will store latest.tar.gz in that web root folder.
Next we need to move into that folder, so cd /var/www/html, and press enter.
We need to use a Linux utility called tar to extract that file.
So sudo and then tar and then the command line options -zxvf and then the name of the file, so latest.tar.gz. Copy and paste or type that in and press enter and that will extract the WordPress download into this folder.
So now if we do an ls -la you'll see that we have a WordPress folder and inside that folder are all of the application files.
Now we actually don't want them inside a WordPress folder.
We want them directly inside the web root.
So the next thing we're going to do is this command and this is going to copy all of the files from inside this WordPress folder to . and . represents the current folder.
So it's going to copy everything inside WordPress into the current working directory which is the web root directory.
So enter that and that copies all of those files.
And now if we do another listing you'll see that we have all of the WordPress application files inside the web root.
And then lastly for the installation part we need to tidy up the mess that we've made.
So we need to delete this WordPress folder and the download file that we just created.
So to do that we'll run an rm -r and then wordpress to delete that folder.
And then we'll delete the download with sudo rm and then a space and then the name of the file.
So latest.tar.gz.
And that means that we have a nice clean folder.
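The whole download, extract, copy and tidy sequence can be sketched like this. It's simulated under /tmp with a locally built archive so it can run anywhere; the actual lesson uses sudo, wget against wordpress.org, and /var/www/html as the web root.

```shell
# Stand-in for the wget download: build a small archive locally.
mkdir -p /tmp/webroot/wordpress
echo '<?php // placeholder' > /tmp/webroot/wordpress/index.php
tar -czf /tmp/webroot/latest.tar.gz -C /tmp/webroot wordpress

cd /tmp/webroot
tar -zxvf latest.tar.gz   # extract the archive into the web root
cp -rp wordpress/. .      # copy the contents up into the web root itself
rm -r wordpress           # tidy: remove the extracted folder
rm latest.tar.gz          # tidy: remove the download itself
ls -la                    # the application files now sit in the web root
```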
So I'll clear the screen to make it easy to see.
And then I'll just do another listing.
Okay so this is the end of part one of this lesson.
It was getting a little bit on the long side and so I wanted to add a break.
It's an opportunity just to take a rest or grab a coffee.
Part two will be continuing immediately from the end of part one.
So go ahead complete the video and when you're ready join me in part two.
Welcome back and in this video we're going to interact with instance store volumes.
Now this part of the demo does come at a cost.
This isn't inside the free tier because we're going to be launching some instances which are fairly large and are not included in the free tier.
The demo has a cost of approximately 13 cents per hour and so you should only do this part of the demo if you're willing to accept that cost.
If you don't want to accept those costs then you can go ahead and watch me perform these within my test environment.
So to do this we're going to go ahead and click on instances and we're going to launch an instance manually.
So I'm going to click on launch instances.
We're going to name the instance, Instance Store Test so put that in the name box.
Then scroll down, pick Amazon Linux, make sure Amazon Linux 2023 is selected and the architecture needs to be 64 bit x86.
Scroll down and then in the instance type box click and we need to find a different type of instance.
This is going to be one that supports instance store volumes.
So scroll down and we're looking for m5dn.large.
This is a type of instance which includes one instance store volume.
So select that then scroll down a little bit more and under key pair click in the box and select proceed without a key pair not recommended.
Scroll down again and under network settings click on edit.
Click in the VPC drop down and select a4l-vpc1.
Under subnet make sure sn-web-a is selected.
Make sure enabled is selected for both of the auto assign public IP drop downs.
Then we're going to select an existing security group: click the drop down and select the EBS demo instance security group.
It will have some random after it but that's okay.
Then scroll down and under storage we're going to leave all of the defaults.
What you are able to do though is to click on show details next to instance store volumes.
This will show you the instance store volumes which are included with this instance.
You can see that we have one instance store volume it's 75 GB in size and it has a slightly different device name.
So /dev/nvme0n1.
Now all of that looks good so we're just going to go ahead and click on launch instance.
Then click on view all instances and initially it will be in a pending state and eventually it will move into a running state.
Then we should probably wait for the status check column to change from initializing to 2 out of 2.
Go ahead and pause the video and wait for this status check to change to be fully green.
It should show 2 out of 2 status checks.
That's now in a running state with 2 out of 2 checks so we can go ahead and connect to this instance.
Before we do though, just go ahead and select the instance and note the instance's public IPv4 address.
Now this address is really useful because it will change if the EC2 instance moves between EC2 hosts.
So it's a really easy way that we can verify whether this instance has moved between EC2 hosts.
So just go ahead and note down the IP address of the instance that you have if you're performing this in your own environment.
We're going to go ahead and connect to this instance though so right click, select connect, we'll be choosing instance connect, go ahead and connect to the instance.
Now many of these commands that we'll be using should by now be familiar.
Just refer back to the lesson commands document if you're unsure because we'll be using all of the same commands.
First we need to list all of the block devices which are attached to this instance and we can do that with lsblk.
This time it looks a little bit different because we're using instance store rather than EBS additional volumes.
So in this particular case I want you to look for the 8G volume so this is the root volume.
This represents the boot or root volume of the instance.
Remember that this particular instance type came with a 75GB instance store volume so we can easily identify it's this one.
Now to check that we can verify whether there's a file system on this instance store volume.
If we run this command, so the same command we've used previously, so sudo file -s and then the ID of this volume, so /dev/nvme1n1, you'll see it reports data.
And if you recall from the previous parts of this demo series this indicates that there isn't a file system on this volume.
We're going to create one and to do that we use this command again it's the same command that we've used previously just with the new volume id.
So press enter to create a file system on this raw block device this instance store volume and then we can run this command again to verify that it now has a file system.
To mount it we can follow the same process that we did in the earlier stages of this demo series.
We'll need to create a directory for this volume to be mounted into, and this time we'll call it /instancestore.
So create that folder and then we're going to mount this device into that folder, so sudo mount, then the device ID, and then the mount point or the folder that we've previously created.
So press enter and that means that this block device this instance store volume is now mounted into this folder.
And if we run a df -k and press enter you can see that it's now mounted.
Now we're going to move into that folder by typing cd /instancestore, and to keep things efficient we're going to create a file called instancestore.txt.
And rather than using an editor we'll just use sudo touch and then the name of the file and this will create an empty file.
If we do an ls -la and press enter you can see that that file exists.
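Those prepare-and-mount steps, consolidated, look like the reference script below. It's written to a file here rather than run, since the commands need root; the device name /dev/nvme1n1 and the mount point /instancestore match this demo, but always check the device name against your own lsblk output.

```shell
# Reference script only; run these on the actual instance after confirming
# the device name with lsblk.
cat > /tmp/mount-instance-store.sh <<'EOF'
#!/bin/bash
sudo file -s /dev/nvme1n1        # "data" means no file system exists yet
sudo mkfs -t xfs /dev/nvme1n1    # create an XFS file system on the raw volume
sudo mkdir -p /instancestore     # folder to mount the volume into
sudo mount /dev/nvme1n1 /instancestore
df -k                            # verify the mount
EOF
chmod +x /tmp/mount-instance-store.sh
```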
So now that we have this file stored on a file system which is running on this instance store volume let's go ahead and reboot this instance.
Now we need to be careful we're not going to stop and start the instance we're going to restart the instance.
Restarting is different than stop and start.
So to do that we're going to close this tab move back to the ec2 console so click on instances right click on instance store test and select reboot instance and then confirm that.
Note what this IP address is before you initiate the reboot operation and then just give this a few minutes to reboot.
Then right click and select connect.
Using instance connect go ahead and connect back to the instance.
And again if it appears to hang at this point then you can just wait for a few moments and then connect again.
But in this case I've left it long enough and I'm connected back into the instance.
Now once I'm back in the instance, if I run a df -k and press enter, note how that file system is not mounted after the reboot.
Now that's fine because we didn't configure the Linux operating system to mount this file system when the instance is restarted.
But what we can do is do an lsblk again to list the block devices.
We can see that it's still there and we can manually mount it back in the same folder as it was before the reboot.
To do that we run this command.
So it's mounting the same volume ID the same device ID into the same folder.
So go ahead and run that command and press enter.
Then if we use cd /instancestore and press enter and then do an ls -la, we can see that this file is still there.
Now the file is still there because instance store volumes do persist through the restart of an EC2 instance.
Restarting an EC2 instance does not move the instance from one EC2 host to another.
And because instance store volumes are directly attached to an EC2 host this means that the volume is still there after the machine has restarted.
Now we're going to do something different though.
Close this tab down.
Move back to instances.
Again pay special attention to this IP address.
Now we're going to right click and stop the instance.
So go ahead and do that and confirm it if you're doing this in your own environment.
Watch this public IP v4 address really carefully.
We'll need to wait for the instance to move into a stopped state, which it has, and if we select the instance note how the public IPv4 address has been unallocated.
So this instance is now not running on an EC2 host.
Let's right click.
Go to start instance and start it up again.
We need to give that a few moments again.
It'll move into a running state, but notice how the public IPv4 address has changed.
This is a good indication that the instance has moved from one EC2 host to another.
So let's give this instance a few moments to start up.
And once it has right click, select connect and then go ahead and connect to the instance using instance connect.
Once connected go ahead and run an lsblk and press enter and you'll see it appears to have the same instance store volume attached to this instance.
It's using the same ID and it's the same size.
But let's go ahead and verify the contents of this device using this command.
So sudo file -s and then the device ID of the instance store volume.
So press enter, and note how it shows data.
So even though we created a file system in the previous step after we've stopped and started the instance, it appears this instance store volume has no data.
Now the reason for that is when you restart an EC2 instance, it restarts on the same EC2 host.
But when you stop and start an EC2 instance, which is a distinctly different operation, the EC2 instance moves from one EC2 host to another.
And that means that it has access to completely different instance store volumes than it did on that previous host.
It means that all of the data, so the file system and the test file that we created on the instance store volume, before we stopped and started this instance, all of that is lost.
When you stop and start an EC2 instance, or when anything else causes the instance to move from one host to another, all of that data is lost.
So instance store volumes are ephemeral.
They're not persistent and you can't rely on them to keep your data safe.
And it's really important that you understand that distinction.
If you're doing the developer or sysop streams, it's also important that you understand the difference between an instance restart, which keeps the same EC2 host, and a stop and start, which moves an instance from one host to another.
The former means you're likely to keep your data, but the latter means you're guaranteed to lose your data when using instance store volumes.
EBS on the other hand, as we've seen, is persistent and any data persists through the lifecycle of an EC2 instance.
Now with that being said, though, that's everything that I wanted to demonstrate within this series of demo lessons.
So let's go ahead and tidy up the infrastructure.
Close down this tab, click on instances.
If you follow this last part of the demo in your own environment, go ahead and right click on the instance store test instance and terminate that instance.
That will delete it along with any associated resources.
We'll need to wait for this instance to move into the terminated state.
So give that a few moments.
Once that's terminated, go ahead and click on services and then move back to the cloud formation console.
You'll see the stack that you created using the one click deploy at the start of this lesson.
Go ahead and select that stack, click on delete and then delete stack.
And that's going to put the account back in the same state as it was at the start of this lesson.
So it will remove all of the resources that have been created.
And at that point, that's the end of this demo series.
So what did you learn?
You learned that EBS volumes are created within one specific availability zone.
EBS volumes can be mounted to instances in that availability zone only and can be moved between instances while retaining their data.
You can create a snapshot from an EBS volume which is stored in S3 and that data is replicated within the region.
And then you can use snapshots to create volumes in different availability zones.
I told you how snapshots can be copied to other AWS regions either as part of data migration or disaster recovery and you learned that EBS is persistent.
You've also seen in this part of the demo series how instance store volumes can be used.
They are included with many instance types, but if the instance moves between EC2 hosts, so if an instance is stopped and then started, or if an EC2 host has hardware problems, then that EC2 instance will be moved between hosts and any data on any instance store volumes will be lost.
So that's everything that you needed to know in this demo lesson and you're going to learn much more about EC2 and EBS in other lessons throughout the course.
At this point though, thanks for watching and doing this demo.
I hope it was useful but go ahead complete this video and when you're ready I look forward to you joining me in the next.
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
We just need to give this a brief moment to perform that reboot.
So just wait a couple of moments and once you have, right click again and select Connect.
We're going to use EC2 instance connect again.
Make sure the user's correct and then click on Connect.
Now, if it doesn't immediately connect you to the instance, if it appears to have frozen for a couple of seconds, that's fine.
It just means that the instance hasn't completed its restart.
Wait for a brief while longer and then attempt another connect.
This time you should be connected back to the instance and now we need to verify whether we can still see our volume attached to this instance.
So do a df -k and press Enter and you'll note that you can't see the file system.
That's because before we rebooted this instance, we used the mount command to manually mount the file system on our EBS volume into the EBS test folder.
Now that's a manual process.
It means that while we could interact with that before the reboot, it doesn't automatically mount that file system when the instance restarts.
To do that, we need to configure it to auto-mount when the instance starts up.
So to do that, we need to get the unique ID of the EBS volume, which is attached to this instance.
And to get that, we run sudo blkid.
Now press Enter and that's going to list the unique identifier of all of the volumes attached to this instance.
You'll see the boot volume listed as /dev/xvda1 and the EBS volume that we've just attached listed as /dev/xvdf.
So we need the unique ID of the volume that we just added.
So that's the one next to xvdf.
So go ahead and select this unique identifier.
You'll need to make sure that you select everything between the quotation marks and then copy that into your clipboard.
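As a rough sketch of what that copy step extracts, here's how the UUID could be pulled out of a blkid-style line with sed. The device name and UUID below are invented examples, not values from this demo.

```shell
# Sample line in the style of `sudo blkid` output -- the UUID here is made up.
line='/dev/xvdf: UUID="a1b2c3d4-1111-2222-3333-444455556666" TYPE="xfs"'

# Extract just the value between the quotes after UUID=.
uuid=$(printf '%s\n' "$line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
echo "$uuid"
```

On a real instance, blkid can also print just the value directly with sudo blkid -s UUID -o value /dev/xvdf, which avoids the manual copy and paste.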
Next, we need to edit the FSTAB file, which controls which file systems are mounted by default.
So we're going to run sudo space nano, which is our editor, then a space, and then /etc/fstab, since /etc is the configuration directory on Linux, and press Enter.
And this is the configuration file for which file systems are mounted by our instance.
And we're going to add a similar line.
So first we need to type UUID, which is the unique identifier, and then the equals symbol.
And then we need to paste in that unique ID that we just copied to our clipboard.
Once that's pasted in, press Space.
This is the ID of the EBS volume, so the unique ID.
Next, we need to provide the place where we want that volume to be mounted.
And that's the folder we previously created, which is /ebstest.
Then a space, we need to tell the OS which file system is used, which is xfs, and then a space.
And then we need to give it some options.
You don't need to understand what these do in detail.
We're going to use defaults, then a comma, and then nofail, so the whole thing reads defaults,nofail.
So once you've entered all of that, press Ctrl+O to save that file, and Enter, and then Ctrl+X to exit.
Now this will be mounted automatically when the instance starts up, but we can force that process by typing sudo mount -a.
And this will perform a mount of all of the volumes listed in the fstab file.
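Putting those pieces together, the line added to /etc/fstab has the shape shown below. The UUID is a made-up example, and this sketch writes the line to a temporary file rather than the real /etc/fstab.

```shell
# Made-up UUID for illustration -- use the value from blkid on your own instance.
uuid="a1b2c3d4-1111-2222-3333-444455556666"

# Field order: device, mount point, file system type, mount options.
fstab_line="UUID=${uuid} /ebstest xfs defaults,nofail"

# Written to a temp file here; on the instance this line goes into /etc/fstab.
echo "$fstab_line" > /tmp/fstab-example
cat /tmp/fstab-example
```

The nofail option is worth noting: it means the instance will still boot even if this volume is missing or fails to attach, rather than hanging at startup.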
So go ahead and press Enter.
Now if we do a df -k and press Enter, you'll see that our EBS volume once again is mounted within the /ebstest folder.
So I'm going to clear the screen, then I'm going to move into that folder, press Enter, and then do an ls -la, and you'll see that our amazingtestfile.txt still exists within this folder.
And that shows that the data on this file system is persistent and available even after we reboot this EC2 instance, and that's different from instance store volumes, which I'll be demonstrating later on.
At this point, we're going to shut down this instance because we won't be needing it anymore.
So close down this tab, click on Instances, right-click on instance one-AZA, and then select Stop Instance.
You'll need to confirm it, refresh that and wait for it to move into a stopped state.
Once it has stopped, go down and click on Volumes, select the EBS test volume, right-click and detach it.
We're going to detach this volume from the instance that we've just stopped.
You'll need to confirm that, and that will begin the process and it will detach that volume from the instance, and this demonstrates how EBS volumes are completely separate from EC2 instances.
You can detach them and then attach them to other instances, keeping the data that's on that volume.
Just keep refreshing.
We need to wait for that to move into an available state, and once it has, we're going to right-click, select Attach Volume, click inside the instance box, and this time, we're going to select instance two-AZA.
It should be the only one listed now in a running state.
So select that and click on Attach.
Just refresh that and wait for that to move into an in-use state, which it is, then move back to instances, and we're going to connect to the instance that we just attached that volume to.
So select instance two-AZA, right-click, select Connect, and then connect to that instance.
Once we've connected to that instance, remember, this is an instance that hasn't interacted with this EBS volume before.
So this instance has no existing configuration for this EBS volume, and if we do a df -k, you'll see that this volume is not mounted on this instance.
What we need to do is run lsblk, and this will list all of the block devices on this instance.
You'll see that it's still using XVDF because this is the device ID that we configured when attaching the volume.
Now, if we run this command, so sudo file -s and then the device ID of this EBS volume, notice how it now shows a file system on this EBS volume, because we created it on the previous instance.
We don't need to go through all of the process of creating the file system because EBS volumes persist past the lifecycle of an EC2 instance.
You can interact with an EBS volume on one instance and then move it to another and the configuration is maintained.
We're going to follow the same process.
We're going to create a folder called /ebstest.
Then we're going to mount the EBS volume using the device ID into this folder.
We're going to move into this folder and then if we do an ls -la and press Enter, you'll see the test file that you created in the previous step.
It still exists and all of the contents of that file are maintained because the EBS volume is persistent storage.
So that's all I wanted to verify with this instance that you can mount this EBS volume on another instance inside the same availability zone.
At this point, close down this tab and then click on Instances and we're going to shut down this second EC2 instance.
So right-click and then select Stop Instance and you'll need to confirm that process.
Wait for that instance to change into a stop state and then we're going to detach the EBS volume.
So that's moved into the stopped state.
We can select Volumes, right-click on this EBSTEST volume, detach the volume and confirm that.
Now next, we want to mount this volume onto the instance that's in Availability Zone B, but we can't do that directly because EBS volumes are tied to one specific availability zone.
Now to allow that process, we need to create a snapshot.
Snapshots are stored on S3 and replicated between multiple availability zones in that region and snapshots allow us to take a volume in one availability zone and move it into another.
So right-click on this EBS volume and create a snapshot.
Under Description, just use EBSTESTSNAP and then go ahead and click on Create Snapshot.
Just close down any dialogues, click on Snapshots and you'll see that a snapshot is being created.
Now depending on how much data is stored on the EBS volume, snapshots can either take a few seconds or anywhere up to several hours to complete.
This snapshot is a full copy of all of the data that's stored on our original EBS volume.
But because the snapshot is stored in S3, it means that we can take this snapshot, right-click, create volume and then create a volume in a different availability zone.
Now you can change the volume type, the size and the encryption settings at this point, but we're going to leave everything the same and just change the availability zone from US-EAST-1A to US-EAST-1B.
So select 1B in availability zone, click on Add Tag.
We're going to give this a name to make it easier to identify.
For the value, we're going to use EBS Test Volume-AZB.
So enter that and then create the volume.
Close down any dialogues and at this point, what we're doing is using this snapshot which is stored inside S3 to create a brand new volume inside availability zone US-EAST-1B.
At this point, once the volume is in an available state, make sure you select the right one, then we can right-click, we can attach this volume and this time when we click in the instance box, you'll see the instance that's in availability zone 1B.
So go ahead and select that and click on Attach.
Once that volume is in use, go back to Instances, select the third instance, right-click, select Connect, choose Instance Connect, verify the username and then connect to the instance.
Now we're going to follow the same process with this instance.
So first, we need to list all of the attached block devices using lsblk.
You'll see the volume we've just created from that snapshot; it's using device ID xvdf.
We can verify that it's got a file system using the command that we've used previously, and it's showing an XFS file system.
Next, we create our folder which will be our mount point.
Then we mount the device into this mount point using the same command as we've used previously, move into that folder and then do a listing using ls -la, and you should see the same test file you created earlier; and if you cat this file, it should have the same contents.
This volume has the same contents because it's created from a snapshot that we created of the original volume and so its contents will be identical.
Go ahead and close down this tab to this instance, select instances, right click, stop this instance and then confirm that process.
Just wait for that instance to move into the stopped state.
We're going to move back to volumes, select the EBS test volume in availability zone 1B, detach that volume and confirm it and then just move to snapshots and I want to demonstrate how you have the option of right clicking on a snapshot.
You can copy the snapshot and choose a different region.
So as well as snapshots giving you the option of moving EBS volume data between availability zones, you can also use snapshots to copy data between regions.
Now I'm not going to do this process but I could select a different region, for example, Asia Pacific Sydney and copy that snapshot to the Sydney region.
But I won't actually run that copy here, because we'd just have to remember to clean it up afterwards.
That process is fairly simple and will allow us to copy snapshots between regions.
It might take some time again depending on the amount of data within that snapshot but it is a process that you can perform either as part of data migration or disaster recovery processes.
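For reference, the same cross-region copy can be done from the AWS CLI with ec2 copy-snapshot. The snapshot ID below is a hypothetical placeholder, and this sketch only assembles the command rather than running it, since executing it would need AWS credentials and create a billable snapshot copy.

```shell
# Hypothetical snapshot ID -- substitute the real one from your account.
src_region="us-east-1"
dst_region="ap-southeast-2"
snapshot_id="snap-0123456789abcdef0"

# Assemble the copy command; running it requires the AWS CLI and credentials.
cmd="aws ec2 copy-snapshot --source-region ${src_region} --source-snapshot-id ${snapshot_id} --region ${dst_region} --description EBSTESTSNAP-copy"
echo "$cmd"
```

Note that the command is run against the destination region (--region), pulling from the source region (--source-region), which mirrors what the console does when you pick a target region in the copy dialog.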
So go ahead and click on cancel and at this point we're just going to clear things up because this is the end of this first phase of this demo lesson.
So right click on this snapshot and just delete the snapshot and confirm that.
Then go to volumes, select the volume in US East 1A, right click, delete that volume and confirm.
Select the volume in US East 1B, right click, delete volume and confirm.
And that just means we've tidied up both of those EBS volumes within this account.
Now that's the end of this first stage of this set of demo lessons.
All the steps until this point have been part of the free tier within AWS.
What follows won't be part of the free tier.
We're going to be creating a larger instance size and this will have a cost attached, but I want to use it to demonstrate instance store volumes and how you can interact with them and some of their unique characteristics.
So I'm going to move into a new video and this new video will have an associated charge.
So you can either watch me perform the steps or you can do it within your own environment.
Now go ahead and complete this video and when you're ready, you can move on to the next video where we're going to investigate instance store volumes.
-
Welcome back.
In this video, we're going to be covering EBS encryption, something that's really important for the real world and for most AWS exams.
Now EBS volumes, as you know by now, are block storage devices presented over the network.
These volumes are stored in a resilient, highly available way inside an availability zone.
But at an infrastructure level, they're stored on one or more physical storage devices.
By default, no encryption is applied, so the data is persisted to disk exactly as the operating system writes it.
If you write a cat picture to a drive or a mount point inside your instance, the plain text of that cat picture is written to one or more raw disks.
Now this obviously adds risk and a potential physical attack vector for your business operations.
And EBS encryption helps to mitigate this risk.
EBS encryption provides at rest encryption for volumes and for snapshots.
So let's take a look at how it works architecturally.
EBS encryption isn't all that complex an architecture when you understand KMS, which we've already covered.
Without encryption, the architecture looks at a basic level like this.
So we have an EC2 host running in a specific availability zone.
And running on this host is an EC2 instance using an EBS volume for its boot volume.
And without any encryption, the instance generates data and this is stored on the volume in its plain text form.
So if you're storing any cats or chicken pictures on drives or mount points inside your EC2 instance, then by default, that plain text is stored at rest on the EBS volumes.
Now when you create an encrypted EBS volume initially, EBS uses KMS and a KMS key, which can either be the EBS default AWS managed key.
So this will be called aws/service-name, in this case aws/ebs, or it can be a customer managed KMS key that you create and manage.
That key is used by EBS when an encrypted volume is created.
Specifically, it's used to generate an encrypted data encryption key, known as a DEK.
And this occurs with the GenerateDataKeyWithoutPlaintext API call.
So you just get the encrypted data encryption key and this is stored with the volume on the raw storage.
It can only be decrypted using KMS and assuming that the entity doing so has permissions to decrypt the data encryption key using the corresponding KMS key.
Remember, initially, a volume is empty.
It's just an allocation of space, so there's nothing yet to encrypt.
When the volume is first used, either mounted on an EC2 instance by you or when an instance is launched, then EBS asks KMS to decrypt the data encryption key that's used just for this one volume.
And that key is loaded into the memory of the EC2 host which will be using it.
The key is only ever held in this decrypted form in memory on the EC2 host which is using the volume currently.
So the key is used by the host to encrypt and decrypt data between an instance and the EBS volume, specifically the raw storage that the EBS volume is stored on.
This means the data stored onto the raw storage used by the volume is ciphertext.
It's encrypted at rest.
Data only exists in an unencrypted form inside the memory of the EC2 host.
What's stored on the raw storage is the ciphertext version, the encrypted version of whatever data is written by the instance operating system.
Now when the EC2 instance moves from this host to another, the decrypted key is discarded, leaving only the encrypted version with the disk.
For that instance to use the volume again, the encrypted data encryption key needs to be decrypted and loaded into another EC2 host.
If a snapshot is made of an encrypted volume, the same data encryption key is used for that snapshot, meaning the snapshot is also encrypted.
Any volumes created from that snapshot are themselves also encrypted using the same data encryption key, and so they're also encrypted.
Now that's really all there is to the architecture.
It doesn't cost anything to use, so it's one of those things which you should really use by default.
Now I've covered the architecture in a little detail, and now I want to step through some really important summary points which will help you within the exam.
Now the exam tends to ask some pretty curveball questions around encryption, so I'm going to try and give you some hints on how to interpret and answer those.
AWS accounts can be configured to encrypt EBS volumes by default.
You can set the default KMS key to use for this encryption, or you can choose a KMS key to use manually each and every time.
The KMS key isn't used to directly encrypt or decrypt volumes, instead it's used to generate a per volume, unique data encryption key.
Now if you do make snapshots or create new volumes from those snapshots, then the same data encryption key is used, but for every single time you create a brand new volume from scratch, it uses a unique data encryption key.
So just to restress this because it's really important that data encryption key is used for that one volume, and any snapshots you take from that volume which are encrypted, and any future volumes created from that snapshot.
So that's really important to understand.
I'm going to stress it again.
I know you're getting tired of me saying this.
Every time you create an EBS volume from scratch, it uses a unique data encryption key.
If you create another volume from scratch, it uses a different data encryption key.
But if you take a snapshot of an existing encrypted volume, it uses the same data encryption key, and if you create any further EBS volumes from that snapshot, it also uses the same data encryption key.
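To make those key relationships concrete, here's a toy Python sketch of the envelope-encryption pattern just described. This is purely conceptual: the "KMS" here is a stand-in that wraps keys with XOR rather than real cryptography, and none of these function names are actual AWS APIs.

```python
import secrets

# Stands in for the KMS key; in real KMS this never leaves the service.
MASTER_KEY = secrets.token_bytes(32)

def xor(data: bytes, key: bytes) -> bytes:
    # Toy "encryption" for illustration only -- NOT real cryptography.
    return bytes(a ^ b for a, b in zip(data, key))

def generate_data_key_without_plaintext() -> bytes:
    """Toy analogue of GenerateDataKeyWithoutPlaintext: return only the wrapped DEK."""
    dek = secrets.token_bytes(32)   # a unique DEK per brand-new volume
    return xor(dek, MASTER_KEY)     # the wrapped DEK is what's stored with the volume

def decrypt_data_key(wrapped: bytes) -> bytes:
    """Toy analogue of KMS Decrypt: unwrap the DEK into EC2 host memory."""
    return xor(wrapped, MASTER_KEY)

# Two volumes created from scratch each get their own unique DEK.
volume_a = generate_data_key_without_plaintext()
volume_b = generate_data_key_without_plaintext()

# A snapshot of volume A, and any volume restored from it, reuse volume A's DEK.
snapshot_of_a = volume_a
volume_from_snapshot = snapshot_of_a

assert decrypt_data_key(volume_a) != decrypt_data_key(volume_b)
assert decrypt_data_key(volume_from_snapshot) == decrypt_data_key(volume_a)
```

The two assertions capture the exam-relevant rule: from-scratch volumes always get fresh DEKs, while the snapshot chain shares one.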
Now there's no way to remove the encryption from a volume or a snapshot.
Once it's encrypted, it's encrypted.
There are ways that you can manually work around this by cloning the actual data from inside an operating system to an unencrypted volume, but this isn't something that's offered from the AWS console, the CLI, or the APIs.
Remember, inside an operating system, it just sees plain text, and so this is the only way that you have access to the plain text data and can clone it to another unencrypted volume.
And that's another really important point to understand.
The OS itself isn't aware of any encryption.
To the operating system, it just sees plain text, because the encryption is happening between the EC2 host and the volume.
The data is encrypted using AES-256 between the EC2 host and the EBS system itself.
If you face any situations where you need the operating system to encrypt things, that's something that you'll need to configure on the operating system itself.
If you need the operating system to hold the keys, rather than EC2, EBS, and KMS, then you need to configure volume encryption within the operating system itself.
This is commonly called software disk encryption, and this just means that the operating system does the encryption and stores the keys.
Now, you can use software disk encryption within the operating system and EBS encryption at the same time.
This doesn't really make sense for most use cases, but it can be done.
EBS encryption is really efficient though.
You don't need to worry about keys.
It doesn't cost anything, and there's no performance loss for using it.
Now, that is everything I wanted to cover in this video, so thanks for watching.
Go ahead and complete the video, and when you're ready, I look forward to you joining me in the next.
-
Welcome back and we're going to use this demo lesson to get some experience of working with EBS and instance store volumes.
Now before we get started, this series of demo videos will be split into two main components.
The first component will be based around EBS and EBS snapshots and all of this will come under the free tier.
The second component will be based on instance store volumes and will be using larger instances which are not included within the free tier.
So I'm going to make you aware of when we move on to a part which could incur some costs and you can either do that within your own environment or watch me do it in the video.
If you do decide to do it in your own environment, just be aware that there will be a 13 cents per hour cost for the second component of this demo series and I'll make it very clear when we move into that component.
The second component is entirely optional but I just wanted to warn you of the potential cost in advance.
Now to get started with this demo, you're going to need to deploy some infrastructure.
To do that, make sure that you're logged in to the general account, so the management account of the organization and you've got the Northern Virginia region selected.
Now attached to this demo is a one click deployment link to deploy the infrastructure.
So go ahead and click on that link.
That's going to open this quick create stack screen and all you need to do is scroll down to the bottom, check this capabilities box and click on create stack.
Now you're going to need this to be in a create complete state before you continue with this demo.
So go ahead and pause the video, wait for that stack to move into the create complete status and then you can continue.
Okay, now that's finished and the stack is in a create complete state.
You're also going to be running some commands within EC2 instances as part of this demo.
Also attached to this lesson is a lesson commands document which contains all of those commands and you can use this to copy and paste which will avoid errors.
So go ahead and open that link in a separate browser window or separate browser tab.
It should look something like this and we're going to be using this throughout the lesson.
Now this cloud formation template has created a number of resources, but the three that we're concerned about are the three EC2 instances.
So instance one, instance two and instance three.
So the next thing to do is to move across to the EC2 console.
So click on the services drop down and then either locate EC2 under all services, find it in recently visited services or you can use the search box at the top type EC2 and then open that in a new tab.
Now the EC2 console is going through a number of changes so don't be alarmed if it looks slightly different or if you see any banners welcoming you to this new version.
Now if you click on instances running, you'll see a list of the three instances that we're going to be using within this demo lesson.
We have instance one - az a.
We have instance two - az a and then instance one - az b.
So these are three instances, two of which are in availability zone A and one which is in availability zone B.
Next I want you to scroll down and locate volumes under elastic block store and just click on volumes.
And what you'll see is three EBS volumes, each of which is eight GIB in size.
Now these are all currently in use.
You can see that in the state column and that's because all of these volumes are in use as the boot volumes for those three EC2 instances.
So on each of these volumes is the operating system running on those EC2 instances.
Now to give you some experience of working with EBS volumes, we're going to go ahead and create a volume.
So click on the create volume button.
The first thing you'll need to do when creating a volume is pick the type and there are a number of different types available.
We've got GP2 and GP3 which are the general purpose storage types.
We're going to use GP3 for this demo lesson.
You could also select one of the provisioned IOPS volumes.
So this is currently IO1 or IO2.
And with both of these volume types, you're able to define IOPS separately from the size of the volume.
So these are volume types that you can use for demanding storage scenarios where you need high-end performance or when you need especially high performance for smaller volume sizes.
Now IO1 was the first type of provisioned IOPS SSD introduced by AWS, and more recently they've introduced IO2, an enhanced version which provides even higher levels of performance.
In addition to that we do have the non-SSD volume types.
So SC1 which is cold HDD, ST1 which is throughput optimized HDD and then of course the original magnetic type which is now legacy and AWS don't recommend this for any production usage.
For this demo lesson we're going to go ahead and select GP3.
So select that.
Next you're able to pick a size in GIB for the volume.
We're going to select a volume size of 10 GIB.
Now EBS volumes are created within a specific availability zone so you have to select the availability zone when you're creating the volume.
At this point I want you to go ahead and select US-EAST-1A.
When creating a volume, you're also able to specify a snapshot as the basis for that volume.
So if you want to restore a snapshot into this volume you can select that here.
At this stage in the demo we're going to be creating a blank EBS volume so we're not going to select anything in this box.
We're going to be talking about encryption later in this section of the course.
You are able to specify encryption settings for the volume when you create it but at this point we're not going to encrypt this volume.
We do want to add a tag so that we can easily identify the volume from all of the others so click on add tag.
For the key we're going to use name.
For the value we're going to put EBS test volume.
So once you've entered both of those go ahead and click on create volume and that will begin the process of creating the volume.
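For reference, the same volume could be created from the AWS CLI with ec2 create-volume. This sketch just assembles the command rather than running it, since executing it needs credentials and would create a billable resource; the tag value is written without spaces purely to keep the example simple.

```shell
# Assemble a CLI equivalent of the console steps above; running it
# requires the AWS CLI and valid credentials.
cmd="aws ec2 create-volume --volume-type gp3 --size 10 --availability-zone us-east-1a --tag-specifications ResourceType=volume,Tags=[{Key=Name,Value=EBSTestVolume}]"
echo "$cmd"
```

Notice the command expresses the same three choices made in the console: the gp3 type, the 10 GiB size, and the specific availability zone, which EBS always requires because volumes are zonal resources.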
Just close down any dialogues and then just pay attention to the different states that this volume goes through.
It begins in a creating state.
This is where the storage is being provisioned and then made available by the EBS product.
If we click on refresh you'll see that it changes from creating to available and once it's in an available state this means that we can attach it to EC2 instances.
And that's what we're going to do so we're going to right click and select attach volume.
Now you're able to attach this volume to EC2 instances but crucially only those in the same availability zone.
EBS is an availability zone scoped service and so you can only attach EBS volumes to EC2 instances within that same availability zone.
So if we select the instance box you'll only see instances in that same availability zone.
Now at this point go ahead and select instance 1 in availability zone A.
Once you've selected it you'll see that the device field is populated and this is the device ID that the instance will see for this volume.
So this is how the volume is going to be exposed to the EC2 instance.
So if we want to interact with this instance inside the operating system this is the device that we'll use.
Now different operating systems might see this in slightly different ways.
So as this warning suggests, certain Linux kernels might rename /dev/sdf to /dev/xvdf.
So we've got to be aware that when you do attach a volume to an EC2 instance you need to get used to how that's seen inside the operating system.
How we can identify it and how we can configure it within the operating system for use.
And I'm going to demonstrate that in the next part of this demo lesson.
So at this point just go ahead and click on attach and this will attach this volume to the EC2 instance.
Once that's attached to the instance and you see the state change to in use then just scroll up on the left hand side and select instances.
We're going to go ahead and connect to instance 1 in availability zone A.
This is the instance that we just attached that EBS volume to so we want to interact with this instance and see how we can see the EBS volume.
So right click on this instance and select Connect, and then you could either connect with an SSH client or use Instance Connect.
To keep things simple we're going to connect from our browser, so select the EC2 Instance Connect option, make sure the username is set to ec2-user, and then click on Connect.
So now we connected to this EC2 instance and it's at this point that we'll start needing the commands that are listed inside the lesson commands document and again this is attached to this lesson.
So first we need to list all the block devices which are connected to this instance and we're going to use the LSBLK command.
Now if you're not comfortable with Linux don't worry just take this nice and slowly and understand at a high level all the commands that we're going to run.
So the first one is LSBLK and this is list block devices.
So if we run this we'll be able to see a list of all of the block devices connected to this EC2 instance.
You'll see the root device; this is the device used to boot the instance, it contains the instance operating system, and you'll see that it's 8 GiB in size.
Then there's the EBS volume that we just attached; you'll see its device ID, xvdf, and you'll see that it's 10 GiB in size.
Now what we need to do next is check whether there is a file system on this block device.
So this block device we've created it with EBS and then we've attached it to this instance.
Now we know that it's blank but it's always safe to check if there's any file system on a block device.
So to do that we run this command.
So we're going to check are there any file systems on this block device.
So press enter and if you see just data that indicates that there isn't any file system on this device and so we need to create one.
You can only mount file systems under Linux and so we need to create a file system on this raw block device this EBS volume.
So to do that we run this command.
So sudo again is just giving us admin permissions on this instance.
MKFS is going to make a file system.
We specify the file system type with -t and then xfs, which is a type of file system, and then we're telling it to create this file system on this raw block device, which is the EBS volume that we just attached.
So press enter and that will create the file system on this EBS volume.
We can confirm that by rerunning this previous command and this time instead of data it will tell us that there is now an XFS file system on this block device.
So now we can see the difference.
Initially it just told us that there was data, so raw data on this volume and now it's indicating that there is a file system, the file system that we just created.
Now the way that Linux works is we mount a file system to a mount point which is a directory.
So we're going to create a directory using this command.
mkdir makes a directory and we're going to call the directory /ebstest.
So this creates it at the top level of the file system.
The leading forward slash signifies root, which is the top level of the file system tree, and we're going to make a folder there called ebstest.
So go ahead and enter that command and press enter and that creates that folder and then what we can do is to mount the file system that we just created on this EBS volume into that folder.
And to do that we use this command, mount.
So mount takes a device ID, in this case /dev/xvdf.
So this is the raw block device containing the file system we just created, and it's going to mount it into this folder.
So type that command and press enter and now we have our EBS volume with our file system mounted into this folder.
And we can verify that by running df -k.
And this will show us all of the file systems on this instance and the bottom line here is the one that we've just created and mounted.
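As a sketch of what to look for in that df -k output, here's how the new mount can be picked out with awk. The numbers below are invented sample output, not real values from the demo.

```shell
# Invented sample in the style of `df -k` output.
df_output='Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1       8376300 1679828   6696472  21% /
/dev/xvdf       10475520  104800  10370720   1% /ebstest'

# Print the device mounted at /ebstest, if any (column 6 is the mount point).
mounted=$(printf '%s\n' "$df_output" | awk '$6 == "/ebstest" {print $1}')
echo "$mounted"
```

If the awk output is empty, the volume isn't mounted, which is exactly the state you'll see later after a reboot without an fstab entry.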
At this point I'm just going to clear the screen to make it easier to see and what we're going to do is to move into this folder.
So type cd, which is change directory, then a space, then /ebstest, and press Enter, and that will move you into that folder.
Once we're in that folder we're going to create a test file.
So we're going to use this command, sudo nano, which is a text editor, and we're going to call the file amazingtestfile.txt.
So type that command in and press enter and then go ahead and type a message.
It can be anything you just need to recognize it as your own message.
So I'm going to use cats are amazing and then some exclamation marks.
Then I'm going to press Ctrl+O and Enter to save that file, and then Ctrl+X to exit, and again clear the screen to make it easier to see.
And then I'm going to do an ls -la and press Enter just to list the contents of this folder.
So as you can see, we've now got this amazingtestfile.txt.
And if we cat the contents of this, so cat amazingtestfile.txt, you'll see the unique message that you just typed in.
So at this point we've created this file within the folder and remember the folder is now the mount point for the file system that we created on this EBS volume.
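Taken together, the steps so far can be sketched as a short script. The device name /dev/xvdf, the mount point, and the message are the ones from this demo; as a precaution the script falls back to a temporary folder when that device doesn't exist (for example if you run it anywhere other than this instance), so treat it as an illustrative sketch rather than an exact replay:

```shell
#!/usr/bin/env bash
set -euo pipefail

DEVICE=/dev/xvdf        # the EBS volume's block device from this demo
MOUNT_POINT=/ebstest    # the folder we mount the filesystem into

if [ -b "$DEVICE" ]; then
  # On the demo instance: mount the filesystem we created into the folder.
  sudo mkdir -p "$MOUNT_POINT"
  sudo mount "$DEVICE" "$MOUNT_POINT"
  df -k | grep "$MOUNT_POINT"   # verify the mount is listed
  echo 'Cats are amazing!!!' | sudo tee "$MOUNT_POINT/amazingtestfile.txt" >/dev/null
else
  # Anywhere else: fall back to a temp folder so the file steps still run.
  MOUNT_POINT=$(mktemp -d)
  echo 'Cats are amazing!!!' > "$MOUNT_POINT/amazingtestfile.txt"
fi

# Read the test file back, as we did with cat in the demo.
cat "$MOUNT_POINT/amazingtestfile.txt"
```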
So the next step that I want you to do is to reboot this EC2 instance.
To do that type sudo reboot and press enter.
Now this will disconnect you from this session.
So you can go ahead and close down this tab and go back to the EC2 console.
Just go ahead and click on instances.
Okay, so this is the end of part one of this lesson.
It was getting a little bit on the long side and so I wanted to add a break.
It's an opportunity just to take a rest or grab a coffee.
Part two will be continuing immediately from the end of part one.
So go ahead complete the video and when you're ready join me in part two.
Welcome back and in this demo lesson you're going to evolve the infrastructure which you've been using throughout this section of the course.
In this demo lesson you're going to add private internet access capability using NAT gateways.
So you're going to be applying a cloud formation template which creates this base infrastructure.
It's going to be the animals for life VPC with infrastructure in each of three availability zones.
So there's a database subnet, an application subnet and a web subnet in availability zone A, B and C.
Now to this point what you've done is configured public subnet internet access and you've done that using an internet gateway together with routes on these public subnets.
In this demo lesson you're going to add NAT gateways into each availability zone so A, B and C and this will allow this private EC2 instance to have access to the internet.
Now you're going to be deploying NAT gateways into each availability zone so that each availability zone has its own isolated private subnet access to the internet.
It means that if any of the availability zones fails then the others will continue operating, because the route tables which are attached to the private subnets point at the NAT gateway within that same availability zone.
So each availability zone A, B and C has its own corresponding NAT gateway which provides private internet access to all of the private subnets within that availability zone.
Now in order to implement this infrastructure you're going to be applying a one-click deployment and that's going to create everything that you see on screen now apart from these NAT gateways and the route table configurations.
So let's go ahead and move across to our AWS console and get started implementing this architecture.
Okay so now we're at the AWS console, and as always just make sure that you're logged in to the general AWS account as the IAM admin user and you'll need to have the Northern Virginia region selected.
Now at the end of the previous demo lesson you should have deleted all of the infrastructure that you'd created up until that point, so the animals for life VPC as well as the Bastion host and the associated networking.
So you should have a relatively clean AWS account.
So what we're going to do first is use a one-click deployment to create the infrastructure that we'll need within this demo lesson.
So attached to this demo lesson is a one-click deployment link so go ahead and open that link.
That's going to take you to a quick create stack screen.
Everything should be pre-populated and the stack name should be a4l; just scroll down to the bottom, check this capabilities box and then click on create stack.
Now this will start the creation process of this a4l stack and we will need this to be in a create complete state before we continue.
So go ahead and pause the video, wait for your stack to change into create complete, and then we're good to continue.
Okay, so now this stack's moved into a create complete state, we're good to continue.
So what we need to do before we start is make sure that all of our infrastructure has finished provisioning.
To do that just go ahead and click on the resources tab of this cloud formation stack and look for a4l internal test.
This is a private EC2 instance, so it doesn't have any public internet connectivity, and we're going to use it to test NAT gateway functionality.
So go ahead and click on this icon under physical ID and this is going to move you to the EC2 console and you'll be able to see this a4l-internal-test instance.
Now currently in my case it's showing as running but the status check is showing as initializing.
Now we'll need this instance to finish provisioning before we can continue with the demo.
What should happen is this status check should change from initializing to two out of two status checks and once you're at that point you should be able to right click and select connect and choose session manager and then have the option of connecting.
Now you'll see that I don't because this instance hasn't finished its provisioning process.
So what I want you to do is to go ahead and pause this video wait for your status checks to change to two out of two checks and then just go ahead and try to connect to this instance using session manager.
Only resume the video once you've been able to click on connect under the session manager tab and don't worry if this takes a few more minutes after the instance finishes provisioning before you can connect to session manager.
So go ahead and pause the video and when you can connect to the instance you're good to continue.
Okay so in my case it took about five minutes for this to change to two out of two checks passed and then another five minutes before I could connect to this EC2 instance.
So I can right click on the instance and choose connect.
I'll have the option now of picking session manager and then I can click on connect and this will connect me in to this private EC2 instance.
Now the reason why you're able to connect to this private instance is because we're using Session Manager, and I'll explain exactly how this product works elsewhere in the course.
Essentially it allows us to connect into an EC2 instance with no public internet connectivity, and it's using VPC interface endpoints to do that, which I'll also be explaining elsewhere in the course.
But what you should find when you're connected to this instance is that if you try to ping any internet IP address, so let's go ahead and type ping 1.1.1.1 and press enter, you'll note that we don't have any public internet connectivity.
And that's because this instance doesn't have a public IP version 4 address, and it's not in a subnet with a route table which points at the internet gateway.
This EC2 instance has been deployed into the application A subnet, which is a private subnet, and it also doesn't have a public IP version 4 address.
So at this point what we need to do is go ahead and deploy our NAT gateways, and these NAT gateways are what will provide this private EC2 instance with connectivity to the public IP version 4 internet, so let's go ahead and do that.
Now to do that we need to be back at the main AWS console click in the services search box at the top type VPC and then right click and open that in a new tab.
Once you do that, go ahead and move to that tab; once you're there, click on NAT gateways and then create NAT gateway.
Okay so once you're here you'll need to specify a few things you'll need to give the NAT gateway a name you'll need to pick a public subnet for the NAT gateway to go into and then you'll need to give the NAT gateway an elastic IP address which is an IP address which doesn't change.
So first we'll set the name of the NAT gateway and we'll choose a4l-vpc1-natgw-a, with a4l for animals for life and -a because this is going into availability zone A.
Next we'll need to pick the public subnet that the NAT gateway will be going into, so click on the subnet drop down and then select the web A subnet, which is the public subnet in availability zone A, so sn-web-a.
Now we need to give this NAT gateway an elastic IP it doesn't currently have one so we need to click on allocate elastic IP which gives it an allocation.
Don't worry about the connectivity type we'll be covering that elsewhere in the course just scroll down to the bottom and create the NAT gateway.
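For reference, the console steps map onto a handful of AWS CLI calls. This is a hedged sketch, not part of the demo: the subnet, allocation, NAT gateway, and route table IDs are all placeholders, and by default the script only prints the commands it would run (set DRY_RUN=0 and substitute real IDs to actually execute them, which needs valid credentials):

```shell
#!/usr/bin/env bash
set -euo pipefail

DRY_RUN=${DRY_RUN:-1}   # default: just print what would run
run() { if [ "$DRY_RUN" = 1 ]; then echo "aws $*"; else aws "$@"; fi; }

# Placeholder IDs - substitute the real ones from your VPC.
WEB_A_SUBNET=subnet-11111111       # sn-web-a
EIP_ALLOCATION=eipalloc-22222222   # from the allocate-address output
PRIVATE_A_RT=rtb-33333333          # the AZ A private route table
APP_A_SUBNET=subnet-44444444       # a private subnet in AZ A

# Allocate an elastic IP for the NAT gateway.
run ec2 allocate-address --domain vpc

# Create the NAT gateway in the public web A subnet.
run ec2 create-nat-gateway \
  --subnet-id "$WEB_A_SUBNET" \
  --allocation-id "$EIP_ALLOCATION" \
  --tag-specifications 'ResourceType=natgateway,Tags=[{Key=Name,Value=a4l-vpc1-natgw-a}]'

# Add the IPv4 default route pointing at the NAT gateway.
run ec2 create-route \
  --route-table-id "$PRIVATE_A_RT" \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-55555555

# Associate the route table with a private subnet in the same AZ.
run ec2 associate-route-table \
  --route-table-id "$PRIVATE_A_RT" \
  --subnet-id "$APP_A_SUBNET"
```

The same three calls are repeated per availability zone to build the region-resilient design described above.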
Now this process will take some time and so we need to go ahead and create the two other NAT gateways.
So click on NAT gateways at the top and then we're going to create a second NAT gateway.
So go ahead and click on create NAT gateway again.
This time we'll call the NAT gateway a4l-vpc1-natgw-b, and this time we'll pick the web B subnet, so sn-web-b, allocate an elastic IP again, and click on create NAT gateway.
Then we'll follow the same process a third time.
So click create NAT gateway, use the same naming scheme but with -c, pick the web C subnet from the list, allocate an elastic IP, and then scroll down and click on create NAT gateway.
At this point we've got three NAT gateways that are being created, and they're all in a pending state.
If we go to elastic IPs we can see the three elastic IPs which have been allocated to the NAT gateways, and we can scroll to the right or left and see details on these IPs, and if we wanted we could release these IPs back to the account once we'd finished with them.
Now at this point you need to go ahead and pause the video and resume it once all three of those NAT gateways have moved away from the pending state.
We need them to be in an available state, ready to go, before we can continue with this demo.
So go ahead and pause, and resume once all three have changed to an available state.
Okay, so all of these are now in an available state, which means they're good to go and they're providing service.
Now if you scroll to the right in this list you're able to see additional information about these NAT gateways, so you can see the elastic and private IP address, the VPC, and then the subnet that each of these NAT gateways is located in.
What we need to do now is configure the routing so that the private instances can communicate via the NAT gateways.
So right click on route tables and open it in a new tab, and we need to create a new route table for each of the availability zones.
So go ahead and click on create route table.
First we need to pick the VPC for this route table, so click on the VPC drop down and then select the animals for life VPC, so a4l-vpc1.
Once selected, go ahead and name the route table.
We're going to keep the naming scheme consistent, so a4l-vpc1-rt-privateA, with rt for route table, so enter that and click on create.
Then close that dialogue down and create another route table.
This time we'll use the same naming scheme, but of course this time it will be rt-privateB; select the animals for life VPC and click on create.
Close that down and then finally click on create route table again, this time a4l-vpc1-rt-privateC; again click on the VPC drop down, select the animals for life VPC, and then click on create.
So that's going to leave us with three route tables, one for each availability zone.
What we need to do now is create a default route within each of these route tables, and that route is going to point at the NAT gateway in the same availability zone.
So select the route table rt-privateA and then click on the routes tab.
Once you've selected the routes tab, click on edit routes and we're going to add a new route.
It's going to be the IP version 4 default route of 0.0.0.0/0.
Then click on target, pick NAT gateway, and we're going to pick the NAT gateway in availability zone A, and because we named them it makes it easy to select the relevant one from this list.
So go ahead and pick a4l-vpc1-natgw-a; because this is the route table in availability zone A, we need to pick the matching NAT gateway.
So save that and close, and now we'll do the same process for the route table in availability zone B.
Make sure the routes tab is selected, click on edit routes, click on add route, again 0.0.0.0/0, and then for target pick NAT gateway and then pick the NAT gateway that's in availability zone B, so natgw-b.
Once you've done that, save the route table, and then next select the route table in availability zone C.
So select rt-privateC, make sure the routes tab is selected and click on edit routes.
Again we'll be adding a route, the IP version 4 default route, so 0.0.0.0/0; select a target, go to NAT gateway, and pick the NAT gateway in availability zone C, so natgw-c.
Once you've done that, save the route table.
Now our private EC2 instance should be able to ping 1.1.1.1 because we have the routing infrastructure in place, so let's move back to our private instance, and we can see that it's not actually working.
The reason for this is that although we have created these routes, we haven't actually associated these route tables with any of the subnets.
Subnets in a VPC which don't have an explicit route table association are associated with the main route table.
We need to explicitly associate each of these route tables with the subnets inside that same AZ.
So let's go ahead and pick rt-privateA, and we'll go through in order.
Select it, click on the subnet associations tab, then edit subnet associations, and then you need to pick all of the private subnets in AZ A.
So that's the reserved subnet, reserved-A, the app subnet, app-A, and the DB subnet, db-A; all of these are the private subnets in availability zone A.
Notice how all the public subnets are associated with the custom route table you created earlier, but the ones we're setting up now are still associated with the main route table.
We're going to resolve that now by associating this route table with those subnets, so click on save, and this will associate all of the private subnets in AZ A with the AZ A route table.
Now we're going to do the same process for AZ B and AZ C, and we'll start with AZ B.
So select the rt-privateB route table, click on subnet associations, edit subnet associations, select application B, database B and reserved B, and then scroll down and save the associations.
Then select the rt-privateC route table, click on subnet associations, edit subnet associations, select reserved C, database C and application C, and then scroll down and save those associations.
Now that we've associated these route tables with the subnets, and now that we've added those default routes, if we go back to Session Manager, where we still have the connection open to the private EC2 instance, we should see that the ping has started to work.
And that's because we now have a NAT gateway providing service to each of the private subnets in all of the three availability zones.
Okay, so that's everything you needed to cover in this demo lesson.
Now it's time to clean up the account and return it to the same state as it was at the start of this demo lesson.
From this point on within the course you're going to be using automation, and so we can remove all the configuration that we've done inside this demo lesson.
The first thing we need to do is to reverse the route table changes that we've made.
So go ahead and select the rt-privateA route table, go to subnet associations, edit the subnet associations, and just uncheck all of these subnets; this will return them to being associated with the main route table.
So scroll down and click on save.
Do the same for rt-privateB, so deselect all of these associations and click on save, and then the same for rt-privateC, so select it, go to subnet associations, edit them, remove all of these subnets, and click on save.
Next select all of these private route tables, which are the ones that we created in this lesson, so select them all, click on the actions drop down, then delete route table, and confirm by clicking delete route tables.
Go to NAT gateways on the left, and we need to select each of the NAT gateways in turn.
So select A, then click on actions and delete NAT gateway, type delete, and click delete.
Then select B and do the same process: actions, delete NAT gateway, type delete, click delete.
And finally the same for C, so select the C NAT gateway, click on actions and delete NAT gateway, type delete to confirm, and click on delete.
Now we're going to need all of these to be in a fully deleted state before we can continue, so hit refresh and make sure that all three NAT gateways are deleted.
If yours aren't deleted, if they're still listed in a deleting state, then go ahead and pause the video and resume once all of these have changed to deleted.
At this point all of the NAT gateways have been deleted, so you can go ahead and click on elastic IPs, and we need to release each of these IPs.
So select one of them, click on actions, then release elastic IP addresses, and click release, and do the same process for the other two.
Once that's done, move back to the CloudFormation console, select the stack which was created by the one-click deployment at the start of the lesson, click on delete, and then confirm that deletion; that will remove the CloudFormation stack and any resources created as part of this demo.
And at that point, once that finishes deleting, the account has been returned to the same state as it was at the start of this demo lesson.
So I hope this demo lesson has been useful.
Just to reiterate what you've done: you've created three NAT gateways for a region-resilient design, you've created three route tables, one in each availability zone, added a default IP version 4 route pointing at the corresponding NAT gateway, and associated each of those route tables with the private subnets in those availability zones.
So you've implemented a regionally resilient NAT gateway architecture, and that's a great job, because that's a pretty complex demo.
It's functionality that will be really useful if you're using AWS in the real world, or if you have to answer any exam questions on NAT gateways.
With that being said, at this point you have cleaned up the account and deleted all the resources, so go ahead and complete this video, and when you're ready, I'll see you in the next.
Welcome back and in this lesson I'm going to be covering a topic which is probably slightly beyond what you need for the Solutions Architect Associate exam but additional understanding of CloudFormation is never a bad thing and it will help you answer any automation style questions in the exam and so I'm going to talk about it anyway.
CloudFormationInit, or cfn-init, is a way that you can pass complex bootstrapping instructions into an EC2 instance.
It's much more complex than the simple user data example that you saw in the previous lesson.
Now we do have a lot to cover so let's jump in and step through the theory before we move to another demo lesson.
In the previous lesson I showed you how CloudFormation handled user data.
It works in a similar way to the console UI where you pass in base 64 encoded data into the instance operating system and it runs as a shell script.
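You can see the Base64 step in isolation using the standard base64 tool; the script content here is purely illustrative:

```shell
#!/usr/bin/env bash
set -euo pipefail

# A trivial user data script (the content is illustrative).
USER_DATA='#!/bin/bash
echo "bootstrapped" > /tmp/bootstrap-marker'

# Encode it, which is what the console or CloudFormation does before
# passing it to the EC2 service...
ENCODED=$(printf '%s' "$USER_DATA" | base64)
echo "$ENCODED"

# ...and decode it back, which is what happens before the operating
# system executes it inside the instance.
printf '%s' "$ENCODED" | base64 --decode
```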
Now there's another way to configure EC2 instances, a way which is much more powerful.
It's called cfn-init and it's officially referred to by AWS as a helper script which is installed on EC2 operating systems such as Amazon Linux 2.
Now cfn-init is actually much more than a simple helper script.
It's much more like a simple configuration management system.
User data is what's known as procedural.
It's a script, it's run by the operating system line by line.
Now cfn-init can also be procedural, it can be used to run commands just like user data, but it can also be desired state, where you direct it how you want something to be.
So you define what the desired state of an EC2 instance is and it will perform whatever is required to move the instance into that desired state.
So for example you can tell cfn-init that you want a certain version of the Apache web server to be installed and if that's already the case if Apache is already installed and it's the same version then nothing is done.
However if Apache is not installed then cfn-init will install it, or it will update any older versions to that version.
cfn-init can do lots of pretty powerful things.
It can make sure packages are installed even with an awareness of versions.
It can manipulate operating system groups and users.
It can download sources and extract them onto the local instance even using authentication.
It can create files with certain contents permissions and ownerships.
It can run commands and test that certain conditions are true after the commands have run and it can even control services on an instance.
So it can ensure that a particular service is started, or enabled to start on boot of the OS.
cfn-init is executed like any other command, by being passed into the instance as part of the user data, and it retrieves its directives from the CloudFormation stack.
You define this data in a special part of each logical resource inside CloudFormation templates called AWS::CloudFormation::Init, and don't worry, you'll get a chance to see this very soon in the demo.
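To make that structure concrete, here's a hedged sketch of how this looks in a template. The resource name, the AMI ID, and the choice of Apache as the package are illustrative, not the exact template from the upcoming demo:

```yaml
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        config:
          packages:
            yum:
              httpd: []          # desired state: Apache installed
          services:
            sysvinit:
              httpd:
                enabled: true    # enabled to start on boot
                ensureRunning: true
    Properties:
      ImageId: ami-12345678      # placeholder AMI ID
      InstanceType: t2.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          # cfn-init pulls the desired state from the stack's metadata;
          # the stack ID and region are substituted in by CloudFormation.
          /opt/aws/bin/cfn-init -v --stack ${AWS::StackId} \
            --resource EC2Instance --region ${AWS::Region}
```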
So the instance runs cfn-init it pulls this desired state data from the cloud formation stack that you put in there via the cloud formation template and then it implements the desired state that's specified by you in that data.
So let's quickly look at this architecture visually.
The way that cfn-init works is probably going to be easier to understand if we do take a look at it visually.
Once you see the individual components it's a lot simpler than I've made it sound on the previous screen.
It all starts off with a cloud formation template and this one creates an EC2 instance and you'll see this in action yourself very soon.
Now the template has a logical resource inside it called EC2 instance which is to create an EC2 instance.
It has this new special component, Metadata, containing AWS::CloudFormation::Init, and this is where the cfn-init configuration is stored.
The cfn init command itself is executed from the user data that's passed into that instance.
So the cloud formation template is used to create a stack which itself creates an EC2 instance and the cfn-init line in the user data at the bottom here is executed by the instance.
This should make sense now.
Anything in the user data is executed when the instance is first launched.
Now if you look at the command for cfn-init you'll notice that it specifies a few variables specifically a stack ID and a region.
Remember this instance is being created using CloudFormation and so these variables are actually replaced with the actual values before this ends up inside the EC2 instance.
So the region will be replaced with the actual region that the stack is created in and the stack ID is the actual stack ID that's being created by this template and these are all passed in to cfn-init.
This allows cfn-init to communicate with the cloud formation service and receive its configuration and it can do that because of those variables passed into the user data by cloud formation.
Once cfn-init has this configuration then because it's a desired state system it can implement the desired state that's specified inside the cloud formation by you and another amazing thing about this process or about cfn-init and its associated tools is that it can also work with stack updates.
Remember that the user data works once while cfn-init can be configured to watch for updates to the metadata on an object in a template and if that metadata changes then cfn-init can be executed again and it will update the configuration of that instance to the desired state specified inside the template.
It's really powerful.
Now this is not something that user data can do.
User data only works the once when you launch the instance.
Now in the demo lesson which immediately follows this one you're going to experience just how cool this cfn-init process is.
I've updated the WordPress cloud formation template that you used in the previous demo, which included some user data, and I've supplied a new version which uses this CloudFormation::Init process, or cfn-init, so you'll get to see how it's different and exactly how that looks when you apply it into your AWS account.
Now there's one more really important feature of cloud formation which I want to cover as you start performing more advanced bootstrapping it will start to matter more and more.
This feature is called cloud formation creation policies and cloud formation signals so let's look at that next.
On the previous example there was another line passed into the user data the bottom line cfn signal.
Without this the resource creation process inside cloud formation is actually pretty dumb.
You have a template which is used to create a stack which creates an EC2 instance.
Let's say you pass in some user data this runs and then the instance is marked as complete.
The problem though is we don't actually know if the resource actually completed successfully.
Cloud formation has created the resource and passed in the user data but I've already said that cloud formation doesn't understand the user data it just passes it in.
So if the user data has a problem if the instance bootstrapping process fails and from a customer perspective the instance doesn't really work cloud formation won't know.
The instance is going to be marked as complete regardless of the state of the configuration inside that instance.
Now this is fine when we're creating resources like a blank EC2 instance when there is no post launch configuration.
If EC2 reports to cloud formation that it's successfully provisioned an instance then we can rely on that.
If we're creating an S3 bucket and S3 reports to cloud formation that it's worked okay then it's worked okay.
But what if there's extra configuration happening inside the resource such as this bootstrapping process.
We need a better way a way that the resource itself the EC2 instance in this case can inform cloud formation if it's being configured correctly or not.
This is how creation policies work and this is a creation policy.
A creation policy is something which is added to a logical resource inside a cloud formation template.
You create it and you supply a timeout value.
This one has 15 minutes and this is used to create a stack which creates an instance.
So far the process is the same but at this point cloud formation waits.
It doesn't move the instance into a create complete status when EC2 signals that it's been created successfully.
Instead it waits for a signal a signal from the resource itself.
So even though EC2 has launched the instance even though its status checks pass and it's told cloud formation that the instance is provisioned and ready to go.
Cloud formation waits.
It waits for a signal from the resource itself.
The CFN signal command at the bottom is given the stack ID, the resource name and the region and these are passed in by the cloud formation stack when the resource is created.
So the CFN signal command understands how to communicate with the specific cloud formation stack that it's running inside.
The -e $? part of that command represents the exit code of the previous command.
So in this case the CFN init command is going to perform this desired state configuration and if the output of that command is an OK state then the OK is sent as a signal by CFN signal.
If CFN init reports an error code then this is sent using CFN signal to the cloud formation stack.
So CFN signal is reporting to cloud formation the success or not of the CFN init bootstrapping and this is reported to the cloud formation stack.
If it's a success code so if CFN init worked as intended then the resource is moved into a create complete state.
If CFN signal reports an error the resource in cloud formation shows an error.
If nothing happens for 15 minutes, the timeout value, then cloud formation assumes it has failed and doesn't let the stack create successfully.
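Putting the creation policy and the signal together, a sketch of the relevant template parts might look like this; the AMI ID and resource name are placeholders, not the exact template from the demos:

```yaml
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    CreationPolicy:
      ResourceSignal:
        Timeout: PT15M           # wait up to 15 minutes for a signal
    Properties:
      ImageId: ami-12345678      # placeholder AMI ID
      InstanceType: t2.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          /opt/aws/bin/cfn-init -v --stack ${AWS::StackId} \
            --resource EC2Instance --region ${AWS::Region}
          # Signal cfn-init's exit code ($?) back to the stack, which
          # moves the resource to CREATE_COMPLETE or errors it.
          /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackId} \
            --resource EC2Instance --region ${AWS::Region}
```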
The resource will generate an error.
Now you'll see creation policies feature in more complex cloud formation templates either within EC2 instance resources or within auto scaling groups that we'll be covering later in the course.
Now you won't need to know the technical implementation details of this for the Solutions Architect Associate exam but I do expect the knowledge of this architecture will help you in any automation related questions.
And now it's time for a quick demonstration.
I just want you to have some experience in using a template which uses CFN init and also one which uses the creation policy.
So I hope this theory has been useful to you and when you're ready for the demo go ahead and complete this video and you can join me in the next.
Welcome back and in the first real lesson of the advanced EC2 section of the course, I want to introduce EC2 Bootstrapping.
Now this is one of the most powerful features of EC2 available for us to use as solutions architects because it's what allows us to begin adding automation to the solutions that we design.
Bootstrapping is a process where scripts or other bits of configuration can be run when an instance is first launched, meaning that an instance can be brought into service in a certain pre-configured state.
So unlike just launching an instance with an AMI and having it be in its default state, we can bootstrap in a certain set of configurations or software installs.
Now let's look at how this works from a theory perspective and then you'll get a chance to implement this yourself in the following demo lesson.
Now bootstrapping is a process which exists outside EC2.
It's a general term.
In systems automation, bootstrapping is a process which allows a system to self-configure or perform some self-configuration steps.
In EC2, it allows for build automation, some steps which can occur when you launch an instance to bring that instance into a configured state.
Rather than relying on a default AMI or an AMI with a pre-baked configuration, it allows you to direct an EC2 instance to do something when launched.
So perform some software installations and then some post-installation configuration.
With EC2, bootstrapping is enabled using EC2 user data and this is injected into the instance in the same way that metadata is.
In fact, it's accessed using the metadata IP address.
So 169.254.169.254, also known as 169.254 repeating.
But instead of /latest/meta-data, it's /latest/user-data.
The user data is a piece of data, a piece of information that you can pass into an EC2 instance.
Anything that you pass in is executed by the instance's operating system.
And here's the important thing to remember, it's executed only once at launch time.
If you update the user data and restart an instance, it's not executed again.
So it's only the once.
User data applies only to the first initial launch of the instance.
It's for launch time configuration only.
Now another important aspect is that EC2 as a service doesn't validate this user data, it doesn't interpret it in any way.
You can tell EC2 to pass in some random data and it will.
You can tell EC2 to pass in commands which will delete all of the data on the boot volume and the instance will do so.
EC2 doesn't interpret the data, it just passes the data into the instance via user data and there's a process on the operating system which runs this as the root user.
So in summary, the instance needs to understand what you pass in because it's just going to run it.
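As an illustration, a minimal user data script might look like the following. The marker file path is hypothetical, and the package install lines are left commented out so the sketch stays safe to run anywhere:

```shell
#!/bin/bash
# User data runs once, as root, at first launch.
# EC2 doesn't validate or interpret any of this; the OS just executes it.

MARKER=/tmp/bootstrap-complete   # hypothetical marker path

# Typical bootstrapping work would go here, for example:
# yum -y install httpd
# systemctl enable --now httpd

# Leave evidence that the bootstrap ran, then show it.
echo "bootstrapped at $(date)" > "$MARKER"
cat "$MARKER"
```

If the script errors partway through, the instance still launches and passes its status checks; you'd just be left with a bad configuration, exactly as described above.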
Now the bootstrapping architecture is pretty simple to understand: an AMI is used to launch an EC2 instance in the usual way, and this creates an EBS volume which is attached to the EC2 instance, based of course on the block device mapping inside the AMI.
This part we understand already.
Where it starts to differ is that now the EC2 service provides some user data through to the EC2 instance and there's software within the operating system running on EC2 instances which is designed to look at the metadata IP for any user data and if it sees any user data then it executes this on launch of that instance.
Now this user data is treated just like any other script that the operating system runs.
It needs to be valid and at the end of running the script the EC2 instance will either be in a running state and ready for service meaning that the instance has finished its startup process, the user data ran and it was successful and the instance is in a functional and running state.
Or the worst case is that the user data errors in some way so the instance would still be in a running state because the user data is separate from EC2, EC2 just delivers it into the instance.
The instance would still pass its status checks and assuming you didn't run anything which deleted mass amounts of OS data you could probably still connect to it but the instance would likely not be configured as you want.
It would be a bad configuration.
So that's critical to understand the user data is just passed in in an opaque way to the operating system.
It's up to the operating system to execute it and if executed correctly the instance will be ready for service.
If there's a problem with the user data you will have a bad config.
This is one of the key elements of user data to understand.
It's one of the powerful features but also one of the risky ones.
You pass the instance user data as a block of data.
It runs successfully or it doesn't.
From EC2's perspective it's simply opaque data.
It doesn't know or care what happens to it.
Now user data is also not secure.
Anyone who can access the instance operating system can access the user data, so don't use it for passing in any long-term credentials, at least not ideally.
Now in the demo we'll be doing just that.
We'll be doing bad practice by passing into the instance using the user data some long term credentials but this is intentional.
It's part of your learning process.
As we move through the course we'll evolve the design and implementations to use more AWS services and some of these include better ways to handle secrets inside EC2.
So I need to show you the bad practice before I can compare it to the good.
Now user data is limited to 16 kilobytes in size.
For anything more complex than that you would need to pass in a script which would download that larger data.
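One common pattern for staying under the 16 KB limit is to pass in only a tiny stub which downloads and runs a larger script; the S3 bucket and script names here are hypothetical placeholders:

```shell
# A stub user data script: fetch a larger bootstrap script and run it.
# The bucket and key names are placeholders - substitute your own.
cat > stub-user-data.sh <<'EOF'
#!/bin/bash -xe
aws s3 cp s3://my-bootstrap-bucket/full-bootstrap.sh /tmp/full-bootstrap.sh
chmod +x /tmp/full-bootstrap.sh
/tmp/full-bootstrap.sh
EOF
wc -c < stub-user-data.sh   # well under the 16384 byte limit
```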
User data can be modified: if you shut down the instance, change the user data and start it up again, then the new data is available inside the instance's user data.
But the contents are only executed once when you initially launch that instance.
So after the launch stage user data is only really useful for passing data in and there are better ways of doing that.
So keep in mind for the exam: user data is generally used once, for the post-launch configuration of an instance.
It's only executed the one initial time.
Now one of the question types that you'll often face in the exam relates to how quickly you can bring an instance into service.
There's actually a metric for this: boot-time-to-service-time.
How quickly after you launch an instance is it ready for service, ready to accept connections from your customers?
And this includes the time that AWS require to provision the EC2 instance and the time taken for any software updates, installations or configurations to take place within the operating system.
For an AWS provided AMI that time can be measured in minutes; from launch time to service time it's generally only minutes.
But what if you need to do some extra configuration, maybe install an application?
Remember when you manually installed WordPress after launching an instance? This is known as post-launch time.
The time required after launch for you to perform manual configuration or automatic configuration before the instance is ready for service.
If you do it manually this can be a few minutes or even as long as a few hours for things which are significantly more complex.
Now you can shorten this post launch time in a few ways.
The topic of this very lesson is bootstrapping and bootstrapping as a process automates installations after the launch of an instance and this reduces the amount of time taken to perform these steps.
And you'll see that demoed in the next lesson.
Now alternatively you can also do the work in advance by AMI baking.
With this method you're front loading the work doing it in advance and creating an AMI with all of that work baked in.
Now this removes the post launch time but it means you can't be as flexible with the configuration because it has to be baked into the AMI.
Now the optimal way is to combine both of these processes so AMI baking and bootstrapping.
You'd use AMI baking for any part of the process which is time intensive.
So if you have an application installation process which is 90% installation and 10% configuration, you can AMI bake in the 90% part and then bootstrap the final configuration.
That way you reduce the post launch time and thus the boot time to service time but you also get to use bootstrapping which gives you much more configurability.
And I'll be demonstrating this architecture later in the course when I cover scaling and high availability but I wanted to introduce these concepts now so you can keep mulling them over and understand them when I mention them.
But now it's time to finish up this lesson and this has been the theory component of EC2 bootstrapping.
In the next lesson which is a demo you're going to have the chance to use the EC2 user data feature.
Remember earlier in the course where we built an AMI together we installed WordPress to the point when it was ready to install and we massively improved the login banner of the EC2 instance to be something more animal related with Cowsay.
In the next demo lesson you're going to implement the same thing but you're going to be using user data.
You'll see how much quicker this process is compared to when you had to manually launch the instance and run each command one by one.
It's going to be a good valuable demo, I can't wait to get started so go ahead, finish this video and when you're ready you can join me for some practical time.
learn.cantrill.io
Welcome back and in this video, I want to talk at a very basic level about the Elastic Kubernetes Service known as EKS.
Now this is AWS's implementation of Kubernetes as a service.
If you haven't already done so, please make sure that you've watched my Kubernetes 101 video because I'll be assuming that level of knowledge so I can focus more in this video about the EKS specific implementation.
Now this video is going to stay at a very high level and if required for the topic that you're studying, there are going to be additional deep dive videos and/or demos on any of the relevant subject areas.
Now let's just jump in and get started straight away.
So EKS is an AWS managed implementation of Kubernetes.
That's to say, AWS have taken the Kubernetes system and added it as a service within AWS.
It's the same Kubernetes that you'll see anywhere else just extended to work really well within AWS.
And that's the key point here.
Kubernetes is cloud agnostic.
So if you need containers, but don't want to be locked into a specific vendor, or if you already have containers implemented, maybe using Kubernetes, then that's a reason to choose EKS.
Now EKS can be run in different ways.
It can run on AWS itself.
It can run on AWS Outposts, which conceptually is like running a tiny version of AWS on-premises.
It can run using EKS anywhere, which basically allows you to create EKS clusters on-premises or anywhere else.
And AWS even release the EKS product as open source via EKS Distro.
Generally though, and certainly for this video, you can assume that I mean the normal AWS deployment mode of EKS.
So running EKS within AWS as a product.
So the Kubernetes control plane is managed by AWS and scales based on load and also runs across multiple availability zones.
And the product integrates with other AWS services in the way that you would expect an AWS product to do so.
So it can use the Elastic Container Registry or ECR.
It uses Elastic Load Balancers wherever Kubernetes needs load balancer functionality.
IAM is integrated for security and it also uses VPCs for networking.
An EKS cluster means both the EKS control plane, so that's the bit that's managed by AWS, and the EKS nodes.
And I'll talk more about those in a second.
etcd, remember, is the key value store which Kubernetes uses.
This is also managed by AWS and distributed across multiple availability zones.
Now in terms of nodes, you have a few different ways that these can be handled.
You can do self-managed nodes running in a group.
So these are EC2 instances which you manage and you're billed for based on normal EC2 pricing.
Then we have managed node groups which are still EC2, but this is where the product handles the provisioning and lifecycle management.
Finally, you can run pods on Fargate.
With Fargate, you don't have to worry about provisioning, configuring, or scaling groups of instances.
You also don't need to choose the instance type or decide when to scale or optimize cluster packing.
Instead, you define Fargate profiles which mean that pods can start on Fargate.
And in general, this is similar to ECS Fargate which I've already covered elsewhere.
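As an illustration of how node groups and Fargate profiles fit together, here's an eksctl-style cluster definition. The names and region are made up and the exact schema should be checked against the eksctl documentation before use:

```shell
# Write an eksctl ClusterConfig that mixes a managed node group with a Fargate profile.
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster        # hypothetical name
  region: us-east-1
managedNodeGroups:
  - name: general-workers   # EC2-backed, lifecycle managed by the product
    instanceType: t3.medium
    desiredCapacity: 2
fargateProfiles:
  - name: fp-default        # pods matching these selectors start on Fargate
    selectors:
      - namespace: default
EOF
# You would then run: eksctl create cluster -f cluster.yaml
echo "cluster.yaml written"
```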
Now one super important thing to keep in mind, deciding between self-managed, managed node groups or Fargate is based on your requirements.
So if you need Windows pods, GPU capability, Inferentia, Bottlerocket, Outposts, or Local Zones, then you need to check the node type that you're going to use and make sure it's capable of each of these features.
I've included a link attached to this lesson with an up-to-date list of capabilities, but please be really careful on this one because I've seen it negatively impact projects.
Now lastly, remember from the Kubernetes 101 video where I mentioned storage by default is ephemeral.
Well, for persistent storage, EKS can use EBS, EFS, and FSX as storage providers.
And these can be used to provide persistent storage when required for the product.
Now that's everything about the key elements of the EKS product.
Let's quickly take a look visually at how a simple EKS architecture might look.
Conceptually, when you think of an EKS deployment, you're going to have two VPCs.
The first is an AWS managed VPC, and it's here where the EKS control plane will run from across multiple availability zones.
The second VPC is a custom managed VPC, in this case, the Animals for Life VPC.
Now, if you're going to be using EC2 worker nodes, then these will be deployed into the customer VPC.
Now, normally the control plane will communicate with these worker nodes via elastic network interfaces which are injected into the customer VPC.
So the kubelet service running on the worker nodes connects to the control plane, either using these ENIs which are injected into the VPC, or using a public control plane endpoint.
Any administration via the control plane can also be done using this public endpoint.
And any consumption of the EKS services is via ingress configurations which start from the customer VPC.
Now, at a high level, that's everything that I wanted to cover about the EKS product.
Once again, if you're studying a course which needs any further detail, there will be additional theory and demo lessons.
But at this point, that's everything I want you to do in this video, so go ahead and complete the video.
And when you're ready, I'll look forward to you joining me in the next.
learn.cantrill.io
Welcome back and in this fundamentals video I want to briefly talk about Kubernetes which is an open source container orchestration system.
You use it to automate the deployment, scaling and management of containerized applications.
At a super high level, Kubernetes lets you run containers in a reliable and scalable way, making efficient use of resources and lets you expose your containerized applications to the outside world or your business.
It's like Docker, only with robots to automate it and super intelligence for all of the thinking.
Now Kubernetes is a cloud agnostic product so you can use it on-premises and within many public cloud platforms.
Now I want to keep this video to a super high level architectural overview but that's still a lot to cover.
So let's jump in and get started.
Let's quickly step through the architecture of a Kubernetes cluster.
A cluster in Kubernetes is a highly available cluster of compute resources and these are organized to work as one unit.
The cluster starts with the cluster control plane which is the part which manages the cluster.
It performs scheduling, application management, scaling and deployment and much more.
Compute within a Kubernetes cluster is provided via nodes and these are virtual or physical servers which function as a worker within the cluster.
These are the things which actually run your containerized applications.
Running on each of the nodes is software and at minimum this is containerd or another container runtime, which is the software used to handle your container operations, and next we have the kubelet which is an agent to interact with the cluster control plane.
The kubelet running on each of the nodes communicates with the cluster control plane using the Kubernetes API.
Now this is the top level functionality of a Kubernetes cluster.
The control plane orchestrates containerized applications which run on nodes.
But now let's explore the architecture of control planes and nodes in a little bit more detail.
On this diagram I've zoomed in a little.
We have the control plane at the top and a single cluster node at the bottom, complete with the minimum container runtime and kubelet software running for control plane communications.
Now I want to step through the main components which might run within the control plane and on the cluster nodes.
Keep in mind this is a fundamental level video.
It's not meant to be exhaustive.
Kubernetes is a complex topic so I'm just covering the parts that you need to understand to get started.
The cluster will also likely have many more nodes.
It's rare that you only have one node unless this is a testing environment.
First I want to talk about pods and pods are the smallest unit of computing within Kubernetes.
You can have pods which have multiple containers and provide shared storage and networking for those containers, but it's very common to see a one-container-one-pod architecture which, as the name suggests, means each pod contains only one container.
Now when you think about Kubernetes don't think about containers think about pods.
You're going to be working with pods and you're going to be managing pods.
The pods handle the containers within them.
Architecturally you would generally only run multiple containers in a pod when those containers are tightly coupled and require close proximity and rely on each other in a very tightly coupled way.
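To make the one-container-one-pod idea concrete, here's a minimal pod manifest written out as a file; the names and image are hypothetical examples:

```shell
# A minimal one-container pod definition.
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web-pod             # hypothetical name
  labels:
    app: web
spec:
  containers:
    - name: web             # one container per pod - the common pattern
      image: nginx:1.25
      ports:
        - containerPort: 80
EOF
# You would then run: kubectl apply -f pod.yaml
echo "pod.yaml written"
```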
Additionally although you'll be exposed to pods you'll rarely manage them directly.
Pods are non-permanent things.
In order to get the maximum value from Kubernetes you need to view pods as temporary things which are created, do a job and are then disposed of.
Pods can be deleted when finished, evicted for lack of resources or if the node itself fails.
They aren't permanent and aren't designed to be viewed as highly available entities.
There are other things linked to pods which provide more permanence but more on that elsewhere.
So now let's talk about what runs on the control plane.
Firstly I've already mentioned this one, the API, known formally as kube-apiserver.
This is the front end for the control plane.
It's what everything generally interacts with to communicate with the control plane and it can be scaled horizontally for performance and to ensure high availability.
Next we have etcd and this provides a highly available key value store.
So a simple database running within the cluster which acts as the main backing store for data for the cluster.
Another important control plane component is kube-scheduler and this is responsible for constantly checking for any pods within the cluster which don't have a node assigned.
And then it assigns a node to that pod based on resource requirements, deadlines, affinity or anti-affinity, data locality needs and any other constraints.
Remember nodes are the things which provide the raw compute and other resources to the cluster and it's this component which makes sure the nodes get utilized effectively.
Next we have an optional component, the cloud controller manager, and this is what allows Kubernetes to integrate with cloud providers.
It's common that Kubernetes runs on top of other cloud platforms such as AWS, Azure or GCP and it's this component which allows the control plane to closely interact with those platforms.
Now it is entirely optional and if you run a small Kubernetes deployment at home you probably won't be using this component.
Now lastly in the control plane is the kube-controller-manager and this is actually a collection of processes.
We've got the node controller which is responsible for monitoring and responding to any node outages, the job controller which is responsible for running pods in order to execute jobs, and the endpoints controller which populates Endpoints objects in the cluster; more on this in a second, but this is something that links services to pods.
Again I'll be covering this very shortly and then the service account and token controller which is responsible for account and API token creation.
Now again I haven't spoken about services or end points yet, just stick with me, I will in a second.
Now lastly, running on every node is something called kube-proxy, and this coordinates networking with the cluster control plane.
It helps implement services and configures rules allowing communications with pods from inside or outside of the cluster.
You might have a Kubernetes cluster but you're going to want some level of communication with the outside world and that's what kube-proxy provides.
Now that's the architecture of the cluster and nodes in a little bit more detail but I want to finish this introduction video with a few summary points of the terms that you're going to come across.
So let's talk about the key components. We start with the cluster and conceptually this is a deployment of Kubernetes; it provides management, orchestration, healing and service access.
Within a cluster we've got the nodes which provide the actual compute resources and pods run on these nodes.
A pod is one or more containers and is the smallest admin unit within kubernetes and often as I mentioned previously you're going to see the one container one pod architecture.
Simply put it's cleaner.
Now a pod is not a permanent thing, it's not long lived, the cluster can and does replace them as required.
Services provide an abstraction from pods so the service is typically what you will understand as an application.
An application can be containerized across many pods but the service is the consistent thing, the abstraction.
The service is what you interact with if you access a containerized application.
Now we've also got a job and a job is an ad hoc thing inside the cluster.
Think of it as the name suggests as a job.
A job creates one or more pods, runs until it completes, retries if required and then finishes.
Now jobs might be used as back end isolated pieces of work within a cluster.
Now something new that I haven't covered yet and that's ingress.
Ingress is how something external to the cluster can access a service.
So you have external users, they come into an ingress, that's routed through the cluster to a service, the service points at one or more pods which provides the actual application.
So an ingress is something that you will have exposure to when you start working with Kubernetes.
And next is an ingress controller and that's a piece of software which actually arranges for the underlying hardware to allow ingress.
For example there is the AWS Load Balancer Controller which uses application and network load balancers to allow the ingress, but there are also other controllers such as NGINX and others for various cloud platforms.
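As a sketch of how a service and an ingress tie together: the names are hypothetical, and the ingress class depends on which ingress controller you've installed in the cluster:

```shell
# A service selecting pods labelled app=web, and an ingress routing traffic to it.
cat > service-ingress.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web                # matches the pod labels
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx   # depends on the installed ingress controller
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc
                port:
                  number: 80
EOF
# You would then run: kubectl apply -f service-ingress.yaml
echo "service-ingress.yaml written"
```

So external users hit the ingress, the ingress routes to the service, and the service abstracts over whichever pods are currently running.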
Now finally and this one is really important, generally it's best to architect things within Kubernetes to be stateless from a pod perspective.
Remember pods are temporary.
If your application has any form of long running state then you need a way to store that state somewhere.
Now state can be session data but also data in the more traditional sense.
Any storage in Kubernetes by default is ephemeral provided locally by a node and thus if a pod moves between nodes then that storage is lost.
Conceptually think of this like instance store volumes running on AWS EC2.
Now you can configure persistent storage known as persistent volumes or PVs, and these are volumes whose life cycle lives beyond any one single pod which is using them, and this is how you would provision normal long-running storage to your containerized applications.
Now the details of this are a little bit beyond this introduction level video but I wanted you to be aware of this functionality.
Ok so that's a high level introduction to Kubernetes.
It's a pretty broad and complex product but it's super powerful when you know how to use it.
This video only scratches the surface.
If you're watching this as part of my AWS courses then I'm going to have follow up videos which step through how AWS implements Kubernetes with their EKS service.
If you're taking any of the more technically deep AWS courses then maybe other deep dive videos into specific areas that you need to be aware of.
So there may be additional videos covering individual topics at a much deeper level.
If there are no additional videos then don't worry because that's everything that you need to be aware of.
Thanks for watching this video, go ahead and complete the video and when you're ready I look forward to you joining me in the next.
learn.cantrill.io
Welcome back and in this lesson I want to quickly cover the theory of the Elastic Container Registry or ECR.
Now I want to keep the theory part brief because you're going to get the chance to experience this in practice elsewhere in the course and this is one of those topics which is much easier to show you via a demo versus covering the theory.
So I'm going to keep this as brief as possible so let's jump in and get started.
Well let's first look at what the Elastic Container Registry is.
Well it's a managed container image registry service.
It's like Docker Hub but for AWS so this is a service which AWS provide which hosts and manages container images and when I talk about container images I mean images which can be used within Docker or other container applications.
So think things like ECS or EKS.
Now within the ECR product we have public and private registries and each AWS account is provided with one of each so this is the top level structure.
Inside each registry you can have many repositories so you can think of these like repos within a source control system so think of Git or GitHub you can have many repositories.
Now inside each repository you can have many container images and container images can have several tags and these tags need to be unique within your repository.
Now in terms of the security architecture the differences between public and private registries are pretty simple.
First a public registry means that anyone can have read-only access to anything within that registry but read-write requires permissions.
The other side is that a private registry means that permissions are required for any read or read-write operations. So this means with a public registry anyone can pull but to push you need permissions, and for a private registry permissions are required for any operations.
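The typical private-registry workflow is authenticate, tag, push; a rough sketch follows, where the account ID, region and repository name are all placeholders:

```shell
# push-to-ecr.sh: authenticate Docker to a private ECR registry, then tag and push.
# The account ID, region and repo name below are placeholders - substitute your own.
cat > push-to-ecr.sh <<'EOF'
#!/bin/bash -xe
REGISTRY="123456789012.dkr.ecr.us-east-1.amazonaws.com"
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin "$REGISTRY"
docker tag myapp:latest "$REGISTRY/myapp:latest"
docker push "$REGISTRY/myapp:latest"
EOF
bash -n push-to-ecr.sh && echo "push script syntax OK"
```

Pulling from a private repository needs the same `docker login` step first; for a public registry, pulls need no authentication at all.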
So that's the high level architecture let's move on and talk about some of the benefits of the elastic container registry.
Well first and foremost it's integrated with IAM and this is logically for permissions so this means that all permissions controlling access to anything within the product are controlled using IAM.
Now ECR offers security scanning on images and this comes in two different flavors: basic and enhanced. Enhanced is a relatively new type of scanning and this uses the Inspector product.
Now this can scan looking for issues with both the operating system and any software packages within your containers and this works on a layer by layer basis so enhanced scanning is a really good piece of additional functionality that the product provides.
Now logically like many other AWS products ECR offers near real-time metrics and these are delivered into CloudWatch.
Now these metrics are for things like authentication or push or pull operations against any of the container images.
ECR also logs all API actions into CloudTrail and then also it generates events which are delivered into EventBridge and this can form part of an event-driven workflow which involves container images.
Now lastly ECR offers replication of container images and this is both cross region and cross account so these are all important features provided by ECR.
Now as I mentioned at the start of this lesson all I wanted to do is to cover the high-level theory of this product.
It's far easier to gain an understanding of the product by actually using it.
So elsewhere in the course you're going to get the chance to use ECR in some container-based workflows so you'll get the chance to push some container images into the product and pull them when you're deploying your container-based applications.
Now that's everything I wanted to cover in this video so go ahead and complete the video and when you're ready I look forward to you joining me in the next.
learn.cantrill.io
Welcome back and in this lesson I want to briefly discuss the two different cluster modes that you can use when running containers within ECS.
So that's EC2 mode and Fargate mode.
The cluster mode defines a number of things, but one of them is how much of the admin overhead surrounding running a set of container hosts you manage versus how much AWS manages.
So the technical underpinnings of both of these are important, but one of the main differentiating factors is what parts you're responsible for managing and what parts AWS manage.
There are some cost differences we'll talk about and certain scenarios which favor EC2 mode and others which favor Fargate mode and we'll talk about all of that inside this lesson.
At this level it's enough to understand the basic architecture of both of these modes and the situations where you would pick one over the other.
So we've got a lot to cover so let's jump in and get started.
The first cluster mode available within ECS is EC2 mode.
Using EC2 mode we start with the ECS management components so these handle high level tasks like scheduling, orchestration, cluster management and the placement engine which handles where to run containers so which container hosts.
Now these high level components exist in both modes so that's EC2 mode and Fargate mode.
With EC2 mode an ECS cluster is created within a VPC inside your AWS account.
Because an EC2 mode cluster runs within a VPC it benefits from the multiple availability zones which are available within this VPC.
For this example let's assume we have two AZs, AZ-A and AZ-B.
With EC2 mode EC2 instances are used to run containers and when you create the cluster you specify an initial size which controls the number of container instances and this is handled by an auto scaling group.
We haven't covered auto scaling groups yet in the course but there are ways that you can control horizontal scaling for EC2 instances so adding more instances when requirements dictate and removing them when they're not needed.
But for this example let's say that we have four container instances.
Now these are just EC2 instances: you will see them in your account, you'll be billed for them, you can even connect to them.
So it's important to understand that when these are provisioned you will be paying for them regardless of what containers you have running on them.
So with EC2 cluster mode you are responsible for these EC2 instances that are acting as container hosts.
Now ECS provisions these EC2 container hosts but there is an expectation that you will manage them generally through the ECS tooling.
So ECS using EC2 mode is not a serverless solution you need to worry about capacity and availability for your cluster.
ECS uses container registries and these are where your container images are stored.
Remember in a previous lesson I showed you how to store the container of cats image on Docker Hub and that's an example of a container registry.
AWS of course have their own which is called ECR I've previously mentioned that and you can choose to use that or something public like Docker Hub.
Now in the previous lesson I spoke about tasks and services which are how you direct ECS to run your containers.
Well tasks and services use images on container registries and via the task and service definitions inside ECS container images are deployed onto container hosts in the form of containers.
Now in EC2 mode ECS will handle certain elements of this. ECS will handle the number of tasks that are deployed if you utilize services and service definitions, but at a cluster level you need to be aware of and manage the capacity of the cluster, because the container instances are not something that's delivered as a managed service; they are just EC2 instances.
So ECS using EC2 mode offers a great middle ground. If you want to use containers in your infrastructure but you absolutely need to manage the container hosts' capacity and availability, then EC2 mode is for you. Because EC2 mode uses EC2 instances, if your business has reserved instances then you can use those, and you can use EC2 spot pricing, but you need to manage all of this yourself.
It's important to understand that with EC2 mode even if you aren't running any tasks or any services on your EC2 container hosts you are still paying for them while they're in a running state so you're expected to manage the number of container hosts inside an EC2 based ECS cluster.
So whilst ECS as a product takes away a lot of the management overhead of using containers in EC2 cluster mode you keep some of that overhead and some flexibility so it's a great middle ground.
Fargate mode for ECS removes even more of the management overhead of using containers within AWS.
With Fargate mode you don't have to manage EC2 instances for use as container hosts. As much as I hate using the term serverless, Fargate is a serverless cluster mode, which means you have no servers to manage, and because of this you aren't paying for EC2 instances regardless of whether you're using them or not.
Fargate mode uses the same surrounding technologies: you still have the Fargate service handling scheduling and orchestration, cluster management and placement, and you still use registries for the container images as well as task and service definitions to define tasks and services.
What differs is how containers are actually hosted.
Core to the Fargate architecture is the fact that AWS maintain a shared Fargate infrastructure platform.
This shared platform is offered to all users of Fargate but much like how EC2 isolates customers from each other so does Fargate.
You gain access to resources from a shared pool just like you can run EC2 instances on shared hardware but you have no visibility of other customers.
With Fargate you use the same task and service definitions and these define the image to use, the ports and how much resources you need but with Fargate these are then allocated to the shared Fargate platform.
You still have your VPC, a Fargate deployment still uses a cluster and a cluster uses a VPC which operates in availability zones.
In this example AZA and AZB.
Where it starts to differ though is for ECS tasks, which, remember, are now running on the shared infrastructure, but from a networking perspective they're injected into your VPC.
Each of the tasks is injected into the VPC and it gets given an elastic network interface.
This has an IP address within the VPC.
At that point they work just like any other VPC resource and they can be accessed from within that VPC and from the public internet if the VPC is configured that way.
So this is really critical for you to be aware of.
Tasks and services are actually running from the shared infrastructure platform and then they're injected into your VPC.
They're given network interfaces inside a VPC and it's using these network interfaces in that VPC that you can access them.
So if the VPC is configured to use public subnets, which automatically allocate public IPv4 addresses, then tasks and services can be given public IPv4 addressing.
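As a rough illustration of that injection architecture, this is the shape of the network configuration you would pass when launching a Fargate task with the AWS SDK. The subnet and security group IDs below are hypothetical placeholders, and the actual boto3 call is commented out because it needs real AWS credentials; treat this as a sketch, not a definitive deployment recipe.

```python
# Sketch of a Fargate task's network configuration, as accepted by
# boto3's ecs.run_task. IDs below are invented placeholders.
network_configuration = {
    "awsvpcConfiguration": {
        "subnets": ["subnet-0example1"],      # a public subnet in your VPC
        "securityGroups": ["sg-0example1"],
        "assignPublicIp": "ENABLED",          # give the task's ENI a public IPv4
    }
}

# On a real account you would then run something like:
# import boto3
# ecs = boto3.client("ecs")
# ecs.run_task(cluster="my-cluster", taskDefinition="my-task",
#              launchType="FARGATE",
#              networkConfiguration=network_configuration)
print(network_configuration["awsvpcConfiguration"]["assignPublicIp"])
```

Setting `assignPublicIp` to `DISABLED` instead would leave the task reachable only from within the VPC.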
Fargate offers a lot of customizability. You can deploy exactly how you want into either a new VPC or a custom VPC that you have designed and implemented in AWS.
With Fargate mode because tasks and services are running from the shared infrastructure platform you only pay for the containers that you're using based on the resources that they consume.
So you have no visibility of any hosts, and no host costs.
You don't need to manage hosts, provision hosts or think about capacity and availability.
That's all handled by Fargate.
You simply pay for the container resources that you consume.
Now one final thing before we move to a demo where we're going to implement a container inside a Fargate architecture.
For the exam and as a solutions architect in general you should be able to advise when a business or a team should use ECS.
There are actually three main options.
Using EC2 natively for an application.
So deploying an application as a virtual machine.
Using ECS in EC2 mode, so running a containerized application in ECS using an EC2-based cluster; or using a containerized application running in ECS, but in Fargate mode.
So there's a number of different options.
Picking between using EC2 and ECS should in theory be easy.
If you use containers then pick ECS.
If you're a business which already uses containers for anything then it makes sense to use ECS.
In the demo that we did earlier in this section we used an EC2 instance and Docker to create a Docker image.
That's an edge case though.
If you just want to quickly test containers you can use EC2 as a Docker host, but for any production usage it's almost never a good idea to do that.
The normal options are generally to run an application inside an operating system inside EC2 or to utilize ECS in one of these two different modes.
Containers in general make sense if you're just wanting to isolate your applications: applications which have low usage levels, applications which all use the same OS, or applications where you don't need the overhead of virtualization.
You would generally pick EC2 mode when you have a large workload and your business is price conscious.
If you care about price more than effort you'll want to look at using spot pricing or reserved pricing or make use of reservations that you already have.
Running your own fleet of EC2 based ECS hosts will probably be cheaper but only if you can minimize the admin overhead of managing them.
So scaling, sizing as well as correcting any faults.
So if you have a large, consistent workload, and you're heavily using containers, but you're a price-conscious organization, then potentially pick EC2 mode.
If you're overhead conscious even with large workloads then Fargate makes more sense.
Using Fargate means much less management overhead versus EC2 mode, because you don't have any exposure to container hosts or their management.
So even large workloads if you care about minimizing management overhead then use Fargate.
For small or burst style workloads Fargate makes sense because with Fargate you only pay for the container capacity that you use.
Having a fleet of EC2 based container hosts running constantly for non-consistent workloads just makes no sense.
It's wasting the capacity.
The same logic is true for batch or periodic workloads.
Fargate means that you pay for what you consume.
EC2 mode would mean paying for the container instances even when you don't use them.
Okay I hope this starts to make sense.
I hope the theory that we've covered starts to give you an impression for when you would use ECS and then when you do use the product how to distinguish between scenarios which suit EC2 mode versus Fargate mode.
So next up we have a demo lesson and I'm going to get you to take the container of cats docker image that we created together earlier in this section and run it inside an ECS Fargate cluster.
Configuring it practically is going to help a lot of the facts and architecture points that we've discussed through this section stick, and those facts sticking will be essential to being able to answer any container-based questions in the exam.
I know the container of cats is a simple example but the steps that you'll be performing will work equally well in something that's a lot more complex.
As we go through the course we're going to be revisiting ECS.
It will feature in some architectures for the animals for life business later in the course.
For now though I want you to just be across the fundamentals.
Enough to get started with ECS and enough for the associate AWS exams.
So go ahead complete this lesson and when you're ready you can join me in the next lesson which will be an ECS Fargate demo.
-
Welcome back and in this lesson I want to introduce the Elastic Container Service or ECS.
In the previous lesson you created a Docker image and tested it by running up a container, all using the Docker container engine running on an EC2 instance.
And this is always something that you can do with AWS.
But ECS is a product which allows you to use containers running on infrastructure which AWS fully manage or partially manage.
It takes away much of the admin overhead of self managing your container hosts.
ECS is to containers what EC2 is to virtual machines.
And ECS uses clusters which run in one of two modes.
EC2 mode which uses EC2 instances as container hosts.
And you can see these inside your account.
They're just normal EC2 hosts running the ECS software.
We've also got Fargate mode, which is a serverless way of running Docker containers, where AWS manage the container host part and just leave you to define and architect your environment using containers.
Now in this lesson I'll be covering the high level concepts of the product which apply to both of those modes.
And then in the following lesson I'll talk in a little bit more detail about EC2 mode and Fargate mode.
So let's jump in and get started.
ECS is a service that accepts containers and some instructions that you provide and it orchestrates where and how to run those containers.
It's a managed container based compute service.
I mentioned this a second ago but it runs in two modes, EC2 and Fargate which radically changes how it works under the surface.
But for what I need to talk about in this lesson we can be a little abstract and say that ECS lets you create a cluster.
I'll cover the different types of cluster architectures in the following lesson.
For now it's just a cluster.
Clusters are where your containers run from.
You provide ECS with a container image and it runs that in the form of a container in the cluster based on how you want it to run.
But let's take this from the bottom up architecturally and just step through how things work.
First you need a way of telling ECS about container images that you want to run and how you want them to be run.
Containers are all based on container images as we talked about earlier in this section.
These container images will be located on a container registry somewhere and you've seen one example of that with the Docker Hub.
Now AWS also provide a registry.
It's called the Elastic Container Registry or ECR and you can use that if you want.
ECR has a benefit of being integrated with AWS so all of the usual permissions and scalability benefits apply.
But at its heart it is just a container registry.
You can use it or use something else like Docker Hub.
To tell ECS about your container images you create what's known as a container definition.
The container definition tells ECS where your container image is.
Logically it needs that.
It tells ECS which port your container uses.
Remember in the demo we exposed port 80 which is HTTP and so this is defined in the container definition as well.
The container definition provides just enough information about the single container that you want to define.
Then we have task definitions and a task in ECS represents a self-contained application.
A task could have one container defined inside it or many.
A very simple application might use a single container, just like the container of cats application that we demoed in the previous lesson.
Or it could use multiple containers, maybe a web app container and a database container.
A task in ECS represents the application as a whole and so it stores whatever container definitions are used to make up that one single application.
I remember the difference by thinking of the container definition as just a pointer to where the container is stored and what port is exposed.
The rest is defined in the task definition.
At the associate level this is easily enough detail but if you do want extra detail on what's stored in the container definition versus the task definition I've included some links attached to this lesson which give you an overview of both.
Task definitions store the resources used by the task so CPU and memory.
They store the networking mode that the task uses.
They store the compatibility so whether the task will work on EC2 mode or Fargate.
And one of the really important things which the task definition stores is the task role.
A task role is an IAM role that a task can assume and when the task assumes that role it gains temporary credentials which can be used within the task to interact with AWS resources.
Task roles are the best practice way of giving containers within ECS permissions to access AWS products and services, and remember that one because it will come up in at least one exam question.
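Putting those pieces together, here's a minimal sketch of an ECS task definition with a single container definition inside it. All names and ARNs are hypothetical placeholders, and only a handful of the available fields are shown; the point is where each piece of information lives.

```python
# A minimal, Fargate-compatible ECS task definition with one container
# definition inside it. All names/ARNs are invented placeholders.
task_definition = {
    "family": "container-of-cats",
    "requiresCompatibilities": ["FARGATE"],  # compatibility: EC2 and/or Fargate
    "networkMode": "awsvpc",                 # networking mode (required for Fargate)
    "cpu": "256",                            # task-level resources
    "memory": "512",
    # The task role: an IAM role the containers in this task can use.
    "taskRoleArn": "arn:aws:iam::111111111111:role/catsTaskRole",
    "containerDefinitions": [
        {
            # The container definition: essentially a pointer to the image
            # plus the ports that are exposed.
            "name": "web",
            "image": "docker.io/someuser/container-of-cats:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}

print(task_definition["containerDefinitions"][0]["image"])
```

With real credentials you could register this via `boto3.client("ecs").register_task_definition(**task_definition)`; here it just illustrates the container definition (image and ports) nested inside the task definition (resources, networking mode, compatibility and task role).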
When you create a task definition within the ECS UI you actually create a container definition along with it but from an architecture perspective I definitely wanted you to know that they're actually separate things.
This is further confused by the fact that a lot of tasks that you create inside ECS will only have one container definition, and that's going to be the case with the container of cats demo that we're going to be doing at the end of this section, when we deploy our Docker container into ECS.
But tasks and containers are separate things.
A task can include one or more containers.
A lot of tasks do include one container which doesn't help with the confusion.
Now a task in ECS doesn't scale on its own, and it isn't by itself highly available. That's where the last concept that I want to talk about in this lesson comes in handy: an ECS service, which you configure via a service definition.
A service definition defines a service, and a service is how, in ECS, we define how we want a task to scale and how many copies we'd like to run.
It can add capacity and it can add resilience, because we can have multiple independent copies of our task running, and you can deploy a load balancer in front of a service so the incoming load is distributed across all of the tasks inside that service.
So for tasks that you're running inside ECS which are long-running and business critical, you would generally use a service to provide that level of scalability and high availability.
It's the service that lets you configure replacing failed tasks or scaling or how to distribute load across multiple copies of the same task.
Now we're not going to be using a service when we demo the container of cats demo at the end of this section because we'll only be wanting to run a single copy.
You can run a single copy of a task on its own but it's the service wrapper that you use if you want to configure scaling and high availability.
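To make the service wrapper concrete, here's a hedged sketch of the parameters a service definition boils down to when created via the AWS SDK. All names and ARNs are invented placeholders, and the actual API call is commented out because it needs real AWS credentials.

```python
# Sketch of parameters for boto3's ecs.create_service: run five copies of a
# task behind a load balancer. Names/ARNs are hypothetical placeholders.
service_params = {
    "cluster": "my-cluster",
    "serviceName": "cats-service",
    "taskDefinition": "container-of-cats:1",  # a specific task definition revision
    "desiredCount": 5,                        # the service keeps 5 task copies running
    "launchType": "FARGATE",
    "loadBalancers": [{
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/cats/abc123",
        "containerName": "web",
        "containerPort": 80,
    }],
}

# With real credentials: boto3.client("ecs").create_service(**service_params)
print(service_params["desiredCount"])
```

If one of the five tasks fails, the service notices the count has dropped below `desiredCount` and replaces it, which is the self-healing and scaling behaviour described above.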
And it's tasks or services that you deploy into an ECS cluster, and this applies equally whether it's EC2 based or Fargate based.
I'll be talking about the technical differences between those two in the next lesson.
But for now the high level building blocks are the same.
You create a cluster and then you deploy tasks or services into that cluster.
Now just to summarize a few of the really important points that I've talked about in this lesson.
First is the container definition.
This defines the image and the ports that will be used for a container.
It basically points at a container image that's stored on a container registry and it defines which ports are exposed from that container.
It does other things as well and I've included a link attached to this lesson which gives you a full overview of what's defined in the container definition.
But at the associate level all you need to remember is a container definition defines which image to use for a container and which ports are exposed.
A task definition applies to the application as a whole.
It can be a single container, so a single container definition or multiple containers and multiple container definitions.
But it's also the task definition where you specify the task role, so the security that the containers within a task get.
What can they access inside AWS?
It's an IAM role that's assumed and the temporary credentials that you get are what the containers inside the task can use to access AWS products and services.
So task definitions include this task role, the containers themselves, and you also specify at a task definition level the resources that your task is going to consume.
And I'll be showing you that in the demo lesson at the end of this section.
The task role, obviously I've just talked about, is the IAM role that's assumed by anything that's inside the task.
So the task role can be used by any of the containers running as part of a task.
And that's the best practice way that individual containers can access AWS products and services.
And then finally we've got services and service definitions and this is how you can define how many copies of a task you want to run.
And that's both for scaling and high availability.
So you can use a service and define that you want, say, five copies of a task running.
You can put a load balancer in front of those five individual tasks and distribute incoming load across those.
So it's used for scaling and for high availability, and you can control other things inside a service, such as restarts and certain monitoring features that you've got access to in there.
And services are generally what you use if it's a business critical application or something in production that needs to cope with substantial incoming load.
In the demo that's at the end of this section, we won't be using a service, we'll be just deploying a task into our ECS cluster.
With that being said, though, that's all the high level ECS concepts that I wanted to talk about in this lesson.
It's just enough to get you started so the next lesson makes sense.
Then, when you do the demo and get some practical experience with ECS, everything will start to click.
At this point, though, go ahead, complete this video.
And when you're ready, you can join me in the next lesson where I'll be talking about the different ECS cluster modes.
-
Welcome back.
This section will be focusing on another type of compute, container computing.
To understand the benefits of the AWS products and services which relate to containers, you'll need to understand what containers are and what benefits container computing provides.
In this lesson, I aim to teach you just that.
It's all theory in this lesson, but immediately following this is a demo lesson where you'll have the chance to make a container yourself.
We've got a lot to get through though, so let's jump in and get started.
Before we start talking about containers, let's set the scene.
What we refer to as virtualization should really be called operating system or OS virtualization.
It's the process of running multiple operating systems on the same physical hardware.
I've already covered the architecture earlier in the course, but as a refresher, we've got an AWS EC2 host running the Nitro hypervisor.
Running on this hypervisor, we have a number of virtual machines.
Part of this lesson's objectives is to understand the difference between operating system virtualization and containers.
And so the important thing to realize about these virtual machines is that each of them is an operating system with associated resources.
What's often misunderstood is just how much of a virtual machine is taken up by the operating system alone.
If you run a virtual machine with say 4GB of RAM and a 40GB disk, the operating system can easily consume 60 to 70% of the disk and much of the available memory, leaving relatively little for the applications which run in those virtual machines as well as the associated runtime environments.
So with the example on screen now, it's obvious that the guest operating system consumes a large percentage of the amount of resource allocated to each virtual machine.
Now what's the likelihood with the example on screen that many of the operating systems are actually the same?
Think about your own business servers: how many run Windows, how many run Linux, and how many do you think share the same major operating system version?
This is duplication.
On this example, if all of these guest operating systems used the same or similar operating system, it's wasting resources, it's duplication.
And what's more, with these virtual machines, the operating system consumes a lot of system resources.
So every operation that relates to these virtual machines, every restart, every stop, every start is having to manipulate the entire operating system.
If you think about it, what we really want to do with this example is to run applications one through to six in separate isolated protected environments.
To do this, do we really need six copies of the same operating system taking up disk space and host resources?
Well, the answer is no, not when we use containers.
Containerization handles things much differently.
We still have the host hardware, but instead of virtualization, we have an operating system running on this hardware.
Running on top of this is a container engine.
And you might have heard of a popular one of these called Docker.
A container in some ways is similar to a virtual machine in that it provides an isolated environment which an application can run within.
But where virtual machines run a whole isolated operating system on top of a hypervisor, a container runs as a process within the host operating system.
It's isolated from all of the other processes, but it can use the host operating system for a lot of things like networking and file I/O.
For example, if the host operating system was Linux, it could run Docker as a container engine.
Linux plus the Docker container engine can run a container.
That container would run as a single process within that operating system, potentially one of many.
But inside that process, it's like an isolated operating system.
It has its own file system, isolated from everything else, and it can run child processes inside it, which are also isolated from everything else.
So a container could run a web server or an application server and do so in a completely isolated way.
What this means is that architecturally, a container would look something like this, something which runs on top of the base OS and container engine, but consumes very little memory.
In fact, the only consumption of memory or disk is for the application and any runtime environment elements that it needs.
So libraries and dependencies.
The operating system could run lots of other containers as well, each running an individual application.
So using containers, we achieve this architecture, which looks very much like the architecture used on the previous example, which use virtualization.
We're still running the same six applications.
But the difference is that because we don't need to run a full operating system for each application, the containers are much lighter than the virtual machines.
And this means that we can run many more containers on the same hardware versus using virtualization.
This density, the ability to run more applications on a single piece of hardware is one of the many benefits of containers.
Let's move on and look at how containers are architected.
I want you to start off by thinking about what an EC2 instance actually is.
And what it is is a running copy of its EBS volumes, its virtual disks.
An EC2 instance's boot volume is used: it's booted, and using this you end up with a running copy of an operating system running in a virtualized environment.
A container is no different in this regard.
A container is a running copy of what's known as a Docker image.
Docker images are really special, though.
One of the reasons why they're really cool technology-wise is they're actually made up of multiple independent layers.
So Docker images are stacks of these layers and not a single monolithic disk image.
And you'll see why this matters very shortly.
Docker images are created initially by using a Docker file.
And this is an example of a simple Docker file which creates an image with a web server inside it ready to run.
So this Docker file creates this Docker image.
Each line in a Docker file is processed one by one and each line creates a new file system layer inside the Docker image that it creates.
Let's explore what this means and it might help to look at it visually.
All Docker images start off being created either from scratch or they use a base image.
And this is what this top line controls.
In this case, the Docker image we're making uses CentOS 7 as its base image.
Now this base image is a minimal file system containing just enough to run an isolated copy of CentOS.
All this is is a super thin image of a disk.
It just has the basic minimal CentOS 7 base distribution.
And so that's what the first line of the Docker file does.
It instructs Docker to create our Docker image using as a basis this base image.
So the first layer of our Docker image, the first file system layer is this basic CentOS 7 distribution.
The next line performs some software updates and it installs our web server, Apache in this case.
And this adds another layer to the Docker image.
So now our image is two layers, the base CentOS 7 image and a layer which just contains the software that we've just installed.
This is critical in Docker: the file system layers that make up a Docker image are normally read only.
So every change you make is layered on top as another layer.
Each layer contains the differences made when creating that layer.
So then we move on in our Docker file and we have some slight adjustments made at the bottom.
It's adding a script which creates another file system layer for a total of three.
And this is how a Docker image is made.
It starts off either from scratch or using a base layer and then each set of changes in the Docker file adds another layer with just those changes in.
And the end result is a Docker image that we can use which consists of individual file system layers.
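A Dockerfile along the lines just described might look like the sketch below. The exact file shown in the lesson isn't reproduced here, so treat the specific package and file names as illustrative assumptions rather than the lesson's actual file.

```dockerfile
# Layer 1: start from the CentOS 7 base image
FROM centos:7

# Layer 2: apply software updates and install the Apache web server
RUN yum -y update && yum -y install httpd

# Layer 3: final customisation; index.html is an assumed example file
COPY index.html /var/www/html/index.html

# Metadata: expose HTTP and run Apache in the foreground when a container starts
EXPOSE 80
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]
```

Each `FROM`, `RUN` and `COPY` instruction produces one of the file system layers discussed above; `EXPOSE` and `ENTRYPOINT` only add metadata, not layers of file changes.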
Now strictly speaking, the layers in this diagram are upside down.
A Docker image consists of layers stacked on each other starting with the base layer.
So the layer in red at the bottom and then the blue layer which includes the system updates and the web server should be in the middle and the final layer of customizations in green should be at the top.
It was just easier to diagram it in this way but in actuality it should be reversed.
Now let's look at what images are actually used for.
A Docker image is how we create a Docker container.
In fact, a Docker container is just a running copy of a Docker image with one crucial difference.
A Docker container has an additional read/write file system layer.
The layers that make up a Docker image are read only by default; they never change after they're created.
And so this special read/write layer is added, and it's what allows containers to run.
Anything which happens in the container, such as log files being generated or an application generating or reading data, is stored in the read/write layer of the container.
Each layer is differential and so it stores only the changes made against it versus the layers below.
Together all stacked up they make what the container sees as a file system.
But here is where containers become really cool because we could use this image to create another container, container two.
This container is almost identical.
It uses the same three base layers.
So the CentOS 7 layer in red beginning AB, the web server and updates that are installed in the middle blue layer beginning 8-1 and the final customization layer in green beginning 5-7.
They're both the same in both containers.
The same layers are used so we don't have any duplication.
They're read only layers anyway and so there's no potential for any overwrites.
The only difference is the read write layer which is different in both of these containers.
That's what makes the container separate and keeps things isolated.
Now in this particular case if we're running two containers using the same base image then the difference between these containers could be tiny.
So rather than virtual machines, which have separate disk images which could be tens or hundreds of gigabytes, containers might only differ by a few megabytes in each of their read/write layers.
The rest is reused between both of these containers.
Now this example has two containers but what if it had 200?
The reuse architecture that's offered by the way that containers do their disk images scales really well.
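You can model this layering with a tiny Python sketch; this is purely an analogy (using `collections.ChainMap`), not how Docker is actually implemented. The read-only layers are shared between containers, and each container only adds its own thin writable layer on top.

```python
from collections import ChainMap

# Read-only image layers, shared by every container using this image.
base_os    = {"/bin/sh": "centos7-shell"}
web_server = {"/usr/sbin/httpd": "apache"}
customise  = {"/var/www/html/index.html": "cats"}

# Each container = the shared image layers + its own private read/write layer.
# ChainMap searches layers top-down and writes only to the first (top) one.
c1_rw, c2_rw = {}, {}
container1 = ChainMap(c1_rw, customise, web_server, base_os)
container2 = ChainMap(c2_rw, customise, web_server, base_os)

# A write lands in container1's read/write layer only.
container1["/var/log/access.log"] = "hits from container 1"

print("/var/log/access.log" in container1)  # True
print("/var/log/access.log" in container2)  # False: isolated read/write layers
print(container2["/usr/sbin/httpd"])        # "apache", from the shared layer
```

However many containers you add, the three image-layer dicts exist once; only the small per-container read/write dict is duplicated, which mirrors why container disk usage scales so well.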
Disk usage when you have lots of containers is minimized because of this layered architecture, and the base layers, the operating systems, are generally made available by the operating system vendors via something called a container registry, and a popular one of these is known as the Docker Hub.
The function of a container registry is almost revealed in the name.
It's a registry or a hub of container images.
As a developer or architect you make or use a Docker file to create a container image, and then you upload that image to a private repository or a public one such as the Docker Hub. For public hubs, other people will likely do the same, including vendors of the base operating system images, such as the CentOS example I was just talking about.
From there these container images can be deployed to Docker hosts which are just servers running a container engine in this case Docker.
Docker hosts can run many containers based on one or more images and a single image can be used to generate containers on many different Docker hosts.
Remember, a container is a single thing. You or I could take a container image and both use it to generate a container; that's one container image which can generate many containers, and each of these is completely unique because of the read/write layer that a container gets the sole use of.
Now you can use the Docker Hub to download container images but also upload your own.
Private registries can require authentication but public ones are generally open to the world.
Now I have to admit I have a bad habit when it comes to containers.
I'm usually all about precision in the words that I use but I've started to use Docker and containerization almost interchangeably.
In theory a Docker container is one type of container a Docker host is one type of container host and the Docker Hub is a type of container hub or a type of container registry operated by the company Docker.
Now even if I start to use these terms interchangeably (I'll try not to), because of the popularity of Docker and Docker containers you'll tend to find that people say Docker when they actually mean containers, so keep an eye out for that one.
Now the last thing before we finish up and go to the demo I just want to cover some container key concepts just as a refresher.
You've learned that Docker files are used to build Docker images and Docker images are these multi-layer file system images which are used to run containers.
Containers are a great tool for any solutions architect because they're portable and they always run as expected.
If you're a developer and you have an application, and you put that application and all of its libraries into a container, you know that anywhere there is a compatible container host, that application can run exactly as you intended, with the same software versions.
Portability and consistency are two of the main benefits of using containerized computing.
Containers and images are super lightweight; they use the host operating system for the heavy lifting, but are otherwise isolated.
Layers used within images can be shared and images can be based off other images.
Layers are read only and so an image is basically a collection of layers grouped together which can be shared and reused.
If you have a large container environment you could have hundreds or thousands of containers using a smaller set of container images, and each of those images could be sharing these base file system layers to really save on capacity. So if you've got larger environments, you can significantly save on capacity and resource usage by moving to containers.
Containers only run what's needed so the application and whatever the application itself needs.
Containers run as a process in the host operating system and so they don't need to be a full operating system.
Containers use very little memory and, as you will see, they're super fast to start and stop, and yet they provide much the same level of isolation as virtual machines. So if you don't really need a full and isolated operating system, you should give serious thought to using containerization, because it has a lot of benefits, not least the density that you can achieve.
Containers are isolated, and so anything running in them needs to be exposed to the outside world.
Containers can expose ports, such as TCP port 80 which is used for HTTP, and when you expose a container port, the services that the container provides can be accessed from the host and from the outside world.
It's also important to understand that some more complex application stacks can consist of multiple containers.
You can use multiple containers in a single architecture, either to scale a specific part of the application or when you're using multiple tiers; so you might have a database container and an application container, and these might work together to provide the functionality of the application.
Okay so that's been a lot of foundational theory and now it's time for a demo.
In order to understand AWS's container compute services you need to understand how containers work.
This lesson has been the theory and the following demo lesson is where you will get some hands-on time by creating your own container image and container.
It's a fun way to give you some experience so I can't wait to step you through it.
At this point then, go ahead and finish this video, and when you're ready you can join me in the demo lesson.
-
Welcome back.
In this lesson, I want to talk about a really important feature of EC2 called Instance Metadata.
It's a very simple architecture, but it's one that's used in many of EC2's more powerful features.
So it's essential that you understand its architecture fully.
It features in nearly all of the AWS exams and you will use it often if you design and implement AWS solutions in the real world.
So let's jump in and get started.
The EC2 Instance Metadata is a service that EC2 provides to instances.
It's data about the instance that can be used to configure or manage a running instance.
It's a way the instance or anything running inside the instance can access information about the environment that it wouldn't be able to access otherwise.
And it's accessible inside all instances using the same access method.
The IP address to access the instance metadata is 169.254.169.254.
Remember that IP.
It comes up all the time in exams.
Make sure it sticks.
I'll repeat it as often as I can throughout the course, but it's unusual enough that it tends to stick pretty well.
Now, the way that I've remembered the IP address from when I started with AWS is just to keep repeating it.
Repetition always helps.
And I remember this one as a little bit of a rhyme.
169.254 repeated.
And if you just keep repeating that over and over again, then the IP address will stick.
So 169.254 repeated equals 169.254.169.254.
And then for the next part of the URL, I always want the latest meta-data.
If you remember 169.254 repeated and you always want the latest meta-data, it will tend to stick in your mind.
At least it did for me.
Now, I've seen horrible exam questions which make you actually select the exact URL for this metadata.
So this is one of those annoying facts that I just need you to memorize.
I promise you it will help you with exam questions in the exam.
So try to memorize the IP and latest meta-data.
If you remember both of those, keep repeating them.
Say them over and over again until it gets annoying.
Write them on flashcards.
It will help you in the exam.
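If it helps the memorization, the full URL really is just those two parts joined together. Here's a trivial Python sketch, pure string assembly with no network access:

```python
# "169.254 repeated" plus "latest/meta-data" gives the metadata base URL.
IMDS_IP = "169.254.169.254"                      # link-local, same on every instance
IMDS_BASE = f"http://{IMDS_IP}/latest/meta-data"

print(IMDS_BASE)  # http://169.254.169.254/latest/meta-data
```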
Now, the metadata allows anything running on the instance to query it for information about that instance and that information is divided into categories.
For example, host name, events, security groups and much more.
All information about the environment that the instance is in.
The most common things which can be queried though are information on the networking and I'll show you this in the demo part of this lesson.
While the operating system of an instance can't see any of its IP version 4 public addressing, the instance metadata can be used by applications running on that instance to get access to that information, and I'll show you that soon.
You can also gain access to authentication information.
We haven't covered EC2 instance roles yet, but instances can be themselves given permissions to access AWS resources and the meta-data is how applications on the instance can gain access to temporary credentials generated by assuming the role.
The meta-data service is also used by AWS to pass in temporary SSH keys.
So when you connect to an instance using EC2 instance connect, it's actually passing in an SSH key behind the scenes that's used to connect.
The meta-data service is also used to grant access to user data and this is a way that you can make the instance run scripts to perform automatic configuration steps when you launch an instance.
Now one really important fact for the exam, and I've seen questions come up on this time and time again: the metadata service has no authentication and it's not encrypted.
Anyone who can connect to an instance and gain access to the Linux command line shell can by default access the meta-data.
You can restrict it with local firewall rules, so blocking access to the 169.254 repeated IP address, but that's extra per instance admin overhead.
In general, you should treat the meta-data as something that can and does get exposed.
Okay, well that's the architecture, it's nice and simple, but this is one of the things inside AWS which is much easier to show you than to tell you about.
So it's time for a demo and we're going to perform a demo together which uses the instance meta-data of an EC2 instance.
So let's switch over to the console and get started.
Now if you do want to follow along with this in your own environment, then you'll need to apply some infrastructure.
Before you do that, just make sure that you're logged in to the general AWS account, so the management account of the organization, and make sure as always that you have the Northern Virginia region selected.
Now this lesson has a one-click deployment link attached to it, so go ahead and click that link.
This will take you to the quick create stack screen.
You should see that the stack name is called meta-data, just scroll down to the bottom, check this box and click on create stack.
Now this will automatically create all of the infrastructure which we'll be using, so you'll need to wait for this stack to move into a create complete state.
We're also going to be using some commands within this demo lesson and also attached to this lesson is a lesson commands document which includes all of the commands that you'll be using.
So this will help you avoid errors.
You can either type these out manually or copy and paste them as I do them in the demo.
So at this point go ahead and open that link as well.
It should look something like this.
There's not that many commands that we'll be using, but they are relatively long and so by using this document we can avoid any typos.
Now just refresh this stack.
Again it will need to be in a create complete state, so go ahead and pause this video, wait for the stack to move into create complete, and then we're good to continue.
Okay so now the stack's moved into a create complete state, and if you just go ahead and click on resources you can see that it's created a selection of resources.
Now the one that we're concerned with is public EC2 which is an EC2 instance running in a public subnet with public IP addressing.
So we're going to go ahead and interact with this instance.
So click on services and then go ahead and move to the EC2 console.
You can either select it in all services, recently visited if you've used this service before or you can type EC2 into the search box and then open it in a new tab.
Once you're at the EC2 console go ahead and click on instances running and you should see this single EC2 instance.
Go ahead and select it and I just want to draw your attention to a number of key pieces of information which I want you to note down.
So first you'll be able to see that the instance has a private IP version 4 address.
Yours may well be different if you're doing this within your own environment.
You'll also see that the instance has a public IP version 4 address and again if you're doing this in your environment yours will be different.
Now if you click on networking you'll be able to see additional networking information including the IP version 6 address that's allocated to this instance.
Now the IP version 6 address is always public and so there's no concept of public and private IP version 6 addresses but you'll be able to see that address under the networking tab.
Now to make this easier, go ahead and note down the IP version 6 address, as well as the public IP version 4 DNS which is listed, the public IP version 4 address which is listed at the top, and then the private IP version 4 address.
And once you've got all these noted down we're going to go ahead and connect to this instance.
So right click and select connect. We're going to use EC2 Instance Connect, so make sure that the username is ec2-user, and then connect to this instance.
Now once we've connected, straight away we'll be able to see how even the prompt of the instance makes visible the private IP version 4 address of this EC2 instance, and if we run the Linux command ifconfig and press enter, we'll get an output of the network interfaces within this EC2 instance.
Now we'll be able to see the private IP version 4 address listed within the configuration of this network interface inside the EC2 instance and if you're performing this in your own environment notice how it's exactly the same as the private IP version 4 address that you just noted down which was visible inside the console UI.
So in my case you'll be able to see these two IP addresses match perfectly.
So this IP address that's visible in the console UI is the same as this private IP address configured on the network interface inside the instance.
The same is true of the IP version 6 IP address.
This is also visible inside the operating system on the network configuration for this network interface and again that's the same IP version 6 address which is visible on the networking tab inside the console UI.
So that's the same as this address.
What isn't visible inside the instance operating system on the networking configuration is the public IP version 4 address.
It's critical to know that at no point ever during the life cycle of an EC2 instance is a public IP version 4 address configured within the operating system.
The operating system has no exposure to the public IP version 4 address.
That is performed by the internet gateway.
The internet gateway translates the private address into a public address.
So while IP version 6 is configured inside the operating system, IP version 4 public addresses are not.
The only IP version 4 addresses that an instance has are the private IP addresses and that's critical to understand.
Now as I talked about in the theory component of this lesson, the EC2 metadata service is a service which runs behind all of the EC2 instances within your account and it's accessible using the metadata IP address.
Now we can access this by using the curl utility.
Now curl is installed on the EC2 instance that we're using for this demo.
Now we're going to query the metadata service for one particular attribute and that attribute is the public IP version 4 address of this instance.
So because the instance operating system has no knowledge of the public IP address, we can use the metadata service to provide any scripts or applications running on this instance with visibility of this public IP version 4 address and we do that using this command.
So this uses curl to query the metadata service which is 169.254.169.254.
I refer to this as 169.254 repeating.
So it queries this IP address followed by /latest/meta-data. This entire part, the IP address, then latest, then meta-data, is the metadata URL, and at the end we specify the attribute which we want to retrieve, which is public-ipv4. If we press enter, curl contacts the metadata service and retrieves the public IP version 4 address of this EC2 instance.
So in my case this is the IP address and if I go back to the console this matches the address that's visible within the console UI.
So if I just clear the screen to make it easier to see, we can also use the same command structure again, but this time query for the public hostname of this EC2 instance.
We use the same URL, so IP address and path, but this time we query for public-hostname, and this will give us the IPv4 public DNS of this EC2 instance.
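The same queries can also be scripted. Here's a hedged Python sketch where the fetch function is injected so the URL-building logic can be demonstrated off-instance; on a real instance, urllib with a short timeout would do the fetching, since the service is only reachable from EC2:

```python
from typing import Callable

IMDS_BASE = "http://169.254.169.254/latest/meta-data"

def imds_url(attribute: str) -> str:
    """Build the full metadata URL for an attribute such as 'public-ipv4'."""
    return f"{IMDS_BASE}/{attribute}"

def query_metadata(attribute: str, fetch: Callable[[str], str]) -> str:
    """Query IMDS via an injected fetch function.

    On a real instance, fetch could be:
    lambda url: urllib.request.urlopen(url, timeout=2).read().decode()
    """
    return fetch(imds_url(attribute))

# Off-instance demonstration using a stubbed fetch (the address is made up):
fake = {"http://169.254.169.254/latest/meta-data/public-ipv4": "54.1.2.3"}
print(query_metadata("public-ipv4", fake.__getitem__))  # 54.1.2.3
```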
So again I'm going to clear the screen to make it easier to see.
Now we can make this process even easier.
We can use the AWS instance metadata query tool and to download it we use this command so enter it and press enter.
This downloads the tool directly, so if we do a listing of the current folder we can see the ec2-metadata tool. Because this is Linux, we need to make this tool executable.
We do that with the chmod command, so enter that and press enter, and then we can run the ec2-metadata tool with --help to display help for this product.
So this shows all the different information that we can use this tool to query for and this just makes it easier to query the metadata service especially if the query is being performed by users running interactively on that EC2 instance.
So for example we could run ec2-metadata -a to show the AMI ID that was used to launch this instance, and in this case it's the AMI for Amazon Linux 2 inside the us-east-1 region, at least at the time of creating this demo video.
If we need to show the availability zone that this instance is in, we could use ec2-metadata -z; in this case the instance is in us-east-1a. And we can even use ec2-metadata -s to show any security groups which were launched with this instance.
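The tool's flags line up with paths under the metadata URL. The exact path mapping below is an assumption based on what the demo queries, not taken from the tool's source, but it shows the relationship:

```python
# Assumed mapping between ec2-metadata flags and the metadata paths
# they query under /latest/meta-data/ (illustrative, not authoritative).
FLAG_TO_PATH = {
    "-a": "ami-id",
    "-z": "placement/availability-zone",
    "-s": "security-groups",
}

def path_for_flag(flag: str) -> str:
    return FLAG_TO_PATH[flag]

print(path_for_flag("-z"))  # placement/availability-zone
```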
Now you can carry on exploring this tool if you want there are plenty of other pieces of information which are accessible using the metadata tool.
I just wanted to give you a brief introduction show you how to download it how to make it executable and how to run some of the basic options.
Now at this point that's everything I wanted to cover in this brief demo component of this lesson.
I wanted to give you some exposure to how you can interact with the metadata service which I covered from a theory perspective earlier in this lesson.
Now at this point we need to clear up all of the infrastructure that we've used for this demo component. So close down this tab, go back to the AWS console, move to CloudFormation, select the metadata stack, select delete and then confirm it. That will clear up all of the infrastructure that we've used and return the account to the same state as it was at the start of this demo component of this video.
Now at that point that's everything I wanted to cover you've learned about the theory of the metadata service as well as experienced how to interact with it from a practical perspective.
So go ahead and complete this video, and when you're ready, I'll look forward to you joining me in the next lesson.
-
Welcome back.
In this lesson, I want to cover a little bit more theory.
It's something which you'll need to understand from now on in the course because the topics that we'll discuss and the examples that we'll use to fully understand those topics will become ever more complex.
Horizontal and vertical scaling are two different ways that a system can scale to handle increasing or in some cases decreasing load placed on that system.
So let's quickly step through the difference and look at some of the pros, cons and requirements of each.
Scaling is what happens when systems need to grow or shrink in response to increases or decreases of load placed upon them by your customers.
From a technical perspective, you're adding or removing resources to a system.
A system can in some cases be a single compute device such as an EC2 instance, but in some cases could be hundreds, thousands, tens of thousands, or even hundreds of thousands, or more of individual devices.
Vertical scaling is one way of achieving this increase in capacity, so this increase of resource allocation.
The way it works is simple.
Let's say for example we have an application and it's running on an EC2 instance, and let's say that it's a t3.large, which provides two virtual CPUs and 8 GiB of memory.
The instance will service a certain level of incoming load from our customers, but at some point assuming the load keeps increasing, the size of this instance will be unable to cope and the experience for our customers will begin to decrease.
Customers might experience delays, unreliability, or even outright system crashes.
So the commonly understood solution is to use a bigger server.
In the virtual world of EC2, this means resizing the EC2 instance.
We have lots of sizes to choose from.
We might pick a t3.xlarge, which doubles the virtual CPUs and memory, or if the rate of increase is significant, we could go even further and pick another size up, a t3.2xlarge, which doubles that again to eight virtual CPUs and 32 GiB of memory.
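As a rough illustration of that doubling, the sizes quoted above can be modelled as a small table (a partial sketch using only the sizes mentioned, not a complete instance catalogue):

```python
# The t3 sizes mentioned above, as (vCPUs, memory in GiB). Each step up
# roughly doubles both resources -- the essence of vertical scaling.
T3_SIZES = {
    "t3.large":   (2, 8),
    "t3.xlarge":  (4, 16),
    "t3.2xlarge": (8, 32),
}

def scale_up(current: str) -> str:
    """Return the next size up, if one exists in this (partial) table."""
    order = list(T3_SIZES)           # dicts preserve insertion order
    i = order.index(current)
    return order[min(i + 1, len(order) - 1)]

print(scale_up("t3.large"))  # t3.xlarge
```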
Let's talk about a few of the finer points though of vertical scaling.
When you're actually performing vertical scaling with EC2, you're actually resizing an EC2 instance when you scale.
And because of that, there's downtime, often a restart during the resize process, which can potentially cause customer disruption.
But it goes beyond this: because of this disruption, you can generally only scale during pre-agreed times, so within outage windows.
If incoming load on a system changes rapidly, then this restriction of only being able to scale during outage windows limits how quickly you can react, how quickly you can respond to these changes by scaling up or down.
Now as load increases on a system you can scale up, but larger instances often carry a price premium, so the cost of going larger and larger often doesn't increase linearly towards the top end.
And because you're scaling individual instances, there's always going to be an upper cap on performance.
And this cap is the maximum instance size.
While AWS are always improving EC2, there will always be a maximum possible instance size.
And so with vertical scaling, this will always be the cap on the scaling of an individual compute resource.
Now there are benefits of vertical scaling.
It's really simple and it doesn't need any application modification.
If an application can run on an instance, then it can run on a bigger instance.
Vertical scaling works for all applications, even monolithic ones, where the whole code base is one single application because it all runs on one instance and that one instance can increase in size.
Horizontal scaling is designed to address some of the issues with vertical scaling.
So let's have a look at that next.
Horizontal scaling is still designed to cope with changes to incoming load on a system.
But instead of increasing the size of an individual instance, horizontal scaling just adds more instances.
The original one instance turns into two, and as load increases, maybe two more are added.
Eventually, maybe eight instances are required.
As the load increases on a system, horizontal scaling just adds additional capacity.
The key thing to understand with horizontal scaling is that with this architecture, instead of one running copy of your application, you might have two or 10 or hundreds of copies, each of them running on smaller compute instances.
This means they all need to work together, all need to take their share of incoming load placed on the system by customers.
And this generally means some form of load balancer.
And a load balancer is an appliance which sits between your servers, in this case instances, and your customers.
When customers attempt to access the system, all that incoming load is distributed across all of the instances running your application.
Each instance gets a fair amount of the load and for a given customer, every mouse click, every interaction with the application, could be on the same instance or randomized across any of the available instances.
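The load balancer's job can be pictured with a toy round-robin sketch. This is illustrative only; real load balancers such as ELB add health checks and more sophisticated routing:

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: each request goes to the next instance in turn."""

    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def route(self, request):
        # The request itself is ignored here; a real balancer would
        # consider health, connection counts, stickiness, and so on.
        return next(self._cycle)

lb = RoundRobinBalancer(["i-01", "i-02", "i-03"])
print([lb.route(f"click-{n}") for n in range(6)])
# ['i-01', 'i-02', 'i-03', 'i-01', 'i-02', 'i-03']
```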
Horizontal scaling is great, but there are a few really important things that you need to be aware of as a solutions architect.
When you think about horizontal scaling, sessions are everything.
When you log into an application, think about YouTube, about Netflix, about your email.
The state of your interaction with that application is called a session.
You're using this training site right now and if I deleted your session right at this moment, then the next time you interacted with the site, you would be logged out.
You might lose the position of the video that you're currently watching.
On amazon.com or your home grocery shopping site, the session stores what items are in your cart.
With a single application running on a single server, the sessions of all customers are generally stored on that server.
With horizontal scaling, this won't work.
If you're shopping on your home grocery site and you add some cat cookies to your cart, this might be using instance one.
When you add your weekly selection of donuts, you might be using instance 10.
Without changes, every time you moved between instances for a horizontally scaled application, you would have a different session or no session.
You would be logged out, the application put simply would be unusable.
With horizontal scaling, you can be shifting between instances constantly.
That's one of the benefits.
It evens out the load.
And so horizontal scaling needs either application support or what's known as off host sessions.
If you use off host sessions, then your session data is stored in another place, an external database.
And this means that the servers are what's called stateless.
They're just dumb instances of your application.
The application doesn't care which instance you connect to because your session is externally hosted somewhere else.
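The off-host session idea can be sketched in a few lines. Here a plain dictionary stands in for the external store (my stand-in choice; in practice it might be something like DynamoDB or ElastiCache), so the instances themselves hold no state:

```python
# External session store (assumption: a dict stands in for a real service).
session_store = {}

class AppInstance:
    """Stateless application instance: all session data lives off host."""

    def __init__(self, instance_id):
        self.instance_id = instance_id

    def add_to_cart(self, session_id, item):
        cart = session_store.setdefault(session_id, [])
        cart.append(item)
        return cart

# Two different instances serve the same customer; the cart survives.
a, b = AppInstance("i-01"), AppInstance("i-10")
a.add_to_cart("bob", "cat cookies")
print(b.add_to_cart("bob", "donuts"))  # ['cat cookies', 'donuts']
```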
That's really the key consideration with horizontal scaling.
It requires thought and design so that your application supports it.
But if it does support it, then you get all of the benefits.
The first one of those benefits is that you have no disruption while you're scaling because all you're doing is just adding instances.
The existing ones aren't being impacted.
So customer connections remain unaffected.
Even when you're scaling in, so removing instances, customers remain unaffected, because sessions are off host, so externally hosted, and connections can simply be moved between instances.
So that's a really powerful feature of having externally hosted sessions together with horizontal scaling.
It means all of the individual instances are just dumb instances.
It doesn't matter to which instance a particular customer connects to at a particular time because the sessions are hosted externally.
They'll always have access to their particular state in the application.
And there's no real limits to horizontal scaling because you're using lots of smaller, more common instances.
You can just keep adding them.
There isn't the single instance size cap which vertical scaling suffers from.
Horizontal scaling is also often less expensive.
You're using smaller commodity instances, not the larger ones which carry a premium.
So it can be significantly cheaper to operate a platform using horizontal scaling.
And finally, it can allow you to be more granular in how you scale.
With vertical scaling, if you have a large instance and go to an extra large, which is one step above it, you're pretty much doubling the amount of resources allocated to that system.
With horizontal scaling, if you're currently using five small instances and you add one more, then you're scaling by around 20%.
The smaller instances that you use, the better granularity that you have with horizontal scaling.
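The granularity point can be made concrete with a one-line calculation:

```python
def scaling_step_percent(current_instances: int) -> float:
    """Capacity increase, as a percentage, from adding one more instance."""
    return 100.0 / current_instances

print(scaling_step_percent(5))  # 20.0  -- five small instances, add one more
print(scaling_step_percent(1))  # 100.0 -- one big instance, vertical-style doubling
```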
Now, there's a lot more to this.
Later in the course, in the high availability and scaling section, I'll introduce elasticity and how we can use horizontal scaling as a component of highly available and fault tolerant designs.
But for now, I'll leave you with a visual exam power-up.
Visuals often make things easier to understand, and they help especially with memory recall.
So when it comes to remembering the different types of scaling methods, picture this, two types of scaling.
First, horizontal scaling, and this adds and remove things.
So if we're scaling Bob, one of our regular course guest stars, then scaling Bob in a horizontal way would mean moving to two Bob, which is scary enough.
But if the load required it, we might even have to move to four Bob.
And if we needed huge amounts of capacity, if four Bob wasn't enough, if you needed more and more and more, and even more Bob, then horizontal scaling has you covered.
There isn't really a limit.
We can scale Bob infinitely.
In this case, we can have so many Bob's.
We can scale Bob up to a near infinite level as long as we're using horizontal scaling.
Scaling Bob in a vertical way, that starts off with a small Bob, then moves to a medium Bob, and if we really need more Bob, then we can scale to a large Bob.
In the exam, when you're struggling to remember the difference between horizontal scaling and vertical scaling, picture this image.
I guarantee with this, you will not forget it.
But at this point, that's all of the theory that I wanted to cover.
Go ahead, complete the video, and when you're ready, you can join me in the next.
-
Welcome back.
This lesson will be a pretty brief one.
We're going to be covering instance status checks and EC2 auto recovery.
They're both pretty simple features but let's quickly step through exactly what both of them do and what capabilities they offer because they're great things to understand.
Every instance within EC2 has two high-level per instance status checks.
When you initially launch an instance you might see these listed as initializing, and then you might see only one of the two passing, but eventually all instances should move into the two out of two checks passed state, which indicates that all is well with the instance.
If not you have a problem.
Each of the two checks represents a separate set of tests and so a failure of either of them suggests a different set of underlying problems.
The first status check is the system status.
The second is the instance status.
A failure of the system status check could indicate one of a few major problems.
Things like the loss of system power, loss of network connectivity or software or hardware issues with the EC2 host.
So this check is focused on issues impacting the EC2 service or the EC2 host.
The second check focuses on instances.
So a failure of this one could indicate things like a corrupt file system, incorrect networking on the instance itself.
So maybe you've statically set a public IP version 4 address on the internal interface of the operating system, which as you now know will never work, or maybe the instance is having operating system kernel issues preventing it from correctly starting up.
Assuming that you haven't just launched an instance anything but two out of two checks represents an issue which needs to be resolved.
One way to handle it is manually.
So you could manually stop and start an instance or restart an instance or terminate and recreate an instance.
These are manual activities that you could perform, but EC2 comes with a feature allowing you to recover automatically from any status check issues.
You can ask EC2 to stop a failed instance, reboot it, terminate it, or you can ask EC2 to perform auto recovery.
Auto recovery moves the instance to a new host and starts it up with exactly the same configuration as before, so all IP addressing is maintained, and if software on the instance is set to auto start, this process could mean that the instance, as the name suggests, recovers fully and automatically from any failed status check issues.
Now it's far easier to show you exactly how this works so let me quickly switch over to my console.
Now I'm currently logged in to the general AWS account so the management account of the organization.
I'm using the iamadmin user and I've currently got the Northern Virginia region selected.
Now the demo part of this lesson is going to be really brief and so it's probably not worth you following along in your own environment.
If you do want to though there is a one-click deployment that's attached to this lesson so if you're following along click that link, scroll to the bottom, check the acknowledgement box and click create stack.
It'll need to be in a create complete state before you continue so if you are following along pause the video, wait for your stack to move into create complete and then you're good to continue.
So now we're in a create complete state let's go ahead and move to the EC2 console.
This one-click deployment link has created a single instance so that should already be in place so go ahead and select it and then click on the status checks tab.
So if we select the status checks tab for this particular instance, you'll see the system status checks and the instance status checks, and for both of these my EC2 instance has passed: system reachability check passed and instance reachability check passed, so that's good.
Now if we wanted to create a process capable of auto recovering this instance, we'd click on the actions dropdown and then create status check alarm. So go ahead and do that, and it will open this dialog.
So this is an alarm inside CloudWatch which will alarm if this instance fails any of these status checks.
What we can do as a default is have it send a notification to an SNS topic, and the conditions mean this notification will be sent whenever any status check fails, so either of the two, for at least one consecutive period of five minutes. So it will need to fail either of these status checks for one five minute period, and then this alarm will be triggered.
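The alarm condition just described can be sketched as a toy evaluation over five-minute datapoints. This is not the CloudWatch implementation; the assumption is simply that each datapoint records whether a status check failed during that period, as with the StatusCheckFailed metric:

```python
def alarm_should_fire(datapoints, periods_required=1):
    """Fire when status checks have failed for N consecutive periods.

    datapoints: sequence of 0/1 flags, one per five-minute period,
    where 1 means a status check failed in that period (an assumption
    mirroring the StatusCheckFailed metric's behaviour).
    """
    failed_run = 0
    for failed in datapoints:
        failed_run = failed_run + 1 if failed else 0
        if failed_run >= periods_required:
            return True
    return False

print(alarm_should_fire([0, 0, 1]))  # True  -- one failed period is enough
print(alarm_should_fire([0, 0, 0]))  # False -- all checks passing
```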
Now we can also select this box which means that action will be taken in addition to notification being sent and we can have it reboot the instance so just the equivalent of an operating system reboot.
We can have it stop the instance which is useful if we want to perform any diagnostics.
We can have it terminate the instance now this is useful if you have any sort of high availability configured which I'll be demonstrating later in the course because what this means is if you terminate an instance you can configure EC2 to automatically reprovision a brand new instance in its place.
If we do this in isolation on a single instance it will simply terminate the instance and it won't be replaced but what I want to focus on is this option which is recover this instance.
This uses the auto recovery feature of EC2.
This feature will attempt to recover this instance. It will take a set of actions: it could be a simple restart, or it could be to migrate the instance to a whole new EC2 host, but importantly it would need to be in the same availability zone. Remember, EC2 is an AZ based service, and so logically this won't protect you against an entire AZ failure; it will only take action for an isolated failure of either the host or the instance.
Now this feature does rely on having spare EC2 host capacity so in the case of major failure in multiple availability zones in a region there is a potential that this won't work if there is not spare capacity.
You also need to be using modern types of instances, so things like A1, C4, C5, M4, M5, R3, R4 and R5.
I'll make sure I include a link in the lesson text which gives a full overview of all of the supported types. Also, this feature won't work if you're using instance store volumes, so it'll only work on instances which solely have EBS volumes attached.
It's a simple way that EC2 adds some automation which can attempt to recover an instance and avoid waking up a sysadmin.
It's not designed to automatically recover against large scale or complex system issues, so do keep that in mind. It's a very simple feature which answers a very narrow set of error-based scenarios; it's not something that's going to fix every problem in EC2.
There are other ways of doing that which we'll talk about later in the course.
Now we are at the end of everything that I wanted to cover in this demo lesson but we're not going to clear up the infrastructure that we used because I'll be using it in the following demo lesson to demonstrate EC2 termination protection so if you are following along with these in your own environment then don't delete the one-click deployment that you used at the start of this lesson because we'll be using it in the following demo lesson.
So that's it though, that's everything that I wanted to cover in this lesson, and I did promise it would be brief. Go ahead and complete the video, and when you're ready, I'll see you in the next.
-
Welcome back.
In the previous lesson I talked about EC2 launch types and one of those types was reserved.
Historically there was only one type of reserved purchase option, but over time AWS have added newer, more flexible options and so reserved has become known as standard reserved.
As I talked about in the previous lesson, these are great for known long term consistent usage.
If you need access to the cheapest EC2 running 24/7/365 every day for one or three years, then you would pick standard reserved.
But you do have a number of other, more flexible options.
Scheduled reserved instances are pretty situational, but when faced with those situations offer some great advantages.
They're great for when you have long term requirements, but that requirement isn't something which needs to run constantly.
Let's say you have some batch processing which needs to run daily for five hours.
You know this usage is required every day, so it is long term.
It's known usage and so you're comfortable with the idea of locking in this commitment, but a standard reserved instance won't work because it's not running 24/7/365.
A scheduled reserved instance is a commitment.
You specify the frequency, the duration and the time.
In this case, 2300 hours daily for five hours.
You reserve the capacity and get that capacity for a slightly cheaper rate versus on demand, but you can only use that capacity during that time window.
Other types of situations might be weekly data, so sales analysis which runs every Friday for a 24 hour period, or you might have a larger analysis process which needs 100 hours of EC2 capacity per month, and so you purchase a scheduled reservation to cover that requirement.
You reserve capacity and get it slightly cheaper.
There are some restrictions.
It doesn't support all instance types or regions, and you need to purchase a minimum of 1200 hours per year.
While you are reserving partial time blocks, the commitment that you make to AWS is at least one year, so the minimum period that you can buy for scheduled reserved is one year.
This is ideal for long term known consistent usage, but where you don't need it for the full time period.
If you only need access to EC2 for specific hours in a day, specific days of the week, or blocks of time every month, then you should look at scheduled reserved instances.
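To make the 1,200-hour minimum concrete, here's a small Python sketch — purely illustrative, not an AWS API — checking whether each of the schedules mentioned above clears that yearly minimum.

```python
# Hypothetical helper (the function name is mine, not AWS's): total
# hours per year committed by a recurring scheduled reservation.
def annual_hours(hours_per_occurrence, occurrences_per_year):
    return hours_per_occurrence * occurrences_per_year

MIN_ANNUAL_HOURS = 1200  # minimum yearly purchase mentioned in the lesson

daily_batch     = annual_hours(5, 365)   # 5 hours every day -> 1825
weekly_analysis = annual_hours(24, 52)   # 24 hours every Friday -> 1248
monthly_job     = annual_hours(100, 12)  # 100 hours per month -> 1200

# All three example schedules meet the minimum purchase.
for total in (daily_batch, weekly_analysis, monthly_job):
    assert total >= MIN_ANNUAL_HOURS
```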
Now let's move on because I want to discuss capacity reservations.
I mentioned earlier in the course that certain events such as major failures can result in a situation where there isn't enough capacity in a region or an availability zone.
In that situation, there's an order to things, a priority order which AWS use to deliver EC2 capacity.
First, AWS deliver on any commitments in terms of reserved purchases; once those have been satisfied, they service any on-demand requests, so this is priority number two; and after both of those have been delivered, any leftover capacity can be used via the spot purchase option.
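That priority order can be sketched as a simple allocation model — illustrative only, not how AWS implements it internally:

```python
# Reserved commitments are filled first, then on-demand, then spot
# takes whatever capacity is left over.
def allocate(capacity, reserved, on_demand, spot):
    r = min(reserved, capacity)
    capacity -= r
    o = min(on_demand, capacity)
    capacity -= o
    s = min(spot, capacity)
    return r, o, s

# 10 instance slots remain; demand is 6 reserved, 3 on-demand, 5 spot.
print(allocate(10, 6, 3, 5))  # (6, 3, 1) - spot only gets the leftover
```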
So capacity reservations can be useful when you have a requirement for some compute which can't tolerate interruption.
If it's a business critical process which you need to guarantee that you can launch at a time that you need that compute requirement, then you definitely need to reserve the capacity.
Now capacity reservation is different from reserved instance purchase, so there are two different components.
There's a billing component and a capacity component, and both of these can be used in combination or individually.
There are situations where you might need to reserve some capacity, but you can't justify a long-term commitment to AWS in the form of a reserved instance purchase.
So let's step through a couple of the different options that we have available.
To illustrate this, we'll start with an AWS region and two availability zones, AZA and AZB.
Now when it comes to instance reservation and capacity, we have a few options.
First we could purchase a reservation but make it a regional one, and this means that we can launch instances into either AZ in that region, and they would benefit from the reservation in terms of billing.
So by purchasing a regional reservation, you get billing discounts for any valid instances launched into any availability zone in that region.
So while region reservations are flexible, they don't reserve capacity in any specific availability zone.
And so when you're launching instances, even if you have a regional reservation, you're launching them with the same priority as on-demand instances.
Now with reservations, you can be more specific and pick a zonal reservation.
Zonal reservations give you the same billing discounts as regional reservations, but they also reserve capacity in a specific availability zone.
But they only apply to that one specific availability zone, meaning if you launch instances into another availability zone in that region, you get neither benefit.
You pay the full price and don't get capacity reservations.
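The regional-versus-zonal trade-off can be summarised as a small decision table — names here are illustrative, not an AWS API:

```python
# A regional reservation gives the billing discount in any AZ but no
# capacity reservation; a zonal reservation gives both, but only in
# its own AZ - launch elsewhere and you get neither benefit.
def benefits(scope, reserved_az, launch_az):
    if scope == "regional":
        return {"discount": True, "capacity": False}
    if scope == "zonal" and launch_az == reserved_az:
        return {"discount": True, "capacity": True}
    return {"discount": False, "capacity": False}

assert benefits("regional", None, "AZ-A") == {"discount": True, "capacity": False}
assert benefits("zonal", "AZ-B", "AZ-B") == {"discount": True, "capacity": True}
assert benefits("zonal", "AZ-B", "AZ-A") == {"discount": False, "capacity": False}
```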
Whether you pick regional or zonal reservations, you still need to commit for either a one or three-year term to AWS, and there are just some situations where you're not able to do that.
If the usage isn't known, or if you're not sure about how long the usage will be required, then often you can't commit to AWS for a one or three-year reserved term purchase, but you still need to reserve the capacity.
So there is another option.
You can choose to use on-demand capacity reservations.
With on-demand capacity reservations, you're booking capacity in a specific availability zone, and you always pay for that capacity regardless of whether you consume it.
So in the example that's on screen now, we're booking capacity within AZB for two EC2 instances, and we're billed for that capacity whether we consume it or not.
So right now, with the example as on screen, we're billed for two EC2 instances.
What we can do is launch an instance into that capacity, and now we're still billed for two, but we're using one.
So if we don't consume all of that capacity, in this case if we only have the one EC2 instance, we're wasting the billing for an entire EC2 instance.
So with capacity reservations, you still do need to do a planning exercise and plan exactly what capacity you require, because if you do book the capacity and don't use it, you will still incur the charge.
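Here's that billing behaviour as a tiny sketch (rates in whole cents to keep the arithmetic clean; all numbers are invented for illustration):

```python
# You're billed for the full reserved count whether instances fill it
# or not; anything unused is wasted spend. Assumes running <= reserved.
def reservation_charge(reserved, running, rate_cents_per_hour):
    charge = reserved * rate_cents_per_hour
    wasted = (reserved - running) * rate_cents_per_hour
    return charge, wasted

# The on-screen example: 2 instances reserved, 1 running, 10c/hr each.
print(reservation_charge(2, 1, 10))  # (20, 10) - half the spend wasted
```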
Now capacity reservations don't have the same one or three-year commitment requirements that you need for reserved instances.
You're not getting any billing benefit when using capacity reservations.
You're just, as the name suggests, reserving the capacity.
So at any point, you can book a capacity reservation if you know you need some EC2 capacity without worrying about the one or three-year term commitments, but you don't benefit from any cost reduction.
So if you're using capacity reservations for something that's consistent, you should look at a certain point to evaluate whether reserved instances are going to be more economical.
Now one last thing that I want to talk about before I finish this lesson is a feature called a savings plan.
And you can think of a savings plan as kind of like a reserved instance purchase, but instead of focusing on a particular type of instance in an availability zone or a region, you're making a one or three-year commitment to AWS in terms of hourly spend.
So you might make a commitment to AWS that you're going to spend $20 per hour for one or three years.
And in exchange for doing that, you get a reduction on the amount that you're paying for resources.
Now savings plans come in two main types.
You can make a reservation for general compute dollar amounts, and if you elect to create a general compute savings plan, then you can save up to 66% versus the normal on-demand price of various different compute services.
Or you can choose an EC2 savings plan which has to be used for EC2, but offers better savings up to 72% versus on-demand.
Now a general compute savings plan is valid for various different compute services, currently EC2, Fargate and Lambda.
Now the way that this works is products have their normal on-demand rate, so this is true for EC2, Fargate and Lambda, but those products also have a savings plan rate.
And the way that this works is when you're spending money on an hourly basis, if you have a savings plan, you get the savings plan rate up to the amount that you commit.
So if you've made a commitment of $20 per hour and you consume EC2, Fargate and Lambda, you'll get access to all three of those services at the savings plan rate until you've consumed that $20 per hour commitment.
And after that you start using the normal on-demand rate.
So a savings plan is an agreement between you and AWS where you commit to a minimum spend and in return AWS gives you cheaper access to any of the applicable resources.
If you go above your savings plan then you begin to consume the normal on-demand rate.
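To illustrate that consumption model, here's a deliberately simplified sketch — my own numbers, not real AWS rates, and it measures the commitment in on-demand-equivalent usage for round figures:

```python
# Usage up to the hourly commitment is billed at the cheaper savings
# plan rate; anything above it overflows to the normal on-demand rate.
def hourly_bill(usage_on_demand_value, commitment, sp_discount):
    covered = min(usage_on_demand_value, commitment)
    overflow = usage_on_demand_value - covered
    return covered * (1 - sp_discount) + overflow

# $30/hr of usage, a $20/hr commitment, and an assumed 50% SP discount:
# $20 billed at $10, the remaining $10 at on-demand rates.
print(hourly_bill(30, 20, 0.5))  # 20.0
```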
And over time generally you'd continually evaluate your resource usage within the account and adjust your savings plan usage as appropriate.
Now out of all the compute services available in AWS, if you only consume EC2, then you will get better savings by looking at an EC2 savings plan.
But if you're the type of organization that's evaluating how you can use emerging architectures such as containerization or serverless, then you can pick a general savings plan, commit to a certain hourly spend and then utilize that over the full range of supported AWS compute services.
And as a real-world hint and for the exam, this could allow an organization that's migrating away from EC2-based compute towards these emerging architectures to get cost-effective access to resources.
So they'd use a general compute savings plan providing access to EC2, Fargate and Lambda and over time migrate away from EC2 towards Fargate and then over the long-term potentially from Fargate through to Lambda and fully serverless architectures.
Now for the exam, you only need to be aware that savings plans exist and exactly how they work, but in the real world, you should definitely do a bit of extra reading around savings plans because they're a really powerful feature that can help you achieve significant cost savings.
With that being said though, that's everything I wanted to cover in this theory lesson.
So go ahead, complete the lesson and then when you're ready, I'll look forward to you joining me in the next.
-
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
Okay, so next let's look at reserved instances.
Reserved instances are really important.
They form a part of most larger deployments within AWS.
Where on demand is generally used for unknown or short-term usage which can't tolerate interruption, reserved is for long-term consistent usage of EC2.
Now reservations are simple enough to understand.
They're a commitment made to AWS for long-term consumption of EC2 resources.
So this is a normal instance, a T3 instance.
And if you utilize this instance, you'll be billed the normal per second rate because you don't have any reservations purchased which apply to this instance.
Now if this instance is something that you know that you need long term, if it's a core part of your system, then one option is to purchase a reservation.
And I'll talk about what this exactly means in a second.
But the effect of a reservation would be to reduce the per second cost or remove it entirely depending on what type of reservation you purchase.
As long as the reservation matches the instance, it would apply to that instance.
Either reducing or removing the per second price for that instance.
Now you need to be sure that you plan reservations appropriately because it's possible to purchase them and not use them.
In this case, if you have an unused reservation, you still pay for that reservation but the benefit is wasted.
Reservations can be purchased for a particular type of instance and locked to an availability zone specifically or to a region.
Now if you lock a reservation to an availability zone, it means that you can only benefit when launching instances into that availability zone.
But it also reserves capacity which I'll talk about in another lesson.
If you purchase reservations for a region, it doesn't reserve capacity but it can benefit any instances which are launched into any availability zone in that region.
Now it's also possible that reservations can have a partial effect.
So in the event that you have a reservation for say a T3.large instance and you provision a T3 instance which is larger than this, it would have a partial effect.
So you'd get a discount on a partial component of that larger instance.
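The partial effect works through instance-size normalization factors; the factors below match AWS's published table (large = 4 units, xlarge = 8 units), but the helper itself is just an illustrative sketch assuming a size-flexible regional reservation.

```python
# Normalization factors: each size is worth a fixed number of units.
FACTORS = {"t3.large": 4, "t3.xlarge": 8}

def covered_fraction(reserved_size, running_size):
    """Fraction of the running instance's cost the reservation covers."""
    return min(1.0, FACTORS[reserved_size] / FACTORS[running_size])

# A t3.large reservation applied to a running t3.xlarge covers half
# of it, so you'd pay the on-demand rate on the other half.
print(covered_fraction("t3.large", "t3.xlarge"))  # 0.5
```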
Reservations at a high level are where you commit to AWS that you will use resources for a length of time.
In return for that commitment, you get those resources cheaper.
The key thing to understand is that once you've committed, you pay whether you use those resources or not.
And so use them wisely for parts of your infrastructure which are always there and never change.
Now there are some choices in the way that you pay for reservations.
First, the term.
You can commit to AWS either for one year or for three years.
The discounts that you receive if you commit for three years are greater than those for one year.
But so is the risk because over three years there's more chance that your requirements will change.
And so you have to be really considered when picking between these different term lengths.
Now there are also different payment structures.
The first is no upfront.
And with this method, you agree to a one or three year term and simply pay a reduced per second fee.
And you pay this whether the instance is running or not.
It's nice and clean.
It doesn't impact cash flow.
But it also offers the least discount of the three options that I'm showing in this lesson.
You've also got the ability to pay all upfront.
And this means the whole cost of the one or three year term in advance when you purchase the reservation.
If you do this, there's no per second fee for the instance.
And this method offers the greatest discount.
So if you decide to purchase a three year reservation and do so using all upfront, this offers the greatest discount of all the reserved options within AWS.
Now there's also a middle ground, partial upfront, where you pay a smaller lump sum in advance in exchange for a lower per second cost.
So this is a good middle ground.
You have lower per second costs than no upfront and less upfront costs than all upfront.
So it's an excellent compromise if you want good cost reductions, but don't want to commit for cash flow reasons to paying for everything in advance.
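To see how the three payment structures compare, here's a quick sketch — every price below is invented, since real discounts vary by instance type and region:

```python
HOURS_PER_YEAR = 8760

def total_cost(upfront, hourly_rate, years):
    return upfront + hourly_rate * HOURS_PER_YEAR * years

# Hypothetical 1-year pricing for the same instance (made-up numbers):
no_upfront      = total_cost(0,   0.06, 1)  # no cash up front, highest total
partial_upfront = total_cost(250, 0.03, 1)  # the middle ground
all_upfront     = total_cost(480, 0.00, 1)  # all cash up front, lowest total

assert all_upfront < partial_upfront < no_upfront
```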
So that's reserved.
You purchase a commitment to use AWS resources.
And reserved instances are ideal for components of your infrastructure, which have known usage, require consistent access to compute, and you require this on a long-term basis.
So you'd use this for any components of your infrastructure that you require the lowest cost, require consistent usage, and can't tolerate any interruption.
Now there are some other elements to reservations that I want to talk about, such as capacity reservations, conversion, and scheduled reservations, but I'll be doing that in a dedicated lesson.
At this point, let's move on.
Next I want to talk about dedicated hosts.
As the name suggests, a dedicated host is an EC2 host, which is allocated to you in its entirety.
So you pay for the host itself, which is designed for a specific family of instances, for example, A, C, R, and so on.
Now these hosts come with all of the resources that you'd expect from a physical machine, a physical EC2 host.
So a number of cores and CPUs, as well as memory, local storage, and network connectivity.
Now the key thing here is that you pay for the host.
Any instances which run on that host have no per second charge, and you can launch various different sizes of instances on the host, consuming all the way up to the complete resource capacity of that host.
Now logically, you need to manage this capacity.
If the dedicated hosts run out of capacity, then you can't launch any additional instances onto those hosts.
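That capacity-management responsibility can be modelled very simply — this is an illustration of the concept, not an AWS API, and the unit sizes are invented:

```python
# You pay for the whole host; launches simply fail once its physical
# capacity is used up, so you have to monitor utilization yourself.
class DedicatedHost:
    def __init__(self, capacity_units):
        self.capacity = capacity_units

    def launch(self, size_units):
        if size_units > self.capacity:
            return False            # host is full: launch fails
        self.capacity -= size_units
        return True

host = DedicatedHost(16)            # hypothetical 16-unit host
assert host.launch(8) and host.launch(8)
assert not host.launch(1)           # no capacity left on this host
```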
Now the reason that you would use dedicated hosts normally is that you might have software which is licensed based on sockets or cores in a physical machine.
This type of licensing does still exist.
While it seems crazy, that doesn't change the fact that for certain applications, you're licensing it based on the amount of resources in a physical machine, not the resources that are allocated to a virtual machine or an instance within AWS.
Dedicated hosts also have a feature called host affinity, linking instances to certain EC2 hosts.
So if you stop and start the instance, it remains on the same host, and this too can have licensing implications.
Now you also gain another benefit that only your instances will ever run on dedicated hosts, but normally in real situations and in the exam, the reason to use this purchase option is for the socket and core licensing requirements.
Now one last thing that I want to talk about in this lesson before we move on, and that's dedicated instances, and I want to use this as an opportunity to summarize a couple of aspects of these different purchase options.
Visually, this is how EC2 hosts look using the various models.
On the left is the default or shared model, which on demand and reserved use.
So EC2 hosts are shared, so you will have some instances, other customers of AWS will have some instances, and in addition, there's also likely to be some unused capacity.
So with this model, there's no real exposure to EC2 hosts; you're billed per second (depending, obviously, on whether you use reservations), and there's no capacity to manage, but you share EC2 hosts with other customers.
In 99% of cases, that's okay, but there are some situations when it's not.
In situations when it's not, you can also choose to use dedicated hosts.
Now I've just talked about them, you pay for the host, and so only your instances run on these hosts.
Any unused capacity is wasted, and you have to manage the capacity, both in terms of the underutilization, but you also need to be aware that because they're physical hosts, they have a physical capacity, and so there's going to be a maximum number of instances that you can launch on these dedicated hosts.
So keep that in mind because you have to monitor both resource underconsumption, as well as the maximum capacity of the EC2 hosts.
Now the other option which I haven't discussed yet is dedicated instances, and this is a middle ground.
With dedicated instances, your instances run on an EC2 host with other instances of yours, and no other customers use the same hardware.
Now crucially, you don't pay for the host, nor do you share the host, you have the host all to yourself.
So you launch instances, they're allocated to a host, and AWS commit to not placing instances from other customers on that same hardware.
Now there are some extra fees that you need to be aware of with this purchase option.
First, you pay an hourly fee for each region where you're using dedicated instances, regardless of how many you're utilizing, and then there's a fee for the dedicated instances themselves.
Now dedicated instances are common in sectors of the industry where you have really strict requirements, which mean that you can't share infrastructure.
So you can use this method to benefit from the features of EC2, safe in the knowledge that you won't be sharing physical underlying hardware with other AWS customers.
So the default or shared model is used for on-demand, for spot, and for reserved instances.
Dedicated hosts offer a method where you can pay for the entire host, so you pay a charge for the host, you don't incur any charges for any instances which are launched onto that host, but you have to manage capacity.
Now dedicated hosts are generally used when you have strict licensing requirements, and then a middle ground is dedicated instances where you have requirements not to share hardware, but you don't want to manage the host itself.
So with dedicated instances, you can pay a cost premium and always guarantee that you will not share underlying hardware with any other AWS customers.
So these are all of the different purchase options that you'll need to be aware of for the exam.
For the exam, you should focus on on-demand, reserved, and spot.
So make sure that you've watched the earlier parts of this lesson really carefully, and you understand the different types of use cases where you would use spot, on-demand, and reserved.
With that being said, that's all of the theory that I wanted to cover in this lesson.
So go ahead, finish the lesson, and then when you're ready, I'll look forward to you joining me in the next.
-
Welcome back and in this lesson I want to cover EC2 purchase options.
Now EC2 purchase options are often referred to as launch types, but the official way to refer to them from AWS is purchase options.
And so to be consistent, I think it's worth focusing on that name.
So EC2 purchase options.
So let's step through all of the main types with a focus on the situations where you would and wouldn't use each of them.
So let's jump in and get started.
The first purchase option that I want to talk about is the default, which is on demand.
And on demand is simple to explain because it's entirely unremarkable in every way.
It's the default because it's the average of anything with no specific pros or cons.
Now the way that it works, let's start with two EC2 hosts.
Obviously AWS has more, but it's easy to diagram with just the two.
Now instances of different sizes when launched using on demand will run on these EC2 hosts.
And different AWS customers, they're all mixed up on the shared pool of EC2 hosts.
So even though instances are isolated and protected, different AWS customers launch instances which share the same pool of underlying hardware.
Now this means that AWS can efficiently allocate resources, which is why the starting price for on demand in EC2 is so reasonable.
Now in terms of the price, on demand uses per second billing while instances are running, so you're paying for the resources that you consume.
If you shut an instance down logically, you don't pay for those resources.
Now other associated services such as storage, which consume resources regardless of whether the instance is running or in a shutdown state, do charge constantly while those resources are being consumed.
So remember this, while instances only charge while in the running state, other associated resources may charge regardless.
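A tiny sketch makes the billing rule explicit — the rates and units below are arbitrary and purely for illustration:

```python
# The instance bills per second only while running; attached EBS
# storage bills for the whole period regardless of instance state.
def bill(running_seconds, total_seconds, instance_rate, storage_rate):
    return running_seconds * instance_rate + total_seconds * storage_rate

# One hour total, instance stopped for the last 30 minutes: the
# instance is charged for 1800s, but storage for the full 3600s.
print(bill(1800, 3600, 2, 1))  # 7200 (arbitrary units)
```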
So this is how on demand works, but what types of situations should it be used for?
Well, it's the default purchase option and so you should always start your evaluation process by considering on demand as your default.
For all projects, assume on demand and move to something else if you can justify that alternative purchase option.
With on demand, there are no interruptions.
You launch an instance, you pay a per second charge, and barring any failures, the instance runs until you decide otherwise.
You don't receive any capacity reservations with on demand.
If AWS has a major failure and capacity is limited, the reserved purchase option receives highest provisioning priority on whatever capacity remains.
And so if something is critical to your business, then you should consider an alternative rather than using on demand.
So on demand does not give you any priority access to remaining capacity if there are any major failures.
Now on demand offers predictable pricing.
It's defined upfront.
You pay a constant price, but there are no specific discounts.
This consistent pricing applies to the duration that you use instances.
So on demand is suitable for short term workloads.
Anything which you just need to provision, perform a workload and then terminate is ideal for on demand.
If you're unsure about the duration or the type of workload, then again on demand is ideal.
And then lastly, if you have short term or unknown workloads, which definitely can't tolerate any interruption, then on demand is the perfect purchase option.
Next, let's talk about spot pricing.
And spot is the cheapest way to get access to EC2 capacity.
Let's look at how this works visually.
Let's start with the same two EC2 hosts.
On the left, we have A and on the right B.
Then on these EC2 hosts, we're currently running four EC2 instances, two per host.
And let's assume for this example that all of these four instances are using the on demand purchase option.
So right now with what you see on screen, the hosts are wasting capacity.
Enough capacity for four additional instances on each host is being wasted.
Spot pricing is AWS selling that spare capacity at a discounted rate.
The way that it works is that within each region for each type of instance, there is a given amount of free capacity on EC2 hosts at any time.
AWS tracks this and it publishes a price for how much it costs to use that capacity.
And this price is the spot price.
Now, you can offer to pay more than the spot price, but this is a maximum.
You'll only ever pay the current spot price for the type of instance in the specific region where you provision services.
So let's say that there are two different customers who want to provision four instances each.
The first customer sets a maximum price of four gold coins.
And the other customer sets a maximum price of two gold coins.
Now, obviously, AWS don't charge in gold coins and there are more than two EC2 hosts, but it's just easier to represent it in this way.
Now, because the current spot price set by AWS is only two gold coins, then both customers are only paying two gold coins a second for their instances.
Even though customer one has offered to pay more, this is their maximum and they only ever pay the current spot price.
So let's say now that the free capacity is getting a little bit on the low side.
AWS are getting nervous.
They know that they need to free up capacity for the normal on demand instances, which they know are about to launch.
And so they up the spot price to four gold coins.
Now, customer one is fine because they've set a maximum price of four coins.
And so now they start paying four coins because that's what the current spot price is.
Customer two, they've set their maximum price at two coins.
And so their instances are terminated.
If the spot price goes above your maximum price, then any spot instances which you have are terminated.
That's the critical part to understand because spot instances should not be viewed as reliable.
At this point in our example, maybe another customer decides to launch four on demand instances.
AWS sell that capacity at the normal on demand rates, which are higher and no capacity is wasted.
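The gold-coin example above can be captured in a few lines of Python — a model of the behaviour, not an AWS API:

```python
# You pay the current spot price while it's at or below your maximum,
# and your instances are terminated the moment it rises above it.
def spot_outcome(spot_prices, max_price):
    """Return (prices paid per period, final state)."""
    paid = []
    for price in spot_prices:
        if price > max_price:
            return paid, "terminated"
        paid.append(price)          # you pay the spot price, not your max
    return paid, "running"

# Spot price moves 2 -> 2 -> 4 coins; customer one set a max of 4
# coins, customer two a max of 2 (as in the example above).
assert spot_outcome([2, 2, 4], 4) == ([2, 2, 4], "running")
assert spot_outcome([2, 2, 4], 2) == ([2, 2], "terminated")
```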
Spot pricing offers up to a 90% reduction versus the price of on demand.
And there are some significant trade offs that you need to be aware of.
You should never use the spot purchase option for workloads which can't tolerate interruptions.
No matter how well you manage your maximum spot price, there are going to be periods when instances are terminated.
If you run workloads where that's a problem, don't use spot.
This means that workloads such as domain controllers, mail servers, traditional websites, or even flight control systems are all bad fits for spot instances.
The types of scenarios which are good fits for using spot instances are things which are not time critical.
Since the spot price changes throughout each day and throughout days of the week, if you're able to process workloads around this, then you can take advantage of the maximum cost benefits for using spot.
Anything which can tolerate interruption and just rerun is ideal for spot instances.
So if you have highly parallel workloads which can be broken into hundreds or thousands of pieces, maybe scientific analysis, and if any parts which fail can be rerun, then spot is ideal.
Anything which has a bursty capacity need, maybe media processing, image processing, any cost sensitive workloads which wouldn't be economical to do using normal on-demand instances, assuming they can tolerate interruption, these are ideal for spot.
Anything which is stateless where the state of the user session is not stored on the instances themselves, meaning they can handle disruption, again, ideal for using spot.
Don't use spot for anything that's long-term, anything that requires consistent, reliable compute, any business critical things or things which cannot tolerate disruption.
For those type of workloads, you should not use spot.
It's an anti-pattern.
OK, so this is the end of part one of this lesson.
It was getting a little bit on the long side and I wanted to give you the opportunity to take a small break, maybe stretch your legs or make a coffee.
Now part two will continue immediately from this point, so go ahead, complete this video, and when you're ready, I look forward to you joining me in part two.
-
Welcome back.
And in this lesson, I want to go into a little bit more depth on Amazon machine images or AMIs.
Many people in the AWS community would have you think that there's some kind of argument or disagreement over how to pronounce AMI with some people pronouncing it a slightly different way that I'm not going to repeat here.
In my view, there's no argument.
Those people are just misguided.
AMIs are the images of EC2.
They're one way that you can create a template of an instance configuration and then use that template to create many instances from that configuration.
AMIs are actually used when you launch EC2 instances.
You're launching those instances using AWS provided AMIs, but you can create your own.
And that's what I want to focus on in this lesson.
So what that means, what happens when you create your own AMIs and how to do it effectively.
So just a few key points before I talk about the life cycles and the flow around AMIs.
AMIs can be used to launch EC2 instances.
I've mentioned that a second ago.
They're actually used by the console UI.
When you launch an EC2 instance and select Amazon Linux 2, you're actually using an Amazon Linux 2 AMI to launch that instance.
Now the AMIs that you usually use to launch instances with, they can be AWS or community provided.
So certain vendors that make their own distribution of Linux, they produce community AMIs that can be used to launch that distribution of Linux inside EC2.
So companies such as Red Hat, distributions such as CentOS and Ubuntu, they're available inside AWS.
You can use those AMIs to launch EC2 instances with those distributions.
You can also launch instances from marketplace provided AMIs.
And these can include commercial software.
So you're able to launch an instance and have that instance cost the normal hourly rate plus an extra amount for that commercial software.
And you can do that by going to the marketplace, picking a commercial AMI from the marketplace, launching it on a specific instance.
And with that architecture, the cost is generally the instance cost as well as an extra cost for the AMI, which covers the licenses to that commercial software.
Now AMIs are regional.
So there are different AMIs for the same thing in different regions.
For Amazon Linux 2, there will be an AMI in US East 1.
There will be an AMI in US West 1.
There will be another AMI in the Sydney region.
Each individual region has its own set of AMIs and each AMI has a unique ID with this format on screen now.
So ami- followed by a random string of numbers and letters.
And an AMI can only be used in the region that it's in.
So there will be different AMI IDs for the same distribution of an operating system in each individual region.
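In practice this means AMI lookups are always region-scoped; the IDs below are invented purely to illustrate the shape of the mapping:

```python
# The same OS image has a different AMI ID in every region, so an AMI
# ID is only ever meaningful within its own region.
AMIS = {
    "us-east-1": "ami-0aaaaaaaaaaaaaaaa",       # hypothetical IDs
    "us-west-1": "ami-0bbbbbbbbbbbbbbbb",
    "ap-southeast-2": "ami-0cccccccccccccccc",  # Sydney
}

def ami_for(region):
    return AMIS[region]   # an AMI can only be used in its own region

# Same logical image, different ID per region.
assert ami_for("us-east-1") != ami_for("us-west-1")
```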
And AMI also controls permissions.
So by default, an AMI is set so that only your account can use it.
So one of the permissions models is only your account.
You can set an AMI to be public so that everybody can access it, or you can add specific AWS accounts onto that AMI.
So those are the three options you have for permissions.
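Those three permission models can be sketched as a simple check — again illustrative, not an AWS API, and the account IDs are made up:

```python
# An AMI is either private (owner only, the default), public, or
# shared with an explicit list of AWS accounts.
def can_use_ami(ami, account_id):
    if ami["public"]:
        return True
    return account_id == ami["owner"] or account_id in ami["shared_with"]

ami = {"owner": "111111111111", "public": False,
       "shared_with": {"222222222222"}}   # hypothetical account IDs

assert can_use_ami(ami, "111111111111")       # the owning account
assert can_use_ami(ami, "222222222222")       # explicitly shared
assert not can_use_ami(ami, "333333333333")   # everyone else: denied
```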
Now the flow that you've experienced so far is to take an AMI and use it to create an instance.
But you can also do the reverse.
You can create an AMI from an existing EC2 instance to capture the current configuration of that instance.
So creating a template of an existing instance that can be used to make more instances.
Now I want you to think about the life cycle of an AMI as having four phases.
We've got launch, configure, create image, and launch again.
And that second launch is intentional; I'll explain more as we step through this life cycle model.
A large number of people who interact with AWS only ever experience the first phase.
And this is where you use an AMI to launch an EC2 instance.
And we've done that together a few times so far in the course.
Now in one of the previous demo lessons in this section of the course, you started to experience a little more of EBS and saw that volumes which were attached to EC2 instances are actually separate logical devices.
EBS volumes are attached to EC2 instances using block device IDs.
The boot volume is usually /dev/xvda.
And as we saw in a previous demo lesson, an extra volume was called /dev/xvdf.
So these are device IDs and device IDs are how EBS volumes are presented to an instance.
Now if this is all you ever use AMIs for, that's fine.
There are more ways to provision things inside AWS than creating your own AMIs.
So it's 100% fine if you don't use custom AMIs much beyond using them to launch things.
But if you choose to, you can take the instance that you provisioned during the launch phase, so that's the instance and its attached EBS volumes.
And then maybe you can decide to apply some customisation to bring your instance into a state where it's perfectly set up for your organisation.
This might be an OS with certain applications and services installed or it might be an instance with a certain set of volumes attached of a certain size, or it might be an instance with a full application suite installed and configured to your exact bespoke business requirements ready to use.
Now an instance that's in this heavily customised state, you can take this and actually create your own AMI using that configuration.
So a customised configuration of an EC2 instance, architecturally, will just be the instance and any volumes that are attached to that instance, but you can take that configuration and you can use it to create an AMI.
Now this AMI will contain a few things and it's the exact details of those few things that are important.
We've already mentioned that the AMI contains permissions, so who can use the AMI?
Is it public?
Is it private just to your account or do you give access to that AMI to other AWS accounts?
That's stored inside the AMI.
Think of an AMI as a container.
It's just a logical container which has associated information.
It's an AMI, it's got an AMI ID and it's got the permissions restricting who can use it.
But what really matters is that when you create an AMI, for any EBS volumes which are attached to that EC2 instance, we have EBS snapshots created from those volumes.
And remember, EBS snapshots are incremental, but the first one that occurs is a full copy of all the data that's used on that EBS volume.
So when you make an AMI, the first thing that happens is the snapshots are taken and those snapshots are actually referenced inside the AMI using what's known as a block device mapping.
Now the block device mapping is essentially just a table of data.
It links the snapshot IDs that were created when making the AMI, and for each of those snapshots it records the device ID that the original volume had on the EC2 instance.
So visually, with what's on screen now, the snapshot on the right side of the screen matches the boot volume of the instance.
The block device mapping will contain the ID of that snapshot together with the block device of the original volume, so /dev/xvda.
The snapshot on the left references the data volume, which is /dev/xvdf.
So the block device mapping in this case will have two lines.
It will reference each of the snapshots and then the device ID that the original volumes had.
Now what this does is it means that when this AMI is used to create a new instance, this instance will have the same EBS volume configuration as the original.
When you launch an instance using an AMI, what actually happens is the snapshots are used to create new EBS volumes in the availability zone that you're launching that instance into and those volumes are attached to that new instance using the same device IDs that are contained in that block device mapping.
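To make the block device mapping concrete, here's a minimal Python sketch of the structure described above. The AMI ID, snapshot IDs, and dictionary layout are all hypothetical stand-ins, not real AWS objects or API shapes; the point is just the linkage of device IDs to snapshots, and how a launch turns each mapping entry into a new volume in the target availability zone.

```python
# Hypothetical model of an AMI's block device mapping: each entry links
# a device ID (as it was on the source instance) to an EBS snapshot.
ami = {
    "ami_id": "ami-0abc1234example",  # illustrative AMI ID
    "block_device_mapping": [
        {"device": "/dev/xvda", "snapshot_id": "snap-boot1234"},  # boot volume
        {"device": "/dev/xvdf", "snapshot_id": "snap-data5678"},  # data volume
    ],
}

def volumes_for_launch(ami, target_az):
    """Model the launch step: each snapshot in the mapping becomes a new
    EBS volume in the target AZ, attached using the original device ID."""
    return [
        {"az": target_az,
         "from_snapshot": entry["snapshot_id"],
         "device": entry["device"]}
        for entry in ami["block_device_mapping"]
    ]

new_volumes = volumes_for_launch(ami, "us-east-1b")
for v in new_volumes:
    print(v["device"], "<-", v["from_snapshot"], "in", v["az"])
```

The key property this models: the new instance ends up with the same volume-to-device layout as the original, regardless of which AZ in the region it's launched into.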
So AMIs are a regional construct.
So you can take an AMI from an instance that's in availability zone A and that AMI as an object is stored in the region.
The snapshots, remember, are stored on S3 so they're already regional and you can use that AMI to deploy instances back into the same AZ as the source instance or into other availability zones in that region.
So just make sure that you understand this architecture.
In the following demo lesson, you're going to get a chance to actually do this to create an AMI but it's a lot easier to understand exactly how this works if you've got a picture of the architecture in your mind.
So an AMI itself does not contain any real data volumes.
An AMI is a container.
It references snapshots that are created from the original EBS volumes together with the original device IDs.
And so you can take that AMI and use it to provision brand new instances with exactly the same data and exactly the same configuration of volumes.
Now, before we finish this lesson and move on to the demo, I do have some exam power ups that I want to step through.
AMIs do feature on the exam and the architecture of AMIs is especially important for the Solutions Architect Associate exam.
So I'm just going to step through a few really key points.
AMIs are in one region so you create an AMI in a particular region.
It can only be used in that region but it can be used to deploy instances into all of the availability zones in that region.
There's a term that you might see in documentation or hear other AWS professionals talk about which is AMI baking.
And AMI baking is the concept of taking an EC2 instance, installing all of the software, doing all the configuration changes and then baking all of that into an AMI.
If you imagine what we did in the last lesson where we installed WordPress manually on an EC2 instance, imagine how easy that would have been if we'd have performed that installation once so installed and configured all the software and then created an AMI of that configuration that we could then use to deploy tens of instances or hundreds of instances all pre-configured with WordPress.
Well, that is a scenario that you can use AMIs for.
So create an AMI with a custom configuration of an EC2 instance for a particular bespoke requirement in your business and then use it to stamp out lots of EC2 instances.
And that process is known as AMI baking.
You're baking the configuration of that instance into an AMI.
Another important thing to understand is that an AMI cannot be edited.
If you want to adjust the configuration of an existing AMI, then you should take the AMI, use it to launch an instance, update the configuration and then create a brand new AMI.
You cannot update an existing AMI.
That's critical to understand.
Another important thing to understand is that AMIs can be copied between AWS regions.
Now, remember the default permissions on an AMI is that it's accessible only in your account.
You can change it by adding additional accounts explicitly.
So you can add different accounts in your organization.
You could add partner accounts or you can make the AMI completely public.
Those are the three options.
It can be private, it can be public, or you can explicitly grant access to individual AWS accounts.
Now in terms of billing for AMIs, an AMI does contain EBS snapshots.
And so you are going to be billed for the capacity used by those snapshots.
Remember though that snapshots only store the data used in EBS volumes.
So even if you do have instances with fairly large volume allocations, if those volumes only use a small percentage of the data, then the snapshots will be much smaller than the size allocated for the EBS volumes.
But for an AMI, you do need to be aware that it does have a cost and those costs are the storage capacity used by the EBS snapshots that that AMI references.
Now at this point, I think that's enough theory.
So we're going to finish off this lesson here.
In the next lesson, which is a demo lesson, you're going to get some practical experience of launching an instance, performing some custom configuration, and then creating your own AMI and using that AMI to create new instances.
So I'm hoping that by having this theory lesson, which introduces all of the architecture and important details, it will help you understand the demo lesson where you'll implement this in your own environment.
And that demo lesson will help all of this theory and architecture stick because it will be important to remember for the exam.
At this point though, go ahead, complete this video, and when you're ready, you can join me in the next.
-
Welcome back and in this lesson I want to cover some networking theory related to EC2 instances.
I want to talk about network interfaces, instance IPs and instance DNS.
EC2 is a feature rich product and there's a lot of nuance in the way that you can connect to it and interact with it.
So it's important that you understand exactly how interfaces, IPs and DNS work.
So let's get started and take a look.
Architecturally this is how EC2 looks.
We have an EC2 instance and it always starts off with one network interface.
And this interface is called an ENI, an Elastic Network Interface.
And every EC2 instance has at least one which is the primary interface or primary ENI.
Now optionally you can attach one or more secondary elastic network interfaces which can be in separate subnets but everything needs to be within the same availability zone.
Remember EC2 is isolated in one availability zone so this is important.
An instance can have different network interfaces in separate subnets, but all of those subnets need to be in the same availability zone.
In this case availability zone A.
Now these network interfaces have a number of attributes or things which are attached to them.
This is important because when you're looking at an EC2 instance using the console UI, these are often presented as being attached to the instance itself.
So you might see things like IP addresses or DNS names and they appear to be attached to the instance.
But when you're interacting with the instance from a networking perspective, you're often seeing elements of the primary network interface.
So when you launch an instance with security groups, for example, those security groups are actually on the network interface, not the instance.
But let me expand on this a little bit and just highlight some of the things that are actually attached to the network interfaces.
First, network interfaces have a MAC address and this is the hardware address of the interface and it's visible inside the operating system.
So it can be used for things like software licensing.
Each interface also has a primary IP version 4 private address that's from the range of the subnet that the interface is created in.
So when you select a VPC and a subnet for an EC2 instance, what you're actually doing is picking the VPC and the subnet for the primary network interface.
Now you can have zero or more secondary private IP addresses also on the interface.
You can have zero or one public IP version 4 address associated with the interface itself.
And you can also have one elastic IP address per private IP version 4 address.
Now elastic IP addresses are public IP version 4 addresses, and these are different from normal public IP version 4 addresses, where it's one per interface.
With elastic IP addresses, you can have one public elastic IP address per private IP address on the interface.
You can have zero or more IP version six addresses per interface.
And remember, these are by default publicly routable.
So with IP version six, there's no definition of public or private addresses.
They're all public addresses.
And then we have security groups, and security groups are applied to network interfaces.
So a security group that's applied on a particular interface will impact all IP addresses on that interface.
That's really important architecturally because if you're ever in a situation where you need different IP addresses for an instance impacted by different security groups, then you need to create multiple interfaces with those IP addresses separated and then apply different security groups to each of those different interfaces.
Security groups are attached to interfaces.
And then finally per interface, you can also enable or disable the source and destination check.
This means that traffic on the interface is discarded if it's not from one of the IP addresses on the interface as a source, or destined to one of the IP addresses on the interface as a destination.
So if this check is enabled, traffic which doesn't match one of those conditions is discarded.
So if you recall when I talked about NAT instances, this is the setting that you need to disable for an EC2 instance to work as a NAT instance.
So this check needs to be switched off.
Now, depending on the type of EC2 instance that you provision, you can have additional secondary interfaces.
The exact number depends on the instance, but at a high level, the capabilities of the secondary interfaces are the same as the primary, except that you can detach secondary interfaces and move them to other EC2 instances.
So that is a critical difference, and that does bring with it some additional capabilities.
So let's explore some of these ENI attributes and attachments in a little bit more detail.
Let's assume for this example that this instance receives a primary IP version 4 private IP address of 10.16.0.10.
This is static, and it doesn't change for the lifetime of the instance.
Now, the instance is also given a DNS name that's associated with this private address.
It has a logical format, so it starts off with IP, so the letters I and P, and then a hyphen, and then it's got the private IP address separated by hyphens rather than periods, and then .ec2.internal.
And this DNS name is only resolvable inside the VPC, and it always points at this private IP address, so this internal IP address.
So you can use this private DNS name for internal communications only inside the VPC.
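The private DNS name format described above is mechanical enough to express as a tiny helper. This is an illustrative sketch, not an AWS API: note the `.ec2.internal` suffix is the us-east-1 form, and other regions use a `<region>.compute.internal` suffix instead, so the suffix here is an assumption pinned to that one region.

```python
def private_dns_name(private_ip):
    """Build the VPC-internal DNS name for a private IPv4 address:
    'ip-', then the address with hyphens instead of periods, then the
    suffix. '.ec2.internal' is the us-east-1 form; other regions use
    '.<region>.compute.internal'."""
    return "ip-" + private_ip.replace(".", "-") + ".ec2.internal"

print(private_dns_name("10.16.0.10"))  # ip-10-16-0-10.ec2.internal
```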
Now, assuming that the instance is either manually set to receive a public IP version 4 address, or that it's launched with the default settings into a subnet, which is configured to automatically allocate IP version 4 public addresses, then it will get one.
So it will get an IP version 4 public IP.
But this is a dynamic IP.
It's not fixed.
If you stop and start an instance, its public IP address will change.
Specifically, when you stop an instance, the public IP version 4 address is deallocated, and when you start an instance again, it is allocated a brand new public IP version 4 address, and it will be a different one.
Now, if you just restart the instance, so not stop and then start, but just perform a restart, that won't change the IP address, because it's only stopping it and then starting it again that will cause that change.
But in addition, anything that makes that instance change between EC2 hosts will also cause an IP change.
For this public IP version 4 address, EC2 instances are also allocated a public DNS name, and generally it's very similar to this format.
So it starts with EC2 and then a hyphen, and then the IP address with hyphens rather than dots, and then something similar to this.
So compute hyphen 1.amazonaws.com.
Now, this might differ slightly, but generally the public DNS follows this format.
Now, what's special about this public DNS name is that inside the VPC, it will resolve to the primary private IP version 4 address of the instance, so the primary network interface.
Remember how VPC works?
This public IP version 4 address is not directly attached to the instance or any of the interfaces.
It's associated with it and the internet gateway handles that translation.
And so in order to allow instances in a VPC to use the same DNS name and to make sure they're always using the private addresses inside the VPC, it always resolves to the private address.
Outside of the VPC, the DNS will resolve to the public IP version 4 address of that instance.
So that's important to know and that will come up later on in the course when we're doing hybrid network design.
It allows you to specify one single DNS name for an instance and have that traffic resolve to an internal address inside AWS and an external IP outside AWS.
So it simplifies the discoverability of your instances.
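This split-horizon resolution behaviour can be modelled in a few lines. This is a simplification of what the VPC DNS resolver does, with hypothetical addresses, not a real DNS implementation:

```python
def resolve_public_dns(inside_vpc, private_ip, public_ip):
    """Model the split-horizon behaviour of an instance's public DNS name:
    inside the VPC it resolves to the private IPv4 address, so traffic
    stays internal; outside the VPC it resolves to the public address."""
    return private_ip if inside_vpc else public_ip

# Same DNS name, different answers depending on where you resolve it:
print(resolve_public_dns(True, "10.16.0.10", "3.89.12.34"))   # 10.16.0.10
print(resolve_public_dns(False, "10.16.0.10", "3.89.12.34"))  # 3.89.12.34
```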
Now, elastic IP addresses are something that I want to introduce now and then in the next demo lesson, you'll get to experiment with them.
Elastic IP addresses are something that's allocated to your AWS account.
When you allocate an elastic IP, you can associate the elastic IP with a private IP, either on the primary interface or a secondary interface.
If you do associate it with the primary interface, then as soon as you do that, the normal, non-elastic public IP version 4 address that the instance had is removed, and the elastic IP becomes the instance's new public IP version 4 address.
So if you assign an elastic IP to an instance under most circumstances, the instance will lose its non elastic public address.
If you remove that elastic IP address, it will gain a new public IP version 4 address.
That's a question that comes up in the exam all the time.
If an instance has a non elastic public IP and you assign an elastic IP and then remove it, is there any way to get that original IP back?
And the answer is no, there's not.
When you assign an elastic IP, it loses that original dynamic public IP version 4 address.
If you remove the elastic IP, it gets a new one, but it's completely different.
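The elastic IP lifecycle described above (and asked about in the exam) can be sketched as a small state model. Everything here is hypothetical: the addresses and the `Instance` class are stand-ins that model the behaviour, not AWS objects.

```python
import itertools

# Stand-in for AWS's dynamic public IPv4 pool: each allocation is new.
_pool = itertools.count(1)

def new_dynamic_ip():
    return f"54.0.0.{next(_pool)}"  # illustrative addresses only

class Instance:
    """Models the rule: associating an elastic IP removes the dynamic
    public IP; disassociating allocates a brand new, different one."""
    def __init__(self):
        self.public_ip = new_dynamic_ip()
        self.eip = None

    def associate_eip(self, eip):
        self.eip = eip
        self.public_ip = eip  # the dynamic address is deallocated here

    def disassociate_eip(self):
        self.eip = None
        self.public_ip = new_dynamic_ip()  # new address, never the original

i = Instance()
original = i.public_ip
i.associate_eip("3.3.3.3")
i.disassociate_eip()
print(i.public_ip != original)  # True: the original dynamic IP is gone for good
```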
Now, I know that this is a lot of theory, but this is really important from a networking perspective.
So you need to try and become really clear with what I've talked about on this slide.
So instances have one or more network interfaces, a primary and optionally secondaries.
And then for each network interface, just make sure that you're certain about what IP addressing it has.
So a primary private IP address, secondary private IP addresses, optionally one public IP version 4 address, and then optionally one or more elastic IP addresses.
So become familiar with exactly what these mean.
And again, in the next demo lesson, you'll get a chance to experiment and understand exactly how they work.
But before we move on, I want to talk about some exam power ups.
This is an important area of AWS.
And so there are a number of hints and tips that I can give you for the exam.
So let's take a look at some of those.
Now, my first tip is to talk about secondary elastic network interfaces and then MAC addresses.
So this is a really useful technique.
A lot of legacy software is actually licensed using a MAC address.
A MAC address is viewed as something static that doesn't change.
But because EC2 is a virtualized environment, then we can swap and change elastic network interfaces.
And so if you provision a secondary elastic network interface on an instance and use that secondary interface's MAC address for licensing, then you can detach that secondary interface, attach it to a new instance, and move that licensing between EC2 instances.
So that's really powerful.
Something else to keep in mind is that multiple interfaces can be used for multi-homed systems.
So for an instance with an ENI in each of two different subnets, you might use one for management and one for data.
It gives you some flexibility.
And why you might use multiple interfaces rather than just multiple IPs is that security groups are attached to interfaces.
So as I mentioned earlier in this lesson, if you need different rules, so different security groups for different IPs or different rules for different types of access based on IPs your instance has, then you need multiple elastic network interfaces with different security groups on each.
When you interact with an instance and apply security groups, if you're doing it on an instance level, you generally interact with the primary elastic network interface.
In many ways, you can almost think of the primary interface as the instance in a way, but they are separate things.
When you get down to the deep technical level, they are separate things and it's the interfaces that generally have a lot of the configuration on them.
One really important point about EC2 IP addressing that I keep stressing for the exam, and which I even covered in a previous lesson of the course, is that the operating system never sees the IP version 4 public address.
This is provided by a process called NAT, which is performed by the Internet Gateway.
And so as far as the operating system is concerned, you always configure the private IP version 4 address on the interface.
Inside the OS, it has no visibility of the networking configuration of the public IP address.
You will never be in a situation where you need to configure Windows or Linux with the IP version 4 public address.
Now IP version 6 is different because those addresses are all public, but for the exam, just remember this, really remember it.
You can never configure a network interface inside an operating system with a public IP version 4 address inside AWS.
The normal IP version 4 public address that EC2 instances are provided with is dynamic.
If you stop an instance, that IP is deallocated.
If you start the instance again, a new IP version 4 public address is allocated in its place.
If you restart an instance, that's fine, but if you do a stop and start, or if there's a forced migration of the instance between hosts, the normal IP version 4 public address will change.
To avoid this, you need to allocate and assign an elastic IP address.
Finally, and this will help you later on in the course and for the exam, the public DNS name, which is given to the instance for its public IP version 4 address, resolves to the primary private IP version 4 address from within the VPC.
This is done so that if you've got instance-to-instance communication using this DNS name inside the VPC, it never leaves the VPC.
It doesn't have to go out to the Internet Gateway and then back again when you're communicating between two EC2 instances using this public DNS name.
From anywhere outside the VPC, this public DNS name resolves to the public IP version 4 address.
Remember this for the exam and remember it later in the course when I'm talking about technologies such as VPC peering, because you'll need to know exactly how this works.
So inside the VPC, the public DNS resolves to the private IP.
Outside the VPC, it will resolve to the public IP address.
Now, I know that has been a lot of theory.
Don't worry, I promise you, as we continue moving through the course, all of these really theoretical concepts that I'm talking about in these dedicated theory lessons, they will start to click when you start using this technology.
We've already experienced this a little bit when we started provisioning EC2 instances or using NAT gateways.
You've seen how some of the theory is applied by AWS products and services.
So don't worry, it will click as we move through the course.
It's my job to make sure that information sticks, but I do need to teach you some raw theory occasionally as we go through the course.
And this has been one of those lessons.
Do your best to remember, but it will start sticking when we start getting practical exposure.
But at this point, that's everything I wanted to cover.
So go ahead, mark this video as complete and when you're ready, you can join me in the next.
-
Welcome back and in this lesson I'm going to be discussing EBS snapshots.
So snapshots provide a few really useful features for a solutions architect.
First, they're an efficient way to back up EBS volumes to S3.
And by doing this, you protect the data on those volumes against availability zone issues or local storage system failure in that availability zone.
And they can also be used to migrate the data that's on EBS volumes between availability zones using S3 as an intermediary.
So let's step through the architecture first through this lesson, and then in the next lesson, which will be a demo, we'll get you into the AWS console for some practical experience.
Snapshots are essentially backups of EBS volumes which are stored on S3.
EBS volumes are availability zone resilient, which means that they're vulnerable to any issues which impact an entire availability zone.
Because snapshots are stored on S3, the data that snapshots store becomes region resilient.
And so we're improving the resiliency level of our EBS volumes by taking a snapshot and storing it into S3.
Now snapshots are incremental in nature, and that means a few very important things.
It means that the first snapshot to be taken of a volume is a full copy of all of the data on that volume.
Now I'm stressing the word data because a snapshot just copies the data used.
So if you use 10 GB of a 40 GB volume, then that initial snapshot is 10 GB, not the full 40 GB.
The first snapshot, because it's a full one, can take some time depending on the size of the data.
It's copying all of the data from a volume onto S3.
Now your EBS performance won't be impacted during this initial snapshot, but it just takes time to copy in the background.
Future snapshots are fully incremental.
They only store the difference between the previous snapshot and the state of the volume when the snapshot is taken.
And because of that they consume much less space and they're also significantly quicker to perform.
Now you might be concerned at this point hearing the word incremental.
If you've got any existing backup system or backup software experience, it was always a risk that if you lost an incremental backup, then no further backups between that point and when you next took the full backup would work.
So there was a massive risk of losing an incremental backup.
You don't have to worry about that with EBS.
It's smart enough so that if you do delete an incremental snapshot, it makes sure that the data is moved so that all of the snapshots after that point still function.
So each snapshot, even though it is incremental, can be thought of as self-sufficient.
Now when you create an EBS volume, you have a few choices.
You can create a blank volume or you can create a volume that's based on a snapshot.
So snapshots offer a great way to clone a volume.
Because S3 is a regional service, the volume you create from a snapshot can be in a different availability zone from the original, which means snapshots can be used to move EBS volumes between availability zones.
But also, snapshots can be copied between AWS regions, so you can use snapshots for global DR processes or as a migration tool to move the data on volumes between regions.
Snapshots are really flexible.
Visually, this is how snapshot architecture looks.
So here we've got two AWS regions, US East 1 and AP Southeast 2.
We have a volume in availability zone A in US East 1, and that's connected to an EC2 instance in the same availability zone.
Now snapshots can be taken of this volume and stored in S3.
And the first snapshot is a full copy, so it stores all of the data that's used on the source volume.
The second one is an incremental, so this only stores the changes since the last snapshot.
So at the point that you create the second snapshot, only the changes between the original snapshot and now are stored in this incremental.
And these are linked, so the incremental references the initial snapshot for any data that isn't changed.
Now the snapshot can be used to create a volume in the same availability zone.
It can be used to create a volume in another availability zone in the same region, and that volume could then be attached to another EC2 instance.
Or the snapshot could be copied to another AWS region and used to create another volume in that region.
So that's the architecture.
That's how snapshots work.
There's nothing overly complex about it, but I did want to cover a few final important points before we finish up.
As a solutions architect, there are some nuances of snapshot and volume performance that you need to be aware of.
These can impact projects that you design and deploy significantly, and this does come up in the exam.
Now first, when you create a new EBS volume without using a snapshot, the performance is available immediately.
There's no need to do any form of initialization process.
But if you restore a volume from a snapshot, it does the restore lazily.
What this means is that if you restore a volume right now, then over time the data is transferred from the snapshot on S3 to the new volume in the background.
And this process takes some time.
If you attempt to read data which hasn't been restored yet, it will immediately pull it from S3, but that achieves lower levels of performance than reading from EBS directly.
So you have a number of choices.
You can force a read of every block of the volume, and this is done in the operating system using tools such as DD on Linux.
And this reads every block one by one on the new EBS volume, and it forces EBS to pull all the snapshot data from S3 into that volume.
And this is generally something that you would do immediately when you restore the volume before moving that volume into production usage.
It just ensures perfect performance as soon as your customers start using that data.
Now historically that was the only way to force this rapid initialization of the volume.
But now there's a feature called Fast Snapshot Restore or FSR.
This is an option that you can set on a snapshot which makes it instantly restore.
You can create 50 of these fast snapshot restores per region, and when you enable it on a snapshot, you pick the snapshot specifically and the availability zones that you want to be able to do instant restores to.
Each combination of that snapshot and an AZ is classed as one fast snapshot restore set, and you can have 50 of those per region.
So one snapshot configured to restore to four availability zones in a region represents four out of that 50 limit of FSRs per region.
So keep that in mind.
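That per-region accounting of fast snapshot restore sets can be sketched as a short helper. The function name, the data structure, and the snapshot IDs are all hypothetical; the 50-per-region figure is the limit stated above.

```python
FSR_LIMIT_PER_REGION = 50  # the per-region limit mentioned in the lesson

def fsr_sets_used(enabled):
    """Count fast snapshot restore 'sets' against the per-region limit.
    Each (snapshot, AZ) combination counts as one set. `enabled` maps a
    snapshot ID to the list of AZs FSR is enabled for."""
    return sum(len(azs) for azs in enabled.values())

# One snapshot enabled for four AZs consumes four of the fifty sets:
usage = fsr_sets_used({
    "snap-1": ["us-east-1a", "us-east-1b", "us-east-1c", "us-east-1d"],
})
print(usage, "of", FSR_LIMIT_PER_REGION, "FSR sets used")  # 4 of 50
```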
Now FSR actually costs extra.
Keep this in mind.
It can get expensive, especially if you have lots of different snapshots.
You can always achieve the same end result by forcing a read of every block manually using DD or another tool in the operating system.
But if you really don't want to go through the admin overhead, then you've got the option of using FSR.
Now I haven't talked about EBS volume encryption yet.
That's coming up in a lesson soon within this section.
But encryption also influences snapshots.
But don't worry, I'll be covering all of that end to end when I talk about volume encryption.
Now snapshots are billed using a gigabyte month metric.
So a 10 GB snapshot stored for one month represents 10 GB month.
A 20 GB snapshot stored for half a month represents the same 10 GB month.
And that's how you're billed.
There's a certain cost for every gigabyte month that you use for snapshot storage.
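The GB-month metric works out like this (a sketch of the arithmetic only; the price per GB-month varies by region, so this returns the billing metric rather than a dollar amount):

```python
def snapshot_gb_months(used_gb, months_stored):
    """Snapshot billing metric: gigabytes of *used* data multiplied by
    the fraction of a month the snapshot is stored."""
    return used_gb * months_stored

# Both examples from the lesson come out to the same 10 GB-months:
print(snapshot_gb_months(10, 1.0))  # 10 GB stored for a full month -> 10.0
print(snapshot_gb_months(20, 0.5))  # 20 GB stored for half a month -> 10.0
```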
Now just to stress this, an awesome feature specifically from a cost aspect is that this is used data, not allocated data.
You might have a volume which is 40 GB in size, but if you only use 10 GB of that, then the first full snapshot is only 10 GB.
EBS doesn't charge for unused areas in volumes when performing snapshots.
You're charged for the full allocated size of an EBS volume, but that's because it's allocated.
For snapshots, you're only billed for the data that's used on the volumes.
And because snapshots are incremental, you can perform them really regularly.
Only the data that's changed is stored, so doing a snapshot every five minutes won't necessarily cost more than doing one per hour.
Now on the right, this is visually how snapshots look.
On the left, we have a 10 GB volume using 10 GB of data, so it's 100% consumed.
The first snapshot, logically, will consume 10 GB of space on S3 because it's a full snapshot and it consumes whatever data is used on the volume.
In the middle column, we're changing 4 GB of data out of that original 10 GB, so the bit in yellow at the bottom.
The next snap references the unchanged 6 GB of data and only stores the changed 4 GB.
So the second snap is only billed for 4 GB of data, the changed data.
On the right, we've got 2 GB of data that's added to that volume, so the volume is now 12 GB.
The next snapshot references the original 6 GB of data, so that's not stored in this snapshot.
It also references the previous snapshot's 4 GB of changed data, so that's also not stored in this new snapshot.
The new snapshot simply adds the new 2 GB of data, so this snapshot is only billed for 2 GB.
At each stage, a new snapshot is only storing data inside itself, which is new or changed, and it's referencing previous snapshots for anything which isn't changed.
That's why they're all incremental and that's why, each time you do a snapshot, you're only billed for the changed data.
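The worked example above can be sketched as a tiny model: each snapshot stores only the changed or new data, and the billed sizes add up to the 10, 4 and 2 GB from the diagram. This is purely an illustrative model, not how EBS is implemented internally.

```python
# Model incremental snapshots: the first snapshot stores all used data,
# and each later snapshot stores only blocks that changed or were added.

def snapshot_sizes(first_used_gb, deltas_gb):
    """Return the billed size of each snapshot in an incremental chain."""
    return [first_used_gb] + list(deltas_gb)

# 10 GB used initially, then 4 GB changed, then 2 GB of new data added.
sizes = snapshot_sizes(10, [4, 2])
print(sizes)       # [10, 4, 2]
print(sum(sizes))  # 16 GB of snapshot storage billed in total
```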
Okay, that's enough theory for now, time for a demonstration.
So in the next demo lesson, we're going to experiment with EBS volumes and snapshots and just experience practically how we can interact with them.
It's going to be a simple demo, but I always find that by doing things, you retain the theory that you've learned and this has been a lot of theory.
So go ahead, complete this video and when you're ready, we can start the demo lesson.
-
Welcome to this lesson where I want to briefly cover some of the situations where you would choose to use EBS rather than Instance Store volumes.
And also I want to cover situations where Instance Store is more suitable than EBS, and those situations where it depends.
Because there are always going to be situations where either or neither could work.
Now we've got a lot to cover so let's jump in and get started.
Now I want to apologize right at the start.
You know by now I hate lessons where I just talk about facts and figures, numbers and acronyms.
I almost always prefer diagrams, teaching, using real world architecture and implementations.
Sometimes though we just need to go through numbers and facts and this is one of those times.
I'm sorry but we have to do it.
So this lesson is going to be a lesson where I'm going to be covering some useful scenario points, some useful minimums and maximums, and situations which will help you decide between using Instance Store volumes versus EBS.
And these are going to be useful both for the exam and real world usage.
Now first, as a default rule, if you need persistent storage then you should default to EBS, or more specifically, default away from Instance Store volumes.
So Instance Store volumes are not persistent.
There are many reasons why data can be lost.
Hardware failure, instances rebooting, maintenance, anything which moves instances between hosts can impact Instance Store volumes.
And this is critical to understand for the exam and for the real world.
If you need resilient storage you should avoid Instance Store volumes and default to EBS.
Again, if hardware fails, Instance Store volumes can be lost.
If instances move, if hosts fail, anything of this nature can cause loss of data on Instance Store volumes because they're just not resilient.
EBS provides hardware which is resilient within an availability zone and you also have the ability to snapshot volumes to S3 and so EBS is a much better product if you need resilient storage.
Next if you have storage which you need to be isolated from instance life cycles then use EBS.
So if you need a volume which you can attach to one instance, use it for a while, unattach it and then reattach it to something else then EBS is what you need.
These are the scenarios where it makes much more sense to use EBS.
For any of the things I've mentioned it's pretty clear cut.
Use EBS.
Or to put it another way, avoid Instance Store volumes.
Now there are some scenarios where it's just not as clear cut and you need to be on the lookout for these within the exam.
Imagine that you need resilience but your application supports built-in replication.
Well then you can use lots of Instance Store volumes on lots of instances, and that way you get the performance benefits of using Instance Store volumes but without the negative risk.
Another situation where it depends is if you need high performance.
Up to a point, and I'll cover these different levels of performance soon, both EBS and Instance Store volumes can provide high performance.
For super high performance though, you will need to default to using Instance Store volumes, and I'll be qualifying exactly what these performance levels are on the next screen.
Finally, Instance Store volumes are included in the price of many EC2 instances, and so it makes sense to utilize them.
If cost is a primary concern then you should look at using Instance Store volumes.
Now these are the high level scenarios and these key facts will serve you really well in the exam.
It will help you to pick between Instance Store volumes and EBS for most of the common exam scenarios.
But now I want to cover some more specific facts and numbers that you need to be aware of.
Now if you see questions in the exam which are focused purely on cost efficiency, and where you think you need to use EBS, then you should default to ST1 or SC1 because they're cheaper.
They're mechanical storage and so they're going to be cheaper than using the SSD based EBS volumes.
Now if the question mentions throughput or streaming, then you should default to ST1, unless the question mentions boot volumes, which rules both of them out.
So you can't use either of the mechanical storage types, ST1 or SC1, to boot EC2 instances, and that's a critical thing to remember for the exam.
Next I want to move on to some key performance levels.
So first we have GP2 and GP3 and both of those can deliver up to 16,000 IOPS per volume.
So with GP2 this is based on the size of the volume.
With GP3 you get 3,000 IOPS by default and you can pay for additional performance.
But for either GP2 or GP3 the maximum possible performance per volume is 16,000 IOPS and you need to keep that in mind for any exam questions.
Now IO1 and IO2 can deliver up to 64,000 IOPS so if you need between 16,000 IOPS and 64,000 IOPS on a volume then you need to pick IO1 or IO2.
Now I've included the asterisks here because there is a new type of volume known as IO2 block express and this can deliver up to 256,000 IOPS per volume.
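One way to remember these tiers is as a simple decision helper. This is just a study aid built from the maximums quoted in this lesson, not an official or exhaustive selection algorithm:

```python
# Pick an EBS volume family from a required per-volume IOPS figure,
# using the per-volume maximums quoted in this lesson.

def ebs_volume_for_iops(required_iops):
    if required_iops <= 16_000:
        return "gp2/gp3"            # up to 16,000 IOPS per volume
    if required_iops <= 64_000:
        return "io1/io2"            # up to 64,000 IOPS per volume
    if required_iops <= 256_000:
        return "io2 Block Express"  # up to 256,000 IOPS per volume
    return "consider RAID 0 or instance store"

print(ebs_volume_for_iops(12_000))   # gp2/gp3
print(ebs_volume_for_iops(50_000))   # io1/io2
print(ebs_volume_for_iops(200_000))  # io2 Block Express
```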
But of course you need to keep in mind that these high levels of performance will only be possible if you're using the larger instance types.
So these are specifically focused around the maximum performance that's possible using EBS volumes.
But you need to make sure that you pair this with a good sized EC2 instance which is capable of delivering those levels of performance.
Now, one option that you do have, and this comes up relatively frequently in the exam: you can take lots of individual EBS volumes and create a RAID 0 set from them. That RAID 0 set then gets up to the combined performance of all of the individual volumes, but only up to 260,000 IOPS, because that's the maximum possible IOPS per instance.
So no matter how many volumes you combine together you always have to worry about the maximum performance possible on an EC2 instance.
And currently the highest performance levels that you can achieve using EC2 and EBS is 260,000 IOPS and to achieve that level you need to use a large size of instance and have enough EBS volumes to consume that entire capacity.
So you need to keep in mind the performance that each volume gives and then the maximum performance of the instance itself and there is a maximum currently of 260,000 IOPS.
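The RAID 0 point can be expressed as one line of arithmetic: the combined IOPS is the sum of the member volumes, capped by the per-instance maximum. A sketch using this lesson's 260,000 IOPS figure:

```python
# Combined IOPS of a RAID 0 set of EBS volumes, capped by the
# per-instance maximum quoted in this lesson (260,000 IOPS).

EC2_MAX_IOPS_PER_INSTANCE = 260_000

def raid0_effective_iops(volume_iops):
    return min(sum(volume_iops), EC2_MAX_IOPS_PER_INSTANCE)

# Four volumes at 64,000 IOPS each sum to 256,000, under the cap...
print(raid0_effective_iops([64_000] * 4))  # 256000
# ...but six of them would be limited by the instance, not the volumes.
print(raid0_effective_iops([64_000] * 6))  # 260000
```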
So that's something to keep in mind.
Now if you need more than 260,000 IOPS and your application can tolerate storage which is not persistent then you can decide to use instance store volumes.
Instance store volumes are capable of delivering much higher levels of performance, and I've detailed that in the lesson specifically focused on instance store volumes.
You can gain access to millions of IOPS if you choose the correct instance type and then use the attached instance store volumes but you do always need to keep in mind that this storage is not persistent.
So you're trading the lack of persistence for much improved performance.
Now once again I don't like doing this but my suggestion is that you try your best to remember all of these figures.
I'm going to make sure that I include this slide as a learning aid on the course github repository.
So print it out, take a screenshot, include it in your electronic notes, whatever study method you use you need to remember all of these facts and figures from this entire lesson because if you remember them it will make answering performance related questions in the exam much easier.
Now again I don't like suggesting that students remember raw facts and figures, it's not normally conducive to effective learning but this is the one exception within AWS.
So try your best to remember all of these different performance levels and what technology you need to achieve each of the different levels.
Now at this point that's everything that I wanted to cover in this lesson, I hope it's been useful.
Go ahead and complete the video and when you're ready I look forward to you joining me in the next.
-
Hello there, folks.
Thanks once again for joining.
Now that we've got a little bit of an understanding of what problem cloud is solving, let's actually go ahead and define it.
So what we'll talk about is technology on tap, a common phrase that you might have heard about when talking about cloud.
What is it and why would we say that?
Then what we're actually going to do is walk through the NIST definition of cloud.
So there are five key properties that the National Institute of Standards and Technology uses to determine whether or not something is cloud.
So we'll walk through that.
So we've got a good understanding of what cloud is and what cloud is not.
So first things first, technology on tap.
Why would we refer to cloud as technology on tap?
Well, let's have a think about the taps we do know about.
When you want access to water, if you're lucky enough to have access to a nice and easy supply of water, all you really need to do is turn on your tap and get access to as little or as much water as you want.
You can turn that on and off as you require.
Now, we know that that's easy for us.
All we have to worry about is the tap and paying the bill for the amount of water that we consume.
But what we don't really have to worry about is everything that goes in behind the scenes.
So the treatment of the water to bring it up to drinking standards, the actual storage of that treated water, and then the transportation of that through the piping network to actually get to our tap.
All of that is managed for us.
We don't need to really worry about what happens behind the scenes.
All we do is focus on that tap.
We turn it on if we want more.
We turn it off when we are finished.
We only pay for what we consume.
So you might be able to see where I'm going with this.
This is exactly what we are talking about with cloud.
With cloud, however, it's not water that we're getting access to, it is technology.
So if we want access to technology, we use the cloud.
We push some buttons, we click on an interface, we use whatever tool we require, and we get access to those servers, that storage, that database, whatever it might be that we require in the cloud.
Now again, behind the scenes, we don't have to worry about the data centers that host all of this technology, all of these services that we want access to.
We don't worry about the physical infrastructure, the hosting infrastructure, the storage, all the different bits and pieces that actually get that technology to us, we don't need to worry about.
And how does it get to us?
How is it available all across the globe?
Well, we don't need to worry about that connectivity and delivery as well.
All of this behind the scenes when we use cloud is managed for us.
All we have to worry about is turning on or off services as we require.
And this is why you can hear cloud being referred to as technology on tap, because it is very similar to the water utility service.
A utility service is another name you might hear cloud being referred to by, because it's like water or electricity.
Cloud is like these utility services where you don't have to worry about all the infrastructure behind the scenes.
You just worry about the thing that you want access to.
And really importantly, you only have to pay for what you use.
You turn it on when you need it, you turn it off when you don't, you create things when you need them, delete them when you don't, and you only pay for those services while you have them, even though they are constantly available at your fingertips.
Now, compare this to the scenario we walked through earlier.
Traditionally, we would have to buy all of the infrastructure, have it sitting there idly, even if we weren't using it, we would still have had to pay for it, set it up, power it and keep it all running.
So this is a high level of what we are talking about with cloud.
Easy access to servers when you need them, turn them off when you don't, don't worry about all that infrastructure behind the scenes.
But that's a high level definition.
So let's now walk through what the NIST use as the key properties to define cloud.
One of the first properties you can use to understand whether something is or is not cloud is understanding whether or not it provides you on demand self service access, where you can easily go ahead and get that technology without even having to talk to humans.
So what do I really mean by that?
Well, let's say you're a cloud administrator, you want to go ahead and access some resources in the cloud.
Now, if you do want access to some services, some data, some storage, an application, whatever it might be, well, you're probably going to have some sort of admin interface that you can use, whether that's a command line tool or some sort of graphical user interface, and you can easily use that to turn on any of the services that you need: web applications, data, storage, compute and much, much more.
And you don't have to go ahead, talk to another human, procure all of the infrastructure that runs behind the scenes.
You use your tool, it is self service, it is on demand, create it when you want it, delete it when you don't.
So that's on demand self service access and one of the key properties of the cloud.
Next, what I want to talk to you about is broad network access.
Now, this is where we're just saying, if something is cloud, it should be easy for you to access through standard capabilities.
So for example, if we are the cloud administrator, it's pretty common when you're working with technology to expect that you would have command line tools, web based tools and so on.
But even when we're not talking about cloud administrators and we're actually talking about the end users, maybe for example, accessing storage, it should be easy for them to do so through standard tools as well, such as a desktop application, a web browser or something similar.
Or maybe you've gone ahead and deployed a reporting solution in the cloud, like we spoke of in the previous lesson.
Well, you would commonly expect for that sort of solution that maybe there's also a mobile application to go and access all of that reporting data.
The key point here is that if you are using cloud, it is expected that all of the common standard sorts of accessibility options are available to you, public access, private access, desktop applications, mobile applications and so on.
So if that's what cloud is and how we access it, where actually is it?
That's a really important part of the definition of cloud.
And that's where we're referring to resource pooling, this idea that you don't really know exactly where the cloud is that you are going to access.
So let's say for example, you've got your Aussie Mart company.
If they want to deploy their solution to be available across the globe, well, it should be pretty easy for them to actually go ahead and do that.
Now, we don't know necessarily where that is.
We can get access to it.
We might say, I want my solution available in Australia East for example, or Europe or India or maybe central US for example.
All of these refer to general locations where we want to deploy our services.
When you use cloud, you are not going to go ahead and say, I want one server and I want it deployed to the data center at 123 data center street.
Okay, you don't know the physical address exactly or at least you shouldn't really have to.
All you need to know about is generally where you are going to go and deploy that.
Now, you will also see that for most cloud providers, you've got that global access in terms of all the different locations you can deploy to.
And really importantly, in terms of all of these pooled resources, understand that it's not just for you to use.
There will be other customers all across the globe who are using that as well.
So when you're using cloud, there are lots of resources.
They might be in lots of different physical locations and lots of different physical infrastructure and in use by lots of different customers.
And you don't really need to worry about that or know too much about it.
Another really important property of the cloud is something referred to as rapid elasticity.
Now elasticity is the idea that you can easily get access to more or less resources.
And when you work with cloud, you're actually going to commonly hear this being referred to as scaling out and in rather than just scaling up and down.
So what do I mean by that?
Well, let's say we've got our users that need to access our Aussie Mart store.
We might decide to use cloud to host our Aussie Mart web application.
And perhaps that's hosted on a server and a database.
Now, when that application gets really busy, for example, if we have lots of different users going to access it at the same time, we might want to scale out to meet demand.
That is to say, rather than having one server that hosts our web application, we might actually have three.
And if that demand for our application decreases, we might actually go ahead and decrease the underlying resources that power it as well.
What we are talking about here is scaling in and out by adding or decreasing the number of resources that host our application.
This is different from the traditional approach to scalability, where what we would normally do is just add CPU or add memory, for example.
We would increase the size of one individual resource that was hosting our solution.
So that's just elasticity at a high level and it's a really key property of cloud.
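As a rough sketch of the difference, with made-up capacity numbers: scaling up makes one server bigger, while scaling out adds more servers of the same size.

```python
# Contrast vertical scaling (a bigger server) with horizontal scaling
# (more servers). Request capacities are invented illustrative numbers.

def scale_up(requests_per_server, factor):
    """Vertical: one server with more CPU/memory."""
    return requests_per_server * factor

def scale_out(requests_per_server, server_count):
    """Horizontal: the same size of server, just more of them."""
    return requests_per_server * server_count

# One server handling 100 requests/second:
print(scale_up(100, 3))   # 300 - one triple-sized server
print(scale_out(100, 3))  # 300 - three servers behind a load balancer
```

Either route reaches the same capacity here; cloud platforms make the scale-out route, adding and removing whole servers on demand, the easy one.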
Now, we'll just say here that if you are worried about how that actually works behind the scenes in terms of how you host that application across duplicate resources, how you provide connectivity to that, that's all outside the scope of this beginners course, but it's definitely covered in other content as well.
So when you're using cloud, you get easy access to scale in and out and you should never feel like there are not enough resources to meet your demand.
To you, it should just feel like if you want a hundred servers, for example, then you can easily get a hundred servers.
All right, now the last property of cloud that I want to talk to you about is that of measured service.
When we're talking about measured service, what we're talking about is the idea that if you are using cloud to host your solutions, it should be really easy for you to go and say: I know what this is costing, I know where my resources are, how they are performing and whether there are any issues, and I can control the types of resources and the configuration that I'm going to deploy.
So for example, it should be easy for you to say, how much is it going to cost me for five gigabytes of storage?
What does my bill look like currently and what am I forecasted to be using over the remainder of the month?
Or maybe you want to say that certain services should not be allowed to be deployed across all regions.
Yes, cloud can be accessed across the globe, but maybe your organization only works in one part of a specific country and that's the only location you should be able to use.
These are the standard notions of measured and controlled service, and they're really common to all of the cloud providers.
All right, everybody.
So now you've got an understanding of what cloud is and how you can define it.
If you'd like to see more about this definition from the NIST, then be sure to check out the link that I've included for this lesson.
So thanks for joining me, folks.
I'll see you in the next lesson.
-
Hey there everybody, thanks for joining.
It's great to have you with me in this lesson where we're going to talk about why cloud matters.
Now to help answer that question, what I want to do firstly is talk to you about the traditional IT infrastructure.
How did we used to do things?
What sort of challenges and issues did we face?
And therefore we'll get a better understanding of what cloud is actually doing to help.
We can look at how things used to be and how things are now.
So what we're going to do throughout this lesson is walk through a little bit of a scenario with a fictitious company called Aussie Mart.
So let's go ahead now, jump in and have a chat about the issues that they're currently facing.
Aussie Mart is a fictitious company that works across the globe selling a range of different Australia related paraphernalia.
Maybe stuffed toys for kangaroos, koalas and that sort of thing.
Now they've currently got several different applications that they use that they provide access to for their users.
And currently the Aussie Mart team do not use the cloud.
So when we have a look at the infrastructure hosting these applications, we'll learn that Aussie Mart have a couple of servers, one server for each of the applications that they've got configured.
Now the Aussie Mart IT team have had to go and set up these servers with Windows, the applications and all the different data that they need for these applications to work.
And what's also important to understand about the Aussie Mart infrastructure is that all of this is currently hosted on their on-premises, customer managed infrastructure.
So yes, the Aussie Mart team could have gone out and maybe used a data center provider.
But the key point here is that the Aussie Mart IT team have had to set up servers, operating systems, applications and a range of other infrastructure to support all of this: storage, networking, power, cooling.
Okay, these are the sorts of things that we have to manage traditionally before we were able to use cloud.
Now to help understand what sort of challenges that might introduce, let's walk through a scenario.
We're going to say that the Aussie Mart CEO has identified the need for reporting to be performed across these two applications.
And the CEO wants those reports to be up and ready by the end of this month.
Let's say that's only a week away.
So the CEO has instructed the finance manager and the finance manager has said, "Hey, awesome.
You know what?
I've found this great app out there on the internet called Reports For You.
We can buy it, download it and install it.
I'm going to go tell the IT team to get this up and running straight away."
So this might sound a little bit familiar to some of you who have worked in traditional IT where sometimes demands can come from the top of the organization and they filter down with really tight timelines.
So let's say for example, the finance manager is going to go along, talk to the IT team and say, "We need this Reports For You application set up by the end of month."
Now the IT team might be a little bit scared because, hey, when we look at the infrastructure we've got, it's supporting those two servers and applications okay, but maybe we don't have much more space.
Maybe we don't have enough storage.
Maybe we are using something like virtualization.
So we might not need to buy a brand new physical server and we can run up a virtual Windows server for the Reports For You application.
But there might just not be enough resources in general.
CPU, memory, storage, whatever it might be to be able to meet the demands of this Reports For You application.
But you've got a timeline.
So you go ahead, you get that server up and running.
You install the applications, the operating system data, all there as quickly as you can to meet these timelines that you've been given by the finance manager.
Now maybe it's not the best server that you've ever built.
It might be a little bit rushed and a little bit squished, but you've managed to get that server up and running with the Reports For You application and you've been able to meet those timelines and provide access to your users.
Now let's say that you've given your users access to this Reports For You application.
When they start that monthly reporting job, the Reports For You application needs to talk to the data across your other two applications, the Aussie Mart Store and the Aussie Mart Comply application.
And it's going to use that data to perform the reporting that the CEO has requested.
So you kick off this report job on a Friday.
You hope that it's going to be complete on a Saturday, but maybe it's not.
You check again on a Sunday and things are starting to get a little bit scary.
And uh-oh, Monday rolls around, the Reports For You report is still running.
It has not yet completed.
And that might not be so great because you don't have a lot of resources on-premises.
And now all of your applications are starting to perform really poorly.
So that Reports For You application is still running.
It's still trying to read data from those other two applications.
And maybe they're getting really, really slow, and let's hope not, but maybe the applications even go offline entirely.
Now those users are going to become pretty angry.
You're going to get a lot of calls to the help desk saying that things are offline.
And you're probably going to have the finance manager and every other manager reaching out to you saying, this needs to be fixed now.
So let's say you managed to push through, perhaps through the rest of Monday, and that report finally finishes.
You clearly need more resources to be able to run this report much more quickly at the end of each month so that you don't have angry users.
So what are you going to do to fix this for the next month when you need to run the report again?
Well, you might have a think about ordering some new software and hardware because you clearly don't have enough hardware on-premises right now.
You're going to have to wait some time for all of that to be delivered.
And then you're going to have to physically receive and store it, set it up, get it running, and make sure that you've got everything you need for Reports For You to run with more CPU and resources next time.
There's a lot of different work that you need to do.
This is one of the traditional IT challenges that we might face when the business has demands and expectations for things to happen quickly.
And it's not really necessarily the CEO or the finance manager's fault.
They are focused on what the business needs.
And when you work in the technology teams, you need to do what you can to support them so that the business can succeed.
So how might we do that a little bit differently with cloud?
Well, with cloud, we could sign up for a cloud provider, we could turn on and off servers as needed, and we could scale up and scale down, scale in and scale out resources, all to meet those demands on a monthly basis.
So that could be a lot less work to do and it could certainly provide you the ability to respond much more quickly to the demands that come from the business.
And rather than having to go out and buy all of this new infrastructure that you are only going to use once a month, well, as we're going to learn throughout this course, one of the many benefits of cloud is that you can turn things on and off really quickly and only pay for what you need.
So what might this look like with cloud?
Well, with cloud, what we might do is no longer have that rushed on-premises server that we were using for Reports For You.
Instead of that, we can go out to a public cloud provider like AWS, GCP or hopefully Azure, and you can set up those servers once again using a range of different features, products that are all available through the various public cloud providers.
Now, yes, in this scenario, we are still talking about setting up a server.
So that is going to take you some time to configure Windows, set up the application, all of the data and configuration that you require, but at least you don't need to worry about the actual physical infrastructure that is supporting that server.
You don't have to go out, talk to your procurement team, talk to a different providers, wait for different physical infrastructure to be delivered and licensing and software and other assets.
With cloud, as we will learn, you can really quickly get online and up and running.
And also, if we had that need to ensure that the Reports For You application was running with lots of different resources at the end of the month, it's much easier when we use cloud to just go and turn some servers on, and then maybe turn them off at the end of the month when they are no longer required.
This is the sort of thing that we are talking about with cloud.
We're only really just scratching the surface of what cloud can do and what cloud actually is.
But my hope is that through this lesson, you can understand how cloud changes things.
Cloud allows us to work with technology in a much different way than we traditionally would work with our on-premises infrastructure.
Another example that shows how cloud is different is that rather than using the Reports For You application, what we might in fact choose to do is go to a public cloud provider, to someone that actually has an equivalent Reports For You solution that's entirely built in the cloud, ready to go.
In this way, not only do we no longer have to manage the underlying physical infrastructure, we don't actually have to manage the application software installation, configuration, and all of that service setup.
With something like a reporting software that's built in the cloud, we would just provide access to our users and only have to pay on a per user basis.
So if you've used something like zoom for meetings or Dropbox for data sharing, that's the sort of solution we're talking about.
So if we consider this scenario for Aussie Mart, we have a think about the benefits that they might access when they use the cloud.
Well, we can much more quickly get access to resources to respond to demand.
If we need to have a lot of different compute capacity working at the end of the month with cloud, like you'll learn, we can easily get access to that.
If we wanted to add lots of users, we could do that much more simply as well.
And something that the finance manager might really be happy about in this scenario is that we aren't going to go back and suggest to them that we need to buy a whole heap of new physical infrastructure right now.
When we think about how Aussie Mart would traditionally have handled this scenario, they would have had to go and buy new physical servers, storage, networking, whatever that might be, to meet the needs of the Reports For You application.
And really, they're probably going to have to strike a balance between having enough infrastructure to ensure that the Reports For You application completes its job quickly, and not buying too much infrastructure that's just going to sit there unused whilst the Reports For You application is not running.
And really importantly, when we go to cloud, this difference of not having to buy lots of physical infrastructure upfront is referred to as capital expenditure versus operational expenditure.
Really, what we're just saying here is rather than spending a whole big lump sum all at once to get what you need, you can just pay on a monthly basis for what you need when you need it.
And finally, one of the other benefits that you'll also see is a reduction in the number of different tasks that we have to complete in terms of IT administration: setup of operating systems, management of physical infrastructure, what the procurement team has to manage, and so on.
Again, right now we're just talking really high level about a fictitious scenario for Aussie Mart to help you to understand the types of things and the types of benefits that we can get access to for cloud.
So hopefully if you're embarking on a cloud journey, you're gonna have a happy finance manager, CEO, and other team members that you're working with as well.
Okay, everybody, so that's a wrap to this lesson on why cloud matters.
As I've said, we're really only just scratching the surface.
This is just to introduce you to a scenario that can help you to understand the types of benefits we get access to with cloud.
As we move throughout this course, we'll progressively dive deeper in terms of what cloud is, how you define it, the features you get access to, and other common concepts and terms.
So thanks for joining me, I'll see you there.
-
learn.cantrill.io
-
Welcome back and in this lesson I want to talk through another type of storage: this time, instance store volumes.
It's essential for all of the AWS exams and real-world usage that you understand the pros and cons of this type of storage.
It can save money, improve performance or it can cause significant headaches so you have to appreciate all of the different factors.
So let's just jump in and get started because we've got a lot to cover.
Instance store volumes provide block storage devices, so raw volumes which can be attached to an instance, presented to the operating system on that instance, and used as the basis for a file system which can then in turn be used by applications.
So far they're just like EBS, only local, instead of being presented over the network.
These volumes are physically connected to one EC2 host, and that's really important.
Each EC2 host has its own instance store volumes and they're isolated to that one particular host.
Instances which are on that host can access those volumes, and because they're locally attached they offer the highest storage performance available within AWS, much higher than EBS can provide, and more on why this is relevant very soon.
They're also included in the price of any instances which they come with.
Different instance types come with different selections of instance store volumes, and for any instances which include instance store volumes they're included in the price of that instance, so it comes down to use it or lose it.
One really important thing about instance store volumes is that you have to attach them at launch time; unlike EBS, you can't attach them afterwards.
I've seen this question come up a few times in various AWS exams, about adding new instance store volumes after instance launch, and it's important that you remember that you can't do this; it's launch time only.
Depending on the instance type you're going to be allocated a certain number of instance store volumes; you can choose to use them or not, but if you don't, you can't adjust this later.
This is how instance store architecture looks.
Each instance can have a collection of volumes which are backed by physical devices on the EC2 host which that instance is running on.
So in this case host A has three physical devices and these are presented as three instance store volumes, and host B has its own three physical devices.
Now in reality EC2 hosts will have many more, but this is a simplified diagram.
Now on host A, instances 1 and 2 are running; instance 1 is using one volume and instance 2 is using the other two, and the volumes are named ephemeral 0, 1 and 2.
Roughly the same architecture is present on host B, but instance 3 is the only instance running on that host, and it's using the ephemeral 1 and ephemeral 2 volumes.
Now these are ephemeral volumes; they're temporary storage, and as a solutions architect, developer or engineer you need to think of them as such.
If instance 1 stored some data on the ephemeral 0 volume on EC2 host A, let's say a cat picture, and then for some reason the instance migrated from host A through to host B, then it would still have access to an ephemeral 0 volume, but it would be a new physical volume, a blank block device.
So this is important: if an instance moves between hosts, then any data that was present on the instance store volumes is lost, and instances can move between hosts for many reasons.
If they're stopped and started this causes a migration between hosts, or, as another example, if host A was undergoing maintenance then instances would be migrated to a different host.
When instances move between hosts they're given new blank ephemeral volumes; data on the old volumes is lost. They're wiped before being reassigned, but the data is gone. Even something like changing an instance type will cause an instance to move between hosts, and that instance will no longer have access to the same instance store volumes.
This is another risk to keep in mind; you should view all instance store volumes as ephemeral.
The other danger to keep in mind is hardware failure: if a physical volume fails, say the ephemeral 1 volume on EC2 host A, then instance 2 would lose whatever data was on that volume.
These are ephemeral volumes, so treat them as such; they're temporary, and they should not be used for anything where persistence is required.
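The rules above can be sketched as a tiny decision table. This is a hypothetical helper, not an AWS API; the event names are made up for illustration, but the survive/wipe outcomes follow the behaviour described above (note an OS-level reboot keeps the instance on the same host, so instance store data survives it, unlike a stop and start).

```python
# Hypothetical helper (not an AWS API): models which lifecycle events
# wipe data on instance store volumes, per the rules described above.
WIPES_INSTANCE_STORE = {
    "reboot": False,                  # OS reboot stays on the same host
    "stop_start": True,               # instance migrates to a new host
    "instance_type_change": True,     # also moves the instance between hosts
    "host_maintenance_migration": True,
    "volume_hardware_failure": True,
}

def instance_store_data_survives(event: str) -> bool:
    """Return True if data on instance store volumes survives the event."""
    return not WIPES_INSTANCE_STORE[event]

print(instance_store_data_survives("reboot"))      # True
print(instance_store_data_survives("stop_start"))  # False
```

The takeaway: anything that moves the instance to a different host, or breaks the physical device, loses the data.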
Now the size of instance store volumes and the number of volumes available to an instance vary depending on the type and size of the instance.
Some instance types don't support instance store volumes, different instance types come with different types of instance store volumes, and as you increase in size you're generally allocated larger numbers of these volumes, so that's something that you need to keep in mind.
One of the primary benefits of instance store volumes is performance you can achieve much higher levels of throughput and more IOPS by using instance store volumes versus EBS.
I won't consume your time by going through every example, but some of the higher-end figures that you need to consider are things like this: if you use a D3 instance, which is storage optimized, then you can achieve 4.6 GB per second of throughput, and this instance type provides large amounts of storage using traditional hard disks, so it's really good value for large amounts of storage.
It provides much higher levels of throughput than the maximums available when using HDD-based EBS volumes.
The I3 series, another storage optimized family of instances, provides NVMe SSDs with up to 16 GB per second of throughput, which is significantly higher than even the most high-performance EBS volumes can provide, and the difference in IOPS is even more pronounced versus EBS, with certain I3 instances able to provide 2 million read IOPS and 1.6 million write IOPS when optimally configured.
In general instance store volumes perform to a much higher level versus the equivalent storage in EBS.
I'll be doing a comparison of EBS versus instance store elsewhere in this section which will help you in situations where you need to assess suitability but these are some examples of the raw figures.
Now before we finish this lesson just a number of exam power-ups.
Instance store volumes are local to an EC2 host, so if an instance does move between hosts you lose access to the data on those volumes. You can only add instance store volumes to an instance at launch time; if you don't add them, you cannot come back later and add more. And any data on instance store volumes is lost if that instance moves between hosts, if it gets resized, or if you have either a host failure or a specific volume hardware failure.
Now in exchange for all of these restrictions, instance store volumes provide high performance, the highest data performance that you can achieve within AWS; you just need to be willing to accept the shortcomings around the risk of data loss, its temporary nature, and the fact that data doesn't survive stops and starts, moves, or resizes.
It's essentially a performance trade-off: you're getting much faster storage as long as you can tolerate all of the restrictions.
Now with instance store volumes you pay for them anyway; they're included in the price of an instance, so generally when you're provisioning an instance which does come with instance store volumes there's no advantage to not utilizing them. You can decide not to use them inside the OS, but you can't physically add them to the instance at a later date.
Just to reiterate, and I'm going to keep repeating this throughout this section of the course: instance store volumes are temporary. You cannot use them for any data that you rely on or data which is not replaceable, so keep that in mind. They give you amazing performance, but they are not for the persistent storage of data. At this point that's all of the theory that I wanted to cover, so that's the architecture and some of the performance trade-offs and benefits that you get with instance store volumes. Go ahead and complete this video, and when you're ready, join me in the next, which will be an architectural comparison of EBS and instance store to help you pick between the two in exam situations.
-
learn.cantrill.io
-
Welcome back and in this lesson I want to talk about the Hard Disk Drive or HDD-based volume types provided by EBS.
HDD-based means they have moving parts: platters which spin, and little robot arms known as heads which move across those spinning platters.
Moving parts mean slower, which is why you'd only want to use these volume types in very specific situations.
Now let's jump straight in and look at the types of situations where you would want to use HDD-based storage.
Now there are two types of HDD-based storage within EBS.
Well that's not true, there are actually three but one of them is legacy.
So I'll be covering the two ones which are in general usage.
And those are ST1 which is throughput optimized HDD and SC1 which is cold HDD.
So think about ST1 as the fast hard drive not very agile but pretty fast and think about SC1 as cold.
ST1 is cheap, it's less expensive than the SSD volumes which makes it ideal for any larger volumes of data.
SC1 is even cheaper but it comes with some significant trade-offs.
Now ST1 is designed for data which is sequentially accessed because it's HDD-based it's not great at random access.
It's more designed for data which needs to be written or read in a fairly sequential way.
Applications where throughput and economy is more important than IOPS or extreme levels of performance.
ST1 volumes range from 125 GB to 16 TB in size and you have a maximum of 500 IOPS.
But, and this is important, IO on HDD-based volumes is measured in 1 MB blocks.
So 500 IOPS means 500 MB per second.
Now these are maximums; HDD-based storage works in a similar way to how GP2 volumes work, with a credit bucket.
Only with HDD-based volumes it's done around MB per second rather than IOPS.
So with ST1 you have a baseline performance of 40 MB per second for every 1 TB of volume size.
And you can burst to a maximum of 250 MB per second for every TB of volume size.
Obviously up to the maximum of 500 IOPS and 500 MB per second.
ST1 is designed for when cost is a concern but you need frequent access storage for throughput intensive sequential workloads.
So things like big data, data warehouses and log processing.
Now SC1 on the other hand is designed for infrequent workloads.
It's geared towards maximum economy when you just want to store lots of data and don't care about performance.
So it offers a maximum of 250 IOPS.
Again this is with a 1 MB IO size.
So this means a maximum of 250 MB per second of throughput.
And just like with ST1 this is based on the same credit pool architecture.
So it has a baseline of 12 MB per second per TB of volume size and a burst of 80 MB per second per TB of volume size.
So you can see that this offers significantly less performance than ST1 but it's also significantly cheaper.
And just like with ST1 volumes can range from 125 GB to 16 TB in size.
This storage type is the lowest cost EBS storage available.
It's designed for less frequently accessed workloads.
So if you have colder data, archives or anything which requires less than a few loads or scans per day then this is the type of storage volume to pick.
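The ST1 and SC1 figures above can be combined into one small sketch. The function name and structure are my own; the numbers are the per-TB baselines, bursts, and per-volume maximums quoted above.

```python
def hdd_volume_performance(volume_type: str, size_tb: float):
    """Baseline and burst throughput (MB/s) for an ST1 or SC1 volume.

    ST1: 40 MB/s per TB baseline, 250 MB/s per TB burst, 500 MB/s volume max.
    SC1: 12 MB/s per TB baseline,  80 MB/s per TB burst, 250 MB/s volume max.
    """
    params = {"st1": (40, 250, 500), "sc1": (12, 80, 250)}
    base_per_tb, burst_per_tb, cap = params[volume_type]
    baseline = min(base_per_tb * size_tb, cap)
    burst = min(burst_per_tb * size_tb, cap)
    return baseline, burst

# A 2 TB ST1 volume: 80 MB/s baseline, bursting to the 500 MB/s volume cap.
print(hdd_volume_performance("st1", 2))  # (80, 500)
print(hdd_volume_performance("sc1", 1))  # (12, 80)
```

As with GP2, consuming above the baseline drains the credit bucket and consuming below it lets the bucket refill.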
And that's it for HDD based storage.
Both of these are lower cost and lower performance versus SSD.
Designed for when you need economy of data storage.
Picking between them is simple.
If you can tolerate the trade-offs of SC1 then use that.
It's super cheap and for anything which isn't day to day accessed it's perfect.
Otherwise choose ST1.
And if you have a requirement for anything IOPS based then avoid both of these and look at SSD based storage.
With that being said though that's everything that I wanted to cover in this lesson.
Thanks for watching.
Go ahead and complete the video and when you're ready I'll look forward to you joining me in the next.
-
learn.cantrill.io
-
Welcome back and in this lesson I want to continue my EBS series and talk about provisioned IOPS SSD.
So that means IO1 and IO2.
Let's jump in and get started straight away because we do have a lot to cover.
Strictly speaking there are now three types of provisioned IOPS SSD.
Two which are in general release IO1 and its successor IO2 and one which is in preview which is IO2 Block Express.
Now they all offer slightly different performance characteristics and different prices, but the common factor is that IOPS are configurable independently of the size of the volume, and they're designed for super high performance situations where low latency, and consistency of that low latency, are both important characteristics.
With IO1 and IO2 you can achieve a maximum of 64,000 IOPS per volume, which is four times the maximum for GP2 and GP3, and with IO1 and IO2 you can achieve 1,000 MB per second of throughput.
This is the same as GP3 and significantly more than GP2.
Now IO2 Block Express takes this to another level.
With Block Express you can achieve 256,000 IOPS per volume and 4000 MB per second of throughput per volume.
In terms of the volume sizes that you can use with provisioned IOPS SSDs with IO1 and IO2 it ranges from 4 GB to 16 TB and with IO2 Block Express you can use larger up to 64 TB volumes.
Now I mentioned that with these volumes you can allocate IOPS performance values independently of the size of the volume.
Now this is useful when you need extreme performance for smaller volumes, or when you just need extreme performance in general, but there is a maximum size-to-performance ratio.
For IO1 it's 50 IOPS per GB of size so this is more than the 3 IOPS per GB for GP2.
For IO2 this increases to 500 IOPS per GB of volume size and for Block Express this is 1000 IOPS per GB of volume size.
Now these are all maximums and with these types of volumes you pay for both the size and the provisioned IOPS that you need.
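The ratio-plus-cap logic above can be sketched in a few lines. The function name is illustrative; the ratios (50, 500, and 1,000 IOPS per GB) and the per-volume caps (64,000 and 256,000 IOPS) are the figures quoted above.

```python
def max_provisionable_iops(volume_type: str, size_gb: int) -> int:
    """Max IOPS you can provision for a volume of this size.

    Limited by both the per-GB ratio and the per-volume IOPS cap.
    """
    params = {
        "io1": (50, 64_000),
        "io2": (500, 64_000),
        "io2_block_express": (1000, 256_000),
    }
    ratio, cap = params[volume_type]
    return min(size_gb * ratio, cap)

# A 100 GB IO1 volume is ratio-limited to 5,000 IOPS; the same size IO2
# volume can reach 50,000, still under the 64,000 per-volume cap.
print(max_provisionable_iops("io1", 100))  # 5000
print(max_provisionable_iops("io2", 100))  # 50000
print(max_provisionable_iops("io2", 1000))  # 64000 (cap applies)
```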
Now because with these volume types you're dealing with extreme levels of performance there is also another restriction that you need to be aware of and that's the per instance performance.
There is a maximum performance which can be achieved between the EBS service and a single EC2 instance.
Now this is influenced by a few things.
The type of volumes so different volumes have a different maximum per instance performance level, the type of the instance and then finally the size of the instance.
You'll find that only the most modern and largest instances support the highest levels of performance and these per instance maximums will also be more than one volume can provide on its own and so you're going to need multiple volumes to saturate this per instance performance level.
With IO1 volumes you can achieve a maximum of 260,000 IOPS per instance and a throughput of 7,500 MB per second.
This means you'll need just over four volumes operating at maximum performance to achieve this per-instance limit.
Oddly enough, IO2 is slightly less, at 160,000 IOPS for an entire instance and 4,750 MB per second, and that's because AWS has split these new generation volume types.
They've added Block Express, which can achieve 260,000 IOPS and 7,500 MB per second as an instance maximum.
So it's important that you understand that these are per instance maximums so you need multiple volumes all operating together and think of this as a performance cap for an individual EC2 instance.
Now these are the maximums for the volume types but you also need to take into consideration any maximums for the type and size of the instance so all of these things need to align in order to achieve maximum performance.
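The "just over four volumes" point above falls out of a one-line calculation; rounding up gives the number of max-performance volumes you'd actually need to saturate the instance cap. This is a sketch with illustrative names, using the IO1 figures quoted above.

```python
import math

def volumes_to_saturate(instance_max_iops: int, per_volume_max_iops: int) -> int:
    """Volumes (each at its per-volume max) needed to hit the instance cap."""
    return math.ceil(instance_max_iops / per_volume_max_iops)

# IO1: a 260,000 IOPS instance maximum versus 64,000 IOPS per volume.
# 260,000 / 64,000 is just over 4, so in practice you need a fifth volume.
print(volumes_to_saturate(260_000, 64_000))  # 5
```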
Now keep these figures locked in your mind it's not so much about the exact numbers but having a good idea about the levels of performance that you can achieve with GP2 or GP3 and then IO1, IO2 and IO2 block express will really help you in real-world situations and in the exam.
Instance store volumes which we're going to be covering elsewhere in this section can achieve even higher performance levels but this comes with a serious limitation in that it's not persistent but more on that soon.
Now as a comparison, the per-instance maximums for GP2 and GP3 are 260,000 IOPS and 7,000 MB per second per instance.
Again don't focus too much on the exact numbers but you need to have a feel for the ranges that these different types of storage volumes occupy versus each other and versus instance store.
Now you'll be using provisioned IOPS SSD for anything which needs really low latency or sub millisecond latency, consistent latency and higher levels of performance.
One common use case is when you have smaller volumes but need super high performance and that's only achievable with IO1, IO2 and IO2 block express.
Now that's everything that I wanted to cover in this lesson.
Again if you're doing the sysops or developer streams there's going to be a demo lesson where you'll experience the storage performance levels.
For the architecture stream this theory is enough.
At this point though thanks for watching that's everything I wanted to cover go ahead and complete the video and when you're ready I look forward to you joining me in the next.
-
learn.cantrill.io
-
Welcome back and in this lesson I want to talk about two volume types available within AWS GP2 and GP3.
Now GP2 is the default general purpose SSD based storage provided by EBS.
GP3 is a newer storage type which I want to include because I expect it to feature on all of the exams very soon.
Now let's just jump in and get started.
General Purpose SSD storage provided by EBS was a game changer when it was first introduced.
It's high performance storage for a fairly low price.
Now GP2 was the first iteration and it's what I'm going to be covering first because it has a simple but initially difficult to understand architecture.
So I want to get this out of the way first because it will help you understand the different storage types.
When you first create a GP2 volume it can be as small as 1 GB or as large as 16 TB.
And when you create it the volume is created with an I/O credit allocation.
Think of this like a bucket.
So an I/O is one input output operation.
An I/O credit is a 16 KB chunk of data.
So an I/O is one chunk of 16 kilobytes in one second.
If you're transferring a 160 KB file, that represents 10 I/O blocks of data.
So 10 blocks of 16 KB.
And if you do that all in one second, that's 10 credits in one second.
So 10 IOPS.
When you aren't using the volume much, you aren't using many IOPS and you aren't using many credits.
During periods of high disk load you're going to be pushing a volume hard, and because of that it's consuming more credits.
For example during system boots, backups or heavy database work.
Now if you have no credits in this I/O bucket, you can't perform any I/O on the disk.
The I/O bucket has a capacity of 5.4 million I/O credits.
And it fills at the baseline performance rate of the volume.
So what does this mean?
Well every volume has a baseline performance based on its size with a minimum.
So streaming into the bucket at all times is a 100 I/O credits per second refill rate.
This means as an absolute minimum regardless of anything else you can consume 100 I/O credits per second which is 100 I/Ops.
Now the actual baseline rate which you get with GP2 is based on the volume size.
You get 3 I/O credits per second per GB of volume size.
This means that a 100 GB volume gets 300 I/O credits per second refilling the bucket.
Anything below 33.33 recurring GB gets this 100 I/O minimum.
Anything above 33.33 recurring gets 3 times the size of the volume as a baseline performance rate.
Now you aren't limited to only consuming at this baseline rate.
By default GP2 can burst up to 3,000 IOPS, so you can do up to 3,000 input output operations of 16 KB in one second.
And that's referred to as your burst rate.
It means that if you have heavy workloads which aren't constant you aren't limited by your baseline performance rate of 3 times the GB size of the volume.
So you can have a small volume which has periodic heavy workloads and that's OK.
What's even better is that the credit bucket it starts off full so 5.4 million I/O credits.
And this means that you could run it at 3,000 IOPS, so 3,000 I/O per second, for a full 30 minutes.
And that assumes that your bucket isn't filling up with new credits which it always is.
So in reality you can run at full burst for much longer.
And this is great if your volumes are used initially for any really heavy workloads because this initial allocation is a great buffer.
The key takeaway at this point is if you're consuming more I/O credits than the rate at which your bucket is refilling then you're depleting the bucket.
So if you burst up to 3,000 IOPS and your baseline performance is lower, then over time you're depleting your credit bucket.
If you're consuming less than your baseline performance then your bucket is replenishing.
And one of the key factors of this type of storage is the requirement that you manage all of the credit buckets of all of your volumes.
So you need to ensure that they're staying replenished and not depleting down to zero.
Now because every volume is credited with 3 I/O credits per second for every GB in size, volumes which are up to 1 TB in size they'll use this I/O credit architecture.
But for volumes larger than 1 TB they will have a baseline equal to or exceeding the burst rate of 3000.
And so they will always achieve their baseline performance as standard.
They don't use this credit system.
The maximum IOPS for GP2 is currently 16,000.
So any volumes above 5.33 recurring TB in size achieves this maximum rate constantly.
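The GP2 arithmetic above can be sketched like this. The function names are mine; the constants are the ones quoted above: a baseline of 3 IOPS per GB with a 100 IOPS floor and a 16,000 IOPS cap, a 5.4 million credit bucket that starts full, and a 3,000 IOPS burst.

```python
def gp2_baseline_iops(size_gb: float) -> float:
    """GP2 baseline: 3 IOPS per GB, 100 IOPS floor, 16,000 IOPS cap."""
    return min(max(3 * size_gb, 100), 16_000)

BUCKET_CAPACITY = 5_400_000  # I/O credits; the bucket starts full
BURST_IOPS = 3_000

def full_burst_seconds() -> float:
    """How long a full bucket sustains the burst, ignoring the refill rate."""
    return BUCKET_CAPACITY / BURST_IOPS

print(gp2_baseline_iops(100))  # 300
print(gp2_baseline_iops(10))   # 100 (the minimum applies)
print(full_burst_seconds())    # 1800.0 seconds, i.e. 30 minutes
```

In practice the bucket is also refilling at the baseline rate while you burst, which is why real workloads can sustain the burst for longer than 30 minutes.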
GP2 is a really flexible type of storage which is good for general usage.
At the time of creating this lesson it's the default but I expect that to change over time to GP3 which I'm going to be talking about next.
GP2 is great for boot volumes, for low latency interactive applications or for dev and test environments.
Anything where you don't have a reason to pick something else.
It can be used for boot volumes and as I've mentioned previously it is currently the default.
Again over time I expect GP3 to replace this as it's actually cheaper in most cases but more on this in a second.
You can also use the elastic volume feature to change the storage type between GP2 and all of the others.
And I'll be showing you how that works in an upcoming lesson if you're doing the SysOps or Developer Associate courses.
If you're doing the architecture stream then this architecture theory is enough.
At this point I want to move on and explain exactly how GP3 is different.
GP3 is also SSD based but it removes the credit bucket architecture of GP2 for something much simpler.
Every GP3 volume, regardless of size, starts with a standard 3,000 IOPS, so 3,000 16 KB operations per second, and it can transfer 125 MB per second.
That's standard regardless of volume size, and just like GP2, volumes can range from 1 GB through to 16 TB.
Now the base price for GP3 at the time of creating this lesson is 20% cheaper than GP2.
So if you only intend to use up to 3,000 IOPS then it's a no-brainer: you should pick GP3 rather than GP2.
If you need more performance then you can pay for up to 16000 IOPS and up to 1000 MB per second of throughput.
And even with those extras generally it works out to be more economical than GP2.
GP3 offers a higher max throughput as well, so you can get up to 1,000 MB per second versus the 250 MB per second maximum of GP2.
So GP3 is just simpler to understand for most people versus GP2 and I think over time it's going to be the default.
For now though at the time of creating this lesson GP2 is still the default.
In summary, GP3 is like GP2 and IO1 (which I'll cover soon) had a baby.
You get some of the benefits of both in a new type of general purpose SSD storage.
Now the usage scenarios for GP3 are also much the same as GP2.
So virtual desktops, medium sized databases, low latency applications, dev and test environments and boot volumes.
You can safely swap GP2 to GP3 at any point, but just be aware that for anything above 3,000 IOPS the performance doesn't get added automatically like with GP2, which scales on size.
With GP3 you need to add these extra IOPS, which come at an extra cost, and it's the same with any additional throughput.
Beyond the 125 MB per second standard it's an additional extra, but still, even including those extras, for most things this storage type is more economical than GP2.
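One quick way to see when GP3's included 3,000 IOPS already covers what GP2 would have given you at a particular size: compare it against the GP2 baseline formula. This sketch deliberately ignores GP2's burst behaviour and is purely about baselines; the function name is mine.

```python
def gp3_standard_covers_gp2(size_gb: float) -> bool:
    """True if GP3's included 3,000 IOPS matches or beats GP2's baseline
    at this volume size (ignoring GP2 burst credits)."""
    gp2_baseline = min(max(3 * size_gb, 100), 16_000)
    return 3_000 >= gp2_baseline

# Up to 1 TB, GP2's baseline is at most 3,000, so GP3's standard
# allocation covers it; beyond that you'd pay for extra GP3 IOPS.
print(gp3_standard_covers_gp2(500))   # True  (GP2 baseline: 1,500)
print(gp3_standard_covers_gp2(2000))  # False (GP2 baseline: 6,000)
```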
At this point that's everything that I wanted to cover about the general purpose SSD volume types in this lesson.
Go ahead, complete the lesson and then when you're ready, I'll look forward to you joining me in the next.
-
learn.cantrill.io
-
Welcome back and in this lesson I want to quickly step through the basics of the Elastic Block Store service known as EBS.
You'll be using EBS directly or indirectly, constantly as you make use of the wider AWS platform and as such you need to understand what it does, how it does it and the product's limitations.
So let's jump in and get started straight away as we have a lot to cover.
EBS is a service which provides block storage.
Now you should know what that is by now.
It's storage which can be addressed using block IDs.
So EBS takes raw physical disks and it presents an allocation of those physical disks and this is known as a volume and these volumes can be written to or read from using a block number on that volume.
Now volumes can be unencrypted or you can choose to encrypt the volume using KMS and I'll be covering that in a separate lesson.
Now when you attach a volume to an EC2 instance, the instance sees a block device, raw storage, and it can use this to create a file system on top of it, such as EXT3, EXT4, XFS and many more in the case of Linux, or alternatively NTFS in the case of Windows.
The important thing to grasp is that EBS volumes appear just like any other storage device to an EC2 instance.
Now storage is provisioned in one availability zone.
I can't stress enough the importance of this.
EBS in one availability zone is different from EBS in another availability zone, and different again from EBS in an AZ in another region.
EBS is an availability zone service.
It's separate and isolated within that availability zone.
It's also resilient within that availability zone, so if a physical storage device fails there's some built-in resiliency, but if you do have a major AZ failure then the volumes created within that availability zone will likely fail, as will instances in that availability zone.
Now with EBS you create a volume and you generally attach it to one EC2 instance over a storage network.
With some storage types you can use a feature called Multi-Attach which lets you attach it to multiple EC2 instances at the same time and this is used for clusters but if you do this the cluster application has to manage it so you don't overwrite data and cause data corruption by multiple writes at the same time.
You should by default think of EBS volumes as things which are attached to one instance at a time but they can be detached from one instance and then reattached to another.
EBS volumes are not linked to the instance lifecycle of one instance.
They're persistent.
If an instance moves between different EC2 hosts then the EBS volume follows it.
If an instance stops and starts or restarts the volume is maintained.
An EBS volume is created, it has data added to it and it's persistent until you delete that volume.
Now even though EBS is an availability zone based service you can create a backup of a volume into S3 in the form of a snapshot.
Now I'll be covering these in a dedicated lesson but snapshots in S3 are now regionally resilient so the data is replicated across availability zones in that region and it's accessible in all availability zones.
So you can take a snapshot of a volume in availability zone A and when you do so EBS stores that data inside a portion of S3 that it manages and then you can use that snapshot to create a new volume in a different availability zone.
For example availability zone B and this is useful if you want to migrate data between availability zones.
Now don't worry I'll be covering how snapshots work in detail including a demo later in this section.
For now I'm just introducing them.
EBS can provision volumes based on different physical storage types, SSD based, high performance SSD and volumes based on mechanical disks and it can also provision different sizes of volumes and volumes with different performance profiles all things which I'll be covering in the upcoming lessons.
For now again this is just an introduction to the service.
The last point which I want to cover about EBS is that you're billed using a gigabyte-month metric, so the price of one gig for one month would be the same as two gig for half a month, and the same as half a gig for two months.
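The equivalence here is just size multiplied by time. A minimal sketch, using a made-up illustrative rate rather than any real AWS price:

```python
def ebs_gb_month_cost(size_gb: float, months: float, price_per_gb_month: float) -> float:
    """Base EBS storage cost: size x time x rate (illustrative, not a real price)."""
    return size_gb * months * price_per_gb_month

rate = 0.10  # hypothetical $ per GB-month
# 1 GB for 1 month == 2 GB for half a month == 0.5 GB for 2 months
print(ebs_gb_month_cost(1, 1, rate))
print(ebs_gb_month_cost(2, 0.5, rate))
print(ebs_gb_month_cost(0.5, 2, rate))
```

All three calls produce the same charge, which is exactly the gigabyte-month point being made above.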
Now there are some extras for certain types of volumes for certain enhanced performance characteristics but I'll be covering that in the dedicated lessons which are coming up next.
For now before we finish this service introduction let's take a look visually at how this architecture fits together.
So we're going to start with two regions in this example, us-east-1 and ap-southeast-2, and then in those regions we've got some availability zones, AZ A and AZ B, and then another availability zone in ap-southeast-2, and then finally the S3 service, which is running in all availability zones in both of those regions.
Now EBS, as I keep stressing and will stress more, is availability zone based, so in the cut-down example which I'm showing in us-east-1 you've got two availability zones, and so two separate deployments of EBS, one in each availability zone, and that's just the same architecture as you have with EC2.
You have different sets of EC2 hosts in every availability zone.
Now visually let's say that you have an EC2 instance in availability zone A.
You might create an EBS volume within that same availability zone and then attach that volume to the instance so critically both of these are in the same availability zone.
You might have another instance which this time has two volumes attached to it and over time you might choose to detach one of those volumes and then reattach it to another instance in the same availability zone and that's doable because EBS volumes are separate from EC2 instances.
It's a separate product with separate life cycles.
Now you can have the same architecture in availability zone B where volumes can be created and then attached to instances in that same availability zone.
What you cannot do, and I'm stressing this for the 57th time (small print: it might not actually be 57, but it's close), is communicate cross availability zone with storage.
So the instance in availability zone B cannot communicate with and so logically cannot attach to any volumes in availability zone A.
It's an availability zone service so no cross AZ attachments are possible.
Now EBS replicates data within an availability zone, so the data on a volume is replicated across multiple physical devices in that AZ, but, and this is important, the failure of an entire availability zone is going to impact all volumes within that availability zone.
Now to resolve that you can snapshot volumes to S3 and this means that the data is now replicated as part of that snapshot across AZs in that region so that gives you additional resilience and it also gives you the ability to create an EBS volume in another availability zone from this snapshot.
You can even copy the snapshot to another AWS region, in this example ap-southeast-2, and once you've copied the snapshot it can be used in that other region to create a volume, and that volume can then be attached to an EC2 instance in an availability zone in that region.
So that at a high level is the architecture of EBS.
Now depending on what course you're studying there will be other areas that you need to deep dive on so over the coming section of the course we're going to be stepping through the features of EBS which you'll need to understand and these will differ depending on the exam but you will be learning everything you need for the particular exam that you're studying for.
At this point that's everything I wanted to cover so go ahead finish this lesson and when you're ready I look forward to you joining me in the next.
learn.cantrill.io
Welcome back.
Over the next few lessons and the wider course, we'll be covering storage a lot.
And the exam expects you to know the appropriate type of storage to pick for a given situation.
So before we move on to the AWS specific storage lessons, I wanted to quickly do a refresher.
So let's get started.
Let's start by covering some key storage terms.
First is direct attached or local attached storage.
This is storage, so physical disks, which are connected directly to a device, so a laptop or a server.
In the context of EC2, this storage is directly connected to the EC2 hosts and it's called the instance store.
Directly attached storage is generally super fast because it's directly attached to the hardware, but it suffers from a number of problems.
If the disk fails, the storage can be lost.
If the hardware fails, the storage can be lost.
If an EC2 instance moves between hosts, the storage can be lost.
The alternative is network attached storage, which is where volumes are created and attached to a device over the network.
In on-premises environments, this uses protocols such as iSCSI or Fiber Channel.
In AWS, it uses a product called Elastic Block Store, known as EBS.
Network storage is generally highly resilient and is separate from the instance hardware, so the storage can survive issues which impact the EC2 host.
The next term is ephemeral storage and this is just temporary storage, storage which doesn't exist long-term, storage that you can't rely on to be persistent.
And persistent storage is the next point, storage which exists as its own thing.
It lives on past the lifetime of the device that it's attached to, in this case, EC2 instances.
So an example of ephemeral storage, so temporary storage, is the instance store, so the physical storage that's attached to an EC2 host.
This is ephemeral storage.
You can't rely on it, it's not persistent.
An example of persistent storage in AWS is the network attached storage delivered by EBS.
Remember that, it's important for the exam.
You will get questions testing your knowledge of which types of storage are ephemeral and persistent.
Okay, next I want to quickly step through the three main categories of storage available within AWS.
The category of storage defines how the storage is presented either to you or to a server and also what it can be used for.
Now the first type is block storage.
With block storage, you create a volume, for example inside EBS, and the red object on the right is a volume of block storage. A volume of block storage has a number of addressable blocks, the cubes with the hash symbol.
It could be a small number of blocks or a huge number, that depends on the size of the volume, but there's no structure beyond that.
Block storage is just a collection of addressable blocks presented either logically as a volume or as a blank physical hard drive.
Generally when you present a unit of block storage to a server, so a physical disk or a volume, on top of this, the operating system creates a file system.
So it takes the raw block storage, it creates a file system on top of this, for example, NTFS or EXT3 or many other different types of file systems and then it mounts that, either as a C drive in Windows operating systems or the root volume in Linux.
Now block storage comes in the form of spinning hard disks or SSDs, so physical media that's block storage or delivered as a logical volume, which is itself backed by different types of physical storage, so hard disks or SSDs.
In the physical world, network attached storage systems or storage area network systems provide block storage over the network and a simple hard disk in a server is an example of physical block storage.
The key thing is that block storage has no inbuilt structure, it's just a collection of uniquely addressable blocks.
It's up to the operating system to create a file system and then to mount that file system and that can be used by the operating system.
So with block storage in AWS, you can mount a block storage volume, so you can mount an EBS volume and you can also boot off an EBS volume.
So most EC2 instances use an EBS volume as their boot volume and that's what stores the operating system, and that's what's used to boot the instance and start up that operating system.
Now next up, we've got file storage and file storage in the on-premises world is provided by a file server.
It's provided as a ready-made file system with a structure that's already there.
So you can take a file system, you can browse to it, you can create folders and you can store files on there.
You access the files by knowing the folder structure, so traversing that structure, locating the file and requesting that file.
You cannot boot from file storage because the operating system doesn't have low-level access to the storage.
Instead of accessing tiny blocks and being able to create your own file system as the OS wants to, with file storage, you're given access to a file system normally over the network by another product.
So file storage in some cases can be mounted, but it cannot be used for booting.
So inside AWS, there are a number of file storage or file system-style products.
And in a lot of cases, these can be mounted into the file system of an operating system, but they can't be used to boot.
Now lastly, we have object storage and this is a very abstract system where you just store objects.
There is no structure, it's just a flat collection of objects.
And an object can be anything, it can have attached metadata, but to retrieve an object, you generally provide a key and in return for providing the key and requesting to get that object, you're provided with that object's value, which is the data back in return.
And objects can be anything: they can be binary data, they can be images, they can be movies, they can be cat pictures, like the one in the middle here that we've got of Whiskers.
They can really be any data that's stored inside an object.
The key thing about object storage though is it is just flat storage.
It's flat, it doesn't have a structure.
You just have a container.
In AWS's case, it's S3 and inside that S3 bucket, you have objects.
But the benefits of object storage is that it's super scalable.
It can be accessed by thousands or millions of people simultaneously, but it's generally not mountable inside a file system and it's definitely not bootable.
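A simple way to picture object storage is as a flat mapping of keys to values plus metadata. The toy class below is a conceptual model only, not the S3 API, and the object names are made up:

```python
# A toy model of object storage: a flat namespace mapping keys to
# object data plus metadata. Conceptual only, NOT the real S3 API.
class ObjectStore:
    def __init__(self):
        self._objects = {}  # flat: no folders, no hierarchy

    def put(self, key, value, metadata=None):
        self._objects[key] = {"value": value, "metadata": metadata or {}}

    def get(self, key):
        # Provide a key, receive that object's value (its data) back.
        return self._objects[key]["value"]

bucket = ObjectStore()
bucket.put("whiskers.jpg", b"\xff\xd8...", metadata={"type": "cat-picture"})
print(bucket.get("whiskers.jpg"))
```

Notice there is no folder structure anywhere in the model: retrieval is purely key in, value out, which is what makes this style of storage so easy to scale.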
So it's really important that you understand the differences between these three main types of storage.
So generally in the on-premises world and in AWS, if you want to utilize storage to boot from, it will be block storage.
If you want to utilize high performance storage inside an operating system, it will also be block storage.
If you want to share a file system across multiple different servers or clients or have them accessed by different services, that can often be file storage.
If you want to read and write object data at scale, so if you're making a web-scale application and storing the biggest collection of cat pictures in the world, that is ideal for object storage because it is almost infinitely scalable.
Now let's talk about storage performance.
There are three terms which you'll see when anyone's referring to storage performance.
There's the IO or block size, the input output operations per second, pronounced IOPS, and then the throughput.
So the amount of data that can be transferred in a given second, generally expressed in megabytes per second.
Now these things cannot exist in isolation.
You can think of IOPS as the speed at which the engine of a race car runs at, the revolutions per second.
You can think of the IO or block size as the size of the wheels of the race car.
And then you can think of the throughput as the end speed of the race car.
So the engine of a race car spins at a certain number of revolutions; you might have a transmission that affects that slightly, but that power is delivered to the wheels, and based on their size, that causes you to go at a certain speed.
In theory in isolation, if you increase the size of the wheels or increase the revolutions of the engine, you would go faster.
For storage, as in the analogy I just provided, these values are all related to each other.
The possible throughput a storage system can achieve is the IO or the block size multiplied by the IOPS.
As we talk about these three performance aspects, keep in mind that a physical storage device, a hard disk or an SSD, isn't the only thing involved in that chain of storage.
When you're reading or writing data, it starts with the application, then the operating system, then the storage subsystem, then the transport mechanism to get the data to the disk, the network or the local storage bus, such as SATA, and then the storage interface on the drive, the drive itself and the technology that the drive uses.
These are all components of that chain.
Any point in that chain can be a limiting factor and it's the lowest common denominator of that entire chain that controls the final performance.
Now IO or block size is the size of the blocks of data that you're writing to disk.
It's expressed in kilobytes or megabytes and it can range from pretty small sizes to pretty large sizes.
An application can choose to write or read data of any size, and that data will either consume the block size as a minimum or be split up over multiple blocks as it's written to disk.
If your storage block size is 16 kilobytes and you write 64 kilobytes of data, it will use four blocks.
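That block-count arithmetic is just a round-up division, which can be sketched in a couple of lines:

```python
import math

def blocks_needed(data_kb, block_size_kb):
    """Data is split across whole blocks, so round up to a full block."""
    return math.ceil(data_kb / block_size_kb)

# 64 KB of data written with a 16 KB block size uses four blocks.
print(blocks_needed(64, 16))   # 4
# A 10 KB write still consumes a whole 16 KB block.
print(blocks_needed(10, 16))   # 1
```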
Now IOPS measures the number of IO operations the storage system can support in a second.
So how many reads or writes that a disk or a storage system can accommodate in a second?
Using the car analogy, it's the revolutions per second that the engine can generate given its default wheel size.
Now certain media types are better at delivering high IOPS versus other media types and certain media types are better at delivering high throughput versus other media types.
If you use network storage versus local storage, the network can also impact how many IOPS can be delivered.
Higher latency between a device that uses network storage and the storage itself can massively impact how many operations you can do in a given second.
Now throughput is the rate at which a storage system can transfer data to or from a particular piece of storage, either a physical disk or a volume.
Generally this is expressed in megabytes per second and it's related to the IO block size and the IOPS but it could have a limit of its own.
If you have a storage system which can store data using 16 kilobyte block sizes and if it can deliver 100 IOPS at that block size, then it can deliver a throughput of 1.6 megabytes per second.
If your application only stores data in four kilobyte chunks and the 100 IOPS is a maximum, then that means you can only achieve 400 kilobytes a second of throughput.
Achieving the maximum throughput relies on you using the right block size for that storage vendor and then maximizing the number of IOPS that you pump into that storage system.
So all of these things are related.
If you want to maximize your throughput, you need to use the right block size and then maximize the IOPS.
And if any of these three is limited, it can impact the other two.
With the example on screen, if you were to change the 16 kilobyte block size to one meg, it might seem logical that you can now achieve 100 megabytes per second.
So one megabyte times 100 IOPS in a second is 100 megabytes a second, but that's not always how it works.
A system might have a throughput cap, for example, or as you increase the block size, the IOPS that you can achieve might decrease.
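Putting the relationship together, here's a small sketch of throughput as block size times IOPS, with an optional system cap. The 50 MB/s cap used below is a hypothetical figure for illustration, and the function uses decimal units (1 MB = 1,000 KB) to match the lesson's worked numbers:

```python
def throughput_mb_s(block_size_kb, iops, cap_mb_s=None):
    """Throughput = block size x IOPS, optionally limited by a system cap."""
    raw = (block_size_kb * iops) / 1000  # KB/s -> MB/s, decimal units
    return min(raw, cap_mb_s) if cap_mb_s is not None else raw

# 16 KB blocks at 100 IOPS gives 1.6 MB/s, as in the lesson's example.
print(throughput_mb_s(16, 100))    # 1.6
# 4 KB blocks at the same 100 IOPS ceiling gives only 400 KB/s.
print(throughput_mb_s(4, 100))     # 0.4
# A 1 MB (1,000 KB) block size at 100 IOPS suggests 100 MB/s, but a
# hypothetical 50 MB/s throughput cap limits what you actually achieve.
print(throughput_mb_s(1000, 100, cap_mb_s=50))  # 50
```

The last call is the trap the lesson warns about: scaling up block size doesn't scale throughput forever, because a cap (or falling IOPS) becomes the limiting factor.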
As we talk about the different AWS types of storage, you'll become much more familiar with all of these different values and how they relate to each other.
So you'll start to understand the maximum IOPS and the maximum throughput levels that different types of storage in AWS can deliver.
And you might face exam questions where you need to answer what type of storage you will pick for a given level of performance demands.
So it's really important as we go through the next few lessons that you pay attention to these key levels that I'll highlight.
It might be, for example, that a certain type of storage can only achieve 1000 IOPS or 64000 IOPS.
Or it might be that certain types of storage cap at certain levels of throughput.
And you need to know those values for the exam so that you can know when to use a certain type of storage.
Now, this is a lot of theory and I'm talking in the abstract and I'm mindful that I don't want to make this boring and it probably won't sink in and you won't start to understand it until we focus on some AWS specifics.
So I am going to end this lesson here.
I wanted to give you the foundational understanding, but over the next few lessons, you'll start to be exposed to the different types of storage available in AWS.
And you will start to paint a picture of when to pick particular types of storage versus others.
So with that being said, that's everything I wanted to cover.
I know this has been abstract, but it will be useful if you do the rest of the lessons in this section.
I promise you this is going to be really valuable for the exam.
So thanks for watching.
Go ahead and complete the video.
When you're ready, you can join me in the next.
learn.cantrill.io
Welcome back and in this brief demo lesson I want to give you some experience of working with both EC2 instance connect as well as connecting with a local SSH client.
Now these are both methods which are used for connecting to EC2 instances both with public IP version 4 addressing and IP version 6 addressing.
Now to get started we're going to need some infrastructure so make sure that you're logged in as the IAM admin user into the general AWS account which is the management account of the organization and as always you'll need the northern Virginia region selected.
Now in this demonstration you are going to be connecting to an EC2 instance using both instance connect and a local SSH client and to use a local SSH client you need a key pair.
So to create that let's move across to the EC2 console, scroll down on the left and select key pairs.
Now you might already have key pairs created from earlier in the course.
If you have one created which is called A4L which stands for Animals for Life then that's fine.
If you don't we're going to go ahead and create that one.
So click on create key pair and then under name we're going to use A4L.
Now if you're using Windows 10 or Mac OS or Linux then you can select the PEM file format.
If you're using Windows 8 or prior then you might need to use the putty application and to do that you need to select PPK.
But for this demonstration I'm going to assume that you're using the PEM format.
So again this is valid on Linux, Mac OS or any recent versions of Microsoft Windows.
So select PEM and then click on create key pair and when you do it's going to present you with a download.
It's going to want you to save this key pair to your local machine so go ahead and do that.
Once you've done that from the AWS console attached to this lesson is a one-click deployment link.
So I want you to go ahead and click that link.
That's going to move you to a quick create stack screen.
Everything should be pre-populated.
The stack name should be EC2 instance connect versus SSH.
The key name box should already be pre-populated with A4L which is a key that you just created or one which you already had.
Just move down to the very bottom, check the capabilities box and then click on create stack.
Now you're going to need this to be in a create complete state before you continue with the demo lesson.
So pause the video, wait for your stack to change to create complete and then you're good to continue.
Okay so this stacks now in a create complete status and we're good to continue.
Now if we click on the resources tab you'll see that this has created the standard animals for life VPC and then it's also created a public EC2 instance.
So this is an EC2 instance with a public IP version 4 address that we can use to connect to.
So that's what we're going to do.
So click on services and then select EC2 to move to the EC2 console.
Once you're there click on instances running and you should have a single EC2 instance A4L-publicEC2.
Now the two different ways which I want to demonstrate connecting to this instance in this demo lesson are using a local SSH client and key based authentication and then using the EC2 instance connect method.
And I want to show you how those differ and give you a few hints and tips which might come in useful for production usage and for the exams.
So if we just go ahead and select this instance and then click on the security tab you'll see that we have this single security group which is associated to this instance.
Now make sure the inbound rules is expanded and just have a look at what network traffic is allowed by this security group.
So the first line allows port 80 TCP which is HTTP and it allows that to connect to the instance from any source IP address specifically IP version 4.
We can tell it's IP version 4 because it's 0.0.0.0/0 which represents any IP version 4 address.
Next we allow port 22 using TCP and again using the IP version 4 any IP match and this is the entry which allows SSH to connect into this instance using IP version 4.
And then lastly we have a corresponding line which allows SSH using IP version 6.
So we're allowing any IP address to connect using SSH to this EC2 instance.
And so connecting to it using SSH is relatively simple.
We can right click on this instance and select connect and then choose SSH client and AWS provides us with all of the relevant information.
Now note how under step number three we have this line which is chmod space 400 space a4l.pem.
I want to demonstrate what happens if we attempt to connect without changing the permissions on this key file.
So to do that right at the bottom is an example command to connect to this instance.
So just copy that into your clipboard.
Then I want you to move to your command prompt or terminal.
In my case I'm running macOS so I'm using a terminal application.
Then you'll need to move to the folder where you have the PEM file stored or where you just downloaded it in one of the previous steps.
I'm going to paste in that command which I just copied onto my clipboard.
This is going to use the a4l.pem file as the identity information and then it's going to connect to the instance using the ec2-user local Linux user.
And this is the host name that it's going to connect to.
So this is my EC2 instance.
Now I'm going to press enter and attempt that connection.
First it will ask me to verify the authenticity of this server.
So this is an added security method.
This is getting the fingerprint of this EC2 instance.
And it means that if we independently have a copy of this fingerprint, say from the administrator of the server that we're connecting to, then we can verify that we're connecting to that same server.
Because it's possible that somebody could exploit DNS and replace a legitimate DNS name with one which points at a non-legitimate server.
So that's important.
You can't always rely on a DNS name.
DNS names can be adjusted to point at different IP addresses.
So this fingerprint is a method that you can use to verify that you're actually connecting to the machine or the instance which you think you are.
Now in this case, because we've just created this EC2 instance, we can be relatively certain that it is valid.
So we're just going to go ahead and type yes and press enter.
And then it will try to connect to this instance.
Now immediately in my case, I got an error.
And this error is going to be similar if you're using macOS or Linux.
If you're using Windows, then there is a chance that you will get this error or won't.
And if you do get it, it might look slightly different.
But look for the keyword of permissions.
If you see that you have a permissions problem with your key, then that's the same error as I'm showing on my screen now.
Basically what this means is that the SSH client likes it when the permissions on these keys are restricted, restricted to only the user that they belong to.
Now in my case, the permissions on this file are 644.
And this represents my user, my group, and then everybody.
So this means this key is accessible to other users on my local system.
And that's far too open to be safe when using local SSH.
Now in Windows, you might have a similar situation where other users of your local machine have read permissions on this file.
What this error is telling us to do is to correct those permissions.
So if we go back to the AWS console, this is the command that we need to run to correct those permissions.
So copy that into your clipboard, move back to your terminal, paste that in, and press enter.
And that will correct those permissions.
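If you want to see exactly what that chmod does, here's a small sketch using Python's standard library on a throwaway temp file (standing in for the real a4l.pem, which you shouldn't experiment on):

```python
# Demonstrating the effect of `chmod 400`, using a throwaway temp file
# in place of the real a4l.pem key.
import os
import stat
import tempfile

fd, key_path = tempfile.mkstemp(suffix=".pem")
os.close(fd)

# 0o400 = read-only for the owning user, no access at all for group or
# others, which is what the SSH client requires for private key files.
os.chmod(key_path, 0o400)

mode = stat.S_IMODE(os.stat(key_path).st_mode)
print(oct(mode))  # 0o400

os.remove(key_path)  # clean up the throwaway file
```

The 644 permissions from the error earlier (owner read/write, group read, others read) fail SSH's check precisely because of those group and other read bits; 400 clears them.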
Now under Windows, the process is that you need to edit the permissions of that file.
So right click properties and then edit the security.
And you need to remove any user access to that file other than your local user.
And that's the same process that we've just done here, only in Windows it's GUI based.
And under Mac OS or Linux, you use CHmod.
So now that we've adjusted those permissions, if I use the up arrow to go back to the previous command and press enter, I'm able to connect to the EC2 instance.
And that's using the SSH client.
To use the SSH client, you need to have network connectivity to the EC2 instance.
And you need to have a valid SSH key pair.
So you need the key stored on your local machine.
Now this can present scalability issues because if you need to have a large team having access to this instance, then everybody in that team needs a copy of this key.
And so that does present admin problems if you're doing it at scale.
Now in addition to this, because you're connecting using an SSH client from your local machine, you need to make sure that the security group of this instance allows connections from your local machines.
So in this case, it allows connections from any source IP address into this instance.
And so that's valid for my IP address.
You need to make sure that the security group on whichever instance you're attempting to connect to allows your IP address as a minimum.
Now another method that you can use to connect to EC2 is EC2 instance connect.
Now to use that, we right click, we select connect, and we have a number of options at the top.
One of these is the SSH client that we've just used.
Another one is EC2 instance connect.
So if we select this option, we're able to connect to this instance.
It shows us the instance ID, it shows us the public IP address, and it shows us the user to connect into this instance with.
Now AWS attempts to automatically determine the correct user to use.
So when you launch an instance using one of the default AMIs, then it tends to pick correctly.
However, if you generate your own custom AMI, it often doesn't guess correctly.
And so you need to make sure that you're using the correct username when connecting using this method.
But once you've got the correct username, you can just go ahead and click on connect, and then it will open a connection to that instance using your web browser.
It'll take a few moments to connect, but once it has connected, you'll be placed at the terminal of this EC2 instance in exactly the same way as you were when using your local SSH.
Now one difference you might have noticed is that at no point were you prompted to provide a key.
When you're using EC2 instance connect, you're using AWS permissions to connect into this instance.
So because we're logged in using an admin user, we have those permissions, but you do need relevant permissions added to the identity of whoever is using instance connect to be able to connect into the instance.
So this is managed using identity policies on the user, the group or the role, which is attempting to access this instance.
Now one important element of this, which I want to demonstrate, if we go back to instances and we select the instance, click on security, and then click on the security group, which is associated with this instance.
Scroll down, click on edit inbound rules, and then I want you to locate the inbound rule for IP version 4 SSH, SSH TCP 22, and then it's using this catchall, so 0.0.0.0/0, which represents any IP version 4 address.
So go ahead and click on the cross to remove that, and then on that same line in the source area, click on this drop down and change it to my IP.
So this is my IP address, yours will be different, but then we're going to go ahead and save that rule.
Now just close down the tab that you've got connected to instance connect, move back to the terminal, and type exit to disconnect from that instance, and then just rerun the previous command.
So connect back to that instance using your local SSH client.
You'll find that it does reconnect because logically enough, this connection is coming from your local IP address, and you've changed the security group to allow connections from that address, so it makes sense that this connection still works.
Moving back to the console though, let's go to the EC2 dashboard, go to running instances, right click on this instance, go to connect, select EC2 instance connect, and then click on connect and just observe what happens.
Now you might have spent a few minutes waiting for this to connect, and you'll note that it doesn't connect.
Now this might seem strange at this point because you're connecting from a web browser, which is running on your local machine.
So it makes sense that if you can connect from your local SSH client, which is also running on your local machine, you should be able to connect using EC2 instance connect.
Now this might seem logical, but the crucial thing about EC2 instance connect is that it's not actually originating connections from your local machine.
What's happening is that you're making a connection through to AWS, and then once your connection arrives at AWS, the EC2 instance connect service is then connecting to the EC2 instance.
Now what you've just done is you've edited the security group of this instance to only allow your local IP address to connect, and this means that the EC2 instance connect service can no longer connect to this instance.
So what you need in order to allow the EC2 instance connect service to work is to allow the right source range: you could allow every source IP address, so 0.0.0.0/0, but of course that's bad practice for production usage.
It's much more secure if you go to this URL, and I'll make sure that I include this attached to this lesson.
This is a list of all of the different IP ranges which AWS use for their services.
Now because I have this open in Firefox, it might look a little bit different.
If I just go to raw data, that might look the same as your browser.
If you're using Firefox, you have the ability to open this as a JSON document.
Both of them show the same data, but when it's JSON, you have the ability to collapse these individual components.
But the main point about this document is that this contains a list of all of the different IP addresses which are used in each different region for each different service.
So if we wanted to allow EC2 instance connect for a particular region, then we might search for instance, locate any of these items which have EC2 instance connect as the service, and then just move through them looking for the one which matches the region that we're using.
Now in my case, I'm using US East One, so I'd scroll through all of these IP address ranges looking for US East One.
There we go, I've located it.
It's using this IP address range.
So I might copy this into my clipboard, move back to the EC2 console, select the instance, click on security, select the security group of this instance, scroll down, edit the inbound rules, remove the entry for my IP address, paste in the entry for the EC2 instance connect service, and then save that rule.
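Scrolling through that file by hand works, but it's also easy to filter programmatically. The sketch below uses a tiny embedded sample with the same structure as the published file; the CIDR values here are made up for illustration, and in practice you'd fetch the real document from https://ip-ranges.amazonaws.com/ip-ranges.json:

```python
import json

# A small sample with the same structure as AWS's published
# ip-ranges.json. The CIDR values are made up for illustration; the
# real file is at https://ip-ranges.amazonaws.com/ip-ranges.json
sample = json.loads("""
{
  "prefixes": [
    {"ip_prefix": "198.51.100.0/24", "region": "us-east-1",
     "service": "EC2_INSTANCE_CONNECT"},
    {"ip_prefix": "203.0.113.0/24", "region": "eu-west-1",
     "service": "EC2_INSTANCE_CONNECT"},
    {"ip_prefix": "192.0.2.0/24", "region": "us-east-1",
     "service": "S3"}
  ]
}
""")

# Keep only the EC2 Instance Connect ranges for the region we're using.
ranges = [p["ip_prefix"] for p in sample["prefixes"]
          if p["service"] == "EC2_INSTANCE_CONNECT"
          and p["region"] == "us-east-1"]
print(ranges)
```

The one matching range in the sample is what you'd paste into the security group rule, exactly as done in the console above.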
And now what you'll find if you move back to your terminal and try to interact with this instance, you might be able to initially because the connection is still established, but if you exit and then attempt to reconnect, this time you'll see that you won't be able to connect because now your local IP address is no longer allowed to connect to this instance.
However, if you move back to the AWS console, go to the dashboard and then instances running, right click on the instance, pick connect, select instance connect and then click on connect.
Now you'll be allowed to connect using EC2 instance connect.
And the reason for that just to reiterate is that you've just edited the security group of this EC2 instance and you've allowed the IP address range of the EC2 instance connect service.
So now you can connect to this instance and you could do so at scale using AWS permissions.
So I just wanted to demonstrate how both of those connection methods work, both instance connect and using a local SSH client.
That's everything I wanted to cover.
So just go ahead and move back to the CloudFormation console, select this stack that you created using the one click deployment, click on delete and then confirm that process.
And that will clear up all of the infrastructure that you've used in this demo lesson.
At this point though, that's everything I wanted to cover.
So go ahead, complete this video and when you're ready, I'll look forward to you joining me in the next.
-
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
Now this is an overview of all of the different categories of instances, and then for each category, the most popular or current generation types that are available.
Now I created this with the hope that it will help you retain this information.
So this is the type of thing that I would generally print out or keep an electronic copy of and refer to constantly as we go through the course.
By doing so, whenever we talk about a particular size, type and generation of instance, if you refer to the details and notes column, you'll be able to start making a mental association between the type and then what additional features you get.
So for example, if we look at the general purpose category, we've got three main entries in that category.
We've got the A1 and M6G types, and these are a specific type of instance that are based on ARM processors.
So the A1 uses the AWS designed Graviton ARM processor, and the M6G uses the generation 2, so Graviton 2 ARM based processor.
And using ARM based processors, as long as you've got operating systems and applications that can run under the architecture, they can be very efficient.
So you can use smaller instances with lower cost and achieve really great levels of performance.
The T3 and T3A instance types, they're burstable instances.
So the assumption with those type of instances is that your normal CPU load will be fairly low, and you have an allocation of burst credits that allows you to burst up to higher levels occasionally, but then return to that normally low CPU level.
So this type of instance, T3 and T3A, are really good for machines which have low normal loads with occasional bursts, and they're a lot cheaper than the other type of general purpose instances.
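The burst credit behaviour just described can be sketched with a toy model. All numbers here are illustrative, not real AWS credit rates, and the `simulate_credits` helper is hypothetical: the instance earns a fixed number of credits per hour and spends them on CPU use, so low normal load builds a balance that an occasional burst hour draws down.

```python
def simulate_credits(hourly_cpu_pct, earn_per_hour, start=0.0):
    """Toy model of T-type CPU credits (illustrative rates, not
    real AWS numbers). One credit = one vCPU-minute at 100% CPU.
    Each hour the instance earns `earn_per_hour` credits and
    spends credits for its actual CPU use; the balance can't go
    negative, and at zero the instance is throttled to baseline."""
    balance = start
    history = []
    for pct in hourly_cpu_pct:
        spend = pct / 100 * 60            # vCPU-minutes used this hour
        balance = max(0.0, balance + earn_per_hour - spend)
        history.append(round(balance, 1))
    return history

# Low normal load (5%) with one burst hour (60%), earning 12
# credits/hour (an illustrative rate):
print(simulate_credits([5, 5, 5, 60, 5, 5], earn_per_hour=12))
```

Notice the balance climbs during quiet hours, drops sharply in the burst hour, and then recovers, which is exactly the usage pattern T-type instances suit.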
Then we've got M5, M5A and M5N.
So M5 is your starting point, M5A uses the AMD architecture, whereas normal M5s just use Intel, and these are your steady state general instances.
So if you don't have a burst requirement, if you're running a certain type of application server, which requires consistent steady state CPU, then you might use the M5 type.
So maybe a heavily used exchange email server that runs normally at 60% CPU utilization, that might be a good candidate for M5.
But if you've got a domain controller or an email relay server that normally runs maybe at 2%, 3% with occasional burst, up to 20% or 30% or 40%, then you might want to run a T type instance.
Now, not to go through all of these in detail, we've got the compute optimized category with the C5 and C5N, and they're good for media encoding, scientific modeling, gaming servers and general machine learning.
For memory optimized, we start off with R5 and R5A.
If you want to use really large in-memory applications, you've got the X1 and the X1E.
If you want the highest memory of all, you've got the high memory series, the U instances.
You've got the Z1D, which comes with large memory and NVMe storage.
Then accelerated computing, these are the ones that come with these additional capabilities.
So the P3 type and G4 type, those come with different types of GPUs.
So the P type is great for parallel processing and machine learning.
The G type is kind of okay for machine learning and much better for graphics intensive requirements.
You've got the F1 type, which comes with field programmable gate arrays, which is great for genomics, financial analysis and big data, anything where you want to program the hardware to do specific tasks.
You've got the Inf1 type, which is relatively new, custom designed for machine learning, so recommendation, forecasting, analysis, voice conversation, anything machine learning related, look at using that type, and then storage optimized instances.
So these come with high speed, local storage, and depending on the type you pick, you can get high throughput or maximum IO or somewhere in between.
So keep this somewhere safe, printed out, keep it electronically, and as we go through the course and use the different type of instances, refer to this and start making the mental association between what a category is, what instance types are in that category, and then what benefits they provide.
Now again, don't worry about memorizing all of this in the exam, you don't need it, I'll draw out anything specific that you need as we go through the course, but just try to get a feel for which letters are in which categories.
If that's the minimum that you can do, if I can give you a letter like the T type, or the C type, or the R type, if you can try and understand the mental association which category that goes into, that will be a great step.
And there are ways we can do this, we can make these associations, so C stands for compute, R stands for RAM, which is a way for describing memory, we've got I which stands for IO, D which stands for dense storage, G which stands for GPU, P which stands for parallel processing, there's lots of different mind tricks and mental association that we can do, and as we go through the course, I'll try and help you with that, but as a minimum, either print this out or store it somewhere safe, and refer to it as we go through the course.
The key thing to understand though is how picking an instance type is specific to a particular type of computing scenario.
So if you've got an application that requires maximum CPU, look at compute optimized; if you need memory, look at memory optimized; if you've got a specific type of acceleration, look at accelerated computing; otherwise start off in the general purpose instance types, and then move out from there as you have a particular requirement.
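That decision process can be sketched as a simple lookup. The `suggest_category` helper and its requirement keywords are illustrative, not any AWS API; the point is only that general purpose is the default and you move away from it for a specific dominant requirement.

```python
def suggest_category(needs: set) -> str:
    """Map a workload's dominant requirement to an EC2 instance
    category, following the decision process described above.
    The requirement keywords are illustrative, not an AWS API."""
    if needs & {"gpu", "fpga", "ml-inference"}:
        return "accelerated computing"
    if "max-cpu" in needs:
        return "compute optimized"
    if "large-in-memory" in needs:
        return "memory optimized"
    if needs & {"high-sequential-io", "high-random-io"}:
        return "storage optimized"
    return "general purpose"   # the default starting point

print(suggest_category({"max-cpu"}))
print(suggest_category(set()))
```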
Now before we finish up, I did want to demonstrate two really useful sites that I refer to constantly, I'll include links to both of these in a lesson text.
The first one is the Amazon documentation site for Amazon EC2 instance types, this gives you a follow-up view of all the different categories of EC2 instances.
You can look in a category, a particular family and generation of instance, so T3, and then in there you can see the use cases that this is suited to, any particular features, and then a list of each instance size and exactly what allocation of resources that you get and then any particular notes that you need to be aware of.
So this is definitely something you should refer to constantly, especially if you're selecting instances to use for production usage.
This other website is something similar, it's ec2instances.info, and it provides a really great sortable list which can be filtered and adjusted with different attributes and columns, which gives you an overview of exactly what each instance provides.
So you can either search for a particular type of instance, maybe a T3, and then see all the different sizes and capabilities of T3, as well as that you can see the different costings for those instance types, so Linux on demand, Linux reserved, Windows on demand, Windows reserved, and we'll talk about what this reserved column is later in the course.
You can also click on columns and show different data for these different instance types, so if I scroll down you can see which offer EBS optimization, you can see which operating systems these different instances are compatible with, you've got a lot of options to manipulate this data.
I find this to be one of the most useful third-party sites, I always refer back to this when I'm doing any consultancy, so this is a really great site.
And again, it will go into the lesson text, so definitely as you're going through the course, experiment and have a play around with this data, and just start to get familiar with the different capabilities of the different types of EC2 instances.
With that being said, that's everything I wanted to cover in this lesson. You've done really well, and there's been a lot of theory, but it will come in handy in the exam and in real-world usage.
So go ahead, complete this video, and when you're ready, you can join me in the next.
-
Welcome back.
In this lesson, I'm going to talk about the various different types of EC2 instances.
I've described an EC2 instance before as an operating system plus an allocation of resources.
Well, by selecting an instance type and size, you have granular control over what that resource configuration is, and picking appropriate resource amounts and instance capabilities can mean the difference between a well-performing system and one which causes a bad customer experience.
Don't expect this lesson though to give you all the answers.
Understanding instance types is something which will guide your decision-making process.
Given a situation, two AWS people might select two different instance types for the same implementation.
The key takeaway from this lesson is that you don't make any bad decisions and that you have an awareness of the strengths and weaknesses of the different types of instances.
Now, I've seen this occasionally feature on the exam in a form where you're presented with a performance problem and one answer is to change the instance type.
So, at minimum with this lesson, I'd like you to be able to answer that type of question.
So, know for example whether a C type instance is better in a certain situation than an M type instance.
That's what I want to achieve, and we've got a lot to get through, so let's get started.
At a really high level, when you choose an EC2 instance type, you're doing so to influence a few different things.
First, logically, the raw amount of resources that you get.
So, that's virtual CPU, memory, local storage capacity and the type of that storage.
But beyond the raw amount, it's also the ratios.
Some type of instances give you more of one and less of the other.
Instance types suited to compute applications, for instance, might give you more CPU and less memory for a given dollar spend.
An instance designed for in-memory caching might be the reverse.
They prioritize memory and give you lots of that for every dollar that you spend.
Picking instance types and sizes, of course, influences the raw amount that you pay per minute.
So, you need to keep that in mind.
I'm going to demonstrate a number of tools that will help you visualize how much something's going to cost, as well as what features you get with it.
So, look at that at the end of the lesson.
The instance type also influences the amount of network bandwidth for storage and data networking that you get.
So, this is really important.
When we move on to talking about elastic block store, for example, that's a network-based storage product in AWS.
And so, for certain situations, you might provision volumes with a really high level of performance.
But if you don't select an instance appropriately and pick something that doesn't provide enough storage network bandwidth, then the instance itself will be the limiting factor.
So, you need to make sure you're aware of the different types of performance that you'll get from the different instances.
Picking an instance type also influences the architecture of the hardware that the instance has run on and potentially the vendor.
So, you might be looking at the difference between an ARM architecture or an X86 architecture.
You might be picking an instance type that provides Intel-based CPUs or AMD CPUs.
Instance type selection can influence in a very nuanced and granular way exactly what hardware you get access to.
Picking an appropriate type of instance also influences any additional features and capabilities that you get with that instance.
And this might be things such as GPUs for graphics processing or FPGAs, which are field-programmable gate arrays.
Think of these as a special type of chip which you can program at the hardware level to perform exactly how you want.
So, it's a super customizable piece of compute hardware.
And so, certain types of instances come with these additional capabilities.
So, it might come with an allocation of GPUs or it might come with a certain capacity of FPGAs.
And some instance types don't come with either.
You need to learn which to pick for a given type of workload.
EC2 instances are grouped into five main categories, which help you select an instance type based on a certain type of workload.
The first is general purpose.
And this is and always should be your starting point.
Instances which fall into this category are designed for your default steady-state workloads.
They've got fairly even resource ratios, so generally assigned in an appropriate way.
So, for a given type of workload, you get an appropriate amount of CPU and a certain amount of memory which matches that amount of CPU.
So, instances in the general purpose category should be used as your default and you only move away from that if you've got a specific workload requirement.
We've also got the compute optimized category and instances that are in this category are designed for media processing, high-performance computing, scientific modeling, gaming, machine learning.
And they provide access to the latest high-performance CPUs.
And they generally offer a ratio where more CPU than memory is offered for a given price point.
The memory optimized category is logically the inverse of this, so offering large memory allocations for a given dollar or CPU amount.
This category is ideal for applications which need to work with large in-memory data sets, maybe in-memory caching or some other specific types of database workloads.
The accelerated computing category is where these additional capabilities come into play, such as dedicated GPUs for high-scale parallel processing and modeling, or the custom programmable hardware, such as FPGAs.
Now, these are niche, but if you're in one of the situations where you need them, then you know you need them.
So, when you've got specific niche requirements, the instance type you need to select is often in the accelerated computing category.
Finally, there's the storage optimized category and instances in this category generally provide large amounts of superfast local storage, either designed for high sequential transfer rates or to provide massive amounts of IO operations per second.
And this category is great for applications with serious demands on sequential and random IO, so things like data warehousing, elastic search, and certain types of analytic workloads.
Now, one of the most confusing things about EC2 is the naming scheme of the instance types.
This is an example of a type of EC2 instance.
While it might initially look frustrating, once you understand it, it's not that difficult to understand.
So, while our friend Bob is a bit frustrated at the difficulty of understanding exactly what this means, by the end of this part of the lesson, you will understand how to decode EC2 instance types.
The whole thing, end to end, so r5dn.8xlarge, is known as the instance type.
The whole thing is the instance type.
If a member of your operations team asks you what instance you need or what instance type you need, if you use the full instance type, you unambiguously communicate exactly what you need.
It's a mouthful to say r5dn.8xlarge, but it's precise and we like precision.
So, when in doubt, always give the full instance type as an answer to any question.
The letter at the start is the instance family.
Now, there are lots of examples of this, the T family, the M family, the I family, and the R family.
There's lots more, but each of these are designed for a specific type or types of computing.
Nobody expects you to remember all the details of all of these different families, but if you can start to try to remember the important ones, I'll mention these as we go through the course, then it will put you in a great position in the exam.
If you do have any questions where you need to identify if an instance type is used appropriately or not, as we go through the course and I give demonstrations which might be using different instance families, I will be giving you an overview of their strengths and their weaknesses.
The next part is the generation.
So, the number five in this case is the generation.
AWS iterate often.
So, if you see instance type starting with R5 or C4 as two examples, the C or the R, as you now know, is the instance family and the number is the generation.
So, the C4, for example, is the fourth generation of the C family of instance.
That might be the current generation, but then AWS come along and replace it with the C5, which is generation five, the fifth generation, which might bring with it better hardware and better price to performance.
Generally, with AWS, always select the most recent generation.
It almost always provides the best price to performance option.
The only real reason is not to immediately use the latest generation, as if it's not available in your particular region or if your business has fairly rigorous test processes that need to be completed before you get the approval to use a particular new type of instance.
So, that's the R part covered, which is the family, and the 5 part covered, which is the generation.
Now, across to the other side, we've got the size.
So, in this case, 8xlarge, this is the instance size.
Within a family and a generation, there are always multiple sizes of that family and generation, which determine how much memory and how much CPU the instance is allocated with.
Now, there's a logical and often linear relationship between these sizes.
So, depending on the family and generation, the starting point can be anywhere as small as the nano.
Next, after the nano, there's micro, then small, then medium, large, extra large, 2xlarge, 4xlarge, 8xlarge, and so on.
Now, keep in mind, there's often a price premium towards the higher end.
So, it's often better to scale systems by using a larger number of smaller instance sizes.
But more on that later when we talk about high availability and scaling.
Just be aware, as far as this section of the course goes, that for a given instance family and generation, you're able to select from multiple different sizes.
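As a rough mental model, sizes within a family scale in powers of two. The sketch below treats large as the unit and each step as a doubling, which is a simplification: real families vary, and not every size exists in every family.

```python
# Size suffixes from smallest to largest, as described above.
SIZE_ORDER = ["nano", "micro", "small", "medium", "large",
              "xlarge", "2xlarge", "4xlarge", "8xlarge"]

def relative_capacity(size: str) -> float:
    """Approximate relative resource allocation within one family
    and generation, treating 'large' as 1.0 and each step up or
    down the size order as a doubling or halving (a simplification)."""
    return 2.0 ** (SIZE_ORDER.index(size) - SIZE_ORDER.index("large"))

print(relative_capacity("8xlarge"))   # roughly 16x a large
print(relative_capacity("medium"))    # roughly half a large
```

This is also why scaling with many smaller instances works: sixteen large instances give roughly the raw capacity of one 8xlarge, but with finer-grained scaling steps and no single point of failure.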
Now, the bit which is in the middle, this can vary.
There might be no letters between the generation and size, but there's often a collection of letters which denote additional capabilities.
Common examples include a lowercase a, which signifies an AMD CPU; a lowercase d, which signifies NVMe storage; a lowercase n, which signifies network optimized; and a lowercase e, for extra capacity, which could be RAM or storage.
So, these additional capabilities are not things that you need to memorize, but as you get experience using AWS, you should definitely try to mentally associate them in your mind with what extra capabilities they provide.
Because time is limited in an exam, the more that you can commit to memory and know instinctively, the better off you'll be.
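The naming scheme just described can be decoded mechanically. This is only a sketch: the `parse_instance_type` helper is hypothetical, and real type names have extra corner cases (for example metal sizes).

```python
import re

def parse_instance_type(instance_type: str) -> dict:
    """Decode an EC2 instance type string into the parts described
    above: family letter(s), generation number, extra capability
    letters, and size. A sketch only; real names have corner cases."""
    m = re.fullmatch(r"([a-z]+)(\d+)([a-z-]*)\.([0-9a-z]+)",
                     instance_type.lower())
    if not m:
        raise ValueError(f"unrecognised instance type: {instance_type}")
    family, generation, capabilities, size = m.groups()
    return {
        "family": family,             # e.g. 'r' = memory optimized
        "generation": int(generation),
        "capabilities": capabilities, # e.g. 'dn' = NVMe + network optimized
        "size": size,
    }

print(parse_instance_type("r5dn.8xlarge"))
print(parse_instance_type("t3.micro"))
```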
Okay, so this is the end of part one of this lesson.
It was getting a little bit on the long side, and so I wanted to add a break.
It's an opportunity just to take a rest or grab a coffee.
Part two will be continuing immediately from the end of part one.
So, go ahead, complete the video, and when you're ready, join me in part two.
-
Welcome back.
In this lesson, now that we've covered virtualization at a high level, I want to focus on the architecture of the EC2 product in more detail.
EC2 is one of the services you'll use most often in AWS, and it's one which features on a lot of exam questions.
So let's get started.
First thing, let's cover some key, high level architectural points about EC2.
EC2 instances are virtual machines, so this means an operating system plus an allocation of resources such as virtual CPU, memory, potentially some local storage, maybe some network storage, and access to other hardware such as networking and graphics processing units.
EC2 instances run on EC2 hosts, and these are physical server hardware which AWS manages.
These hosts are either shared hosts or dedicated hosts.
Shared hosts are hosts which are shared across different AWS customers, so you don't get any ownership of the hardware and you pay for the individual instances based on how long you run them for and what resources they have allocated.
It's important to understand, though, that every customer using shared hosts is isolated from every other customer, so there's no visibility of it being shared.
There's no interaction between different customers, even if you're using the same shared host.
And shared hosts are the default.
With dedicated hosts, you're paying for the entire host, not the instances which run on it.
It's yours.
It's dedicated to your account, and you don't have to share it with any other customers.
So if you pay for a dedicated host, you pay for that entire host, you don't pay for any instances running on it, and you don't share it with other AWS customers.
EC2 is an availability zone resilient service.
The reason for this is that hosts themselves run inside a single availability zone.
So if that availability zone fails, the hosts inside that availability zone could fail, and any instances running on any hosts that fail will themselves fail.
So as a solutions architect, you have to assume if an AZ fails, then at least some and probably all of the instances that are running inside that availability zone will also fail or be heavily impacted.
Now let's look at how this looks visually.
So this is a simplification of the US East One region.
I've only got two AZs represented, AZ A and AZ B, and in these I've represented two subnets, subnet A and subnet B.
Now inside each of these availability zones is an EC2 host.
Now these EC2 hosts, they run within a single AZ.
I'm going to keep repeating that because it's critical for the exam and you're thinking about EC2 in the exam.
Keep thinking about it being an AZ resilient service.
If you see EC2 mentioned in an exam, see if you can locate the availability zone details because that might factor into the correct answer.
Now EC2 hosts have some local hardware, logically CPU and memory, which you should be aware of, but also they have some local storage called the instance store.
The instance store is temporary.
If an instance is running on a particular host, depending on the type of the instance, it might be able to utilize this instance store.
But if the instance moves off this host to another one, then that storage is lost.
And they also have two types of networking, storage networking and data networking.
When instances are provisioned into a specific subnet within a VPC, what's actually happening is that a primary elastic network interface is provisioned in a subnet, which maps to the physical hardware on the EC2 host.
Remember, subnets are also in one specific availability zone.
Instances can have multiple network interfaces, even in different subnets, as long as they're in the same availability zone.
Everything about EC2 is focused around this architecture, the fact that it runs in one specific availability zone.
Now EC2 can make use of remote storage so an EC2 host can connect to the elastic block store, which is known as EBS.
The elastic block store service also runs inside a specific availability zone.
So the service running inside availability zone A is different than the one running inside availability zone B, and you can't access them cross zone.
EBS lets you allocate volumes, and volumes are portions of persistent storage, and these can be allocated to instances in the same availability zone.
So again, it's another area where the availability zone matters.
What I'm trying to do by keeping repeating availability zone over and over again is to paint a picture of a service which is very reliant on the availability zone that it's running in.
The host is in an availability zone.
The network is per availability zone.
The persistent storage is per availability zone.
If an availability zone in AWS experiences major issues, it impacts all of those things.
Now an instance runs on a specific host, and if you restart the instance, it will stay on that host.
Instances stay on a host until one of two things happen.
Firstly, the host fails or is taken down for maintenance for some reason by AWS.
Or secondly, if an instance is stopped and then started, and that's different than just restarting, so I'm focusing on an instance being stopped and then being started, not just a restart.
If either of those things happen, then an instance will be relocated to another host, but that host will also be in the same availability zone.
Instances cannot natively move between availability zones.
Everything about them, their hardware, networking and storage is locked inside one specific availability zone.
Now there are ways you can do a migration, but it essentially means taking a copy of an instance and creating a brand new one in a different availability zone, and I'll be covering that later in this section where I talk about snapshots and AMIs.
What you can never do is connect network interfaces or EBS storage located in one availability zone to an EC2 instance located in another.
EC2 and EBS are both availability zone services.
They're isolated.
You cannot cross AZs with instances or with EBS volumes.
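That same-AZ constraint is simple enough to capture in code. A tiny sketch with a hypothetical helper and made-up instance IDs, just to make the rule concrete:

```python
def attachable_instances(volume_az: str, instances: dict) -> list[str]:
    """Given an EBS volume's availability zone and a mapping of
    instance id -> AZ, return the instances the volume could be
    attached to: only those in the same AZ, per the rule above."""
    return [iid for iid, az in instances.items() if az == volume_az]

# Hypothetical fleet spread across two AZs:
fleet = {"i-01": "us-east-1a", "i-02": "us-east-1b", "i-03": "us-east-1a"}
print(attachable_instances("us-east-1a", fleet))
```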
Now instances running on an EC2 host share the resources of that host.
And instances of different sizes can share a host, but generally instances of the same type and generation will occupy the same host.
And I'll be talking in much more detail about instance types and sizes and generations in a lesson that's coming up very soon.
But when you think about an EC2 host, think that it's from a certain year and includes a certain class of processor and a certain type of memory and a certain type and configuration of storage.
And instances are also created as different generations, different versions which use specific types of CPU, memory and storage.
So it's logical that if you provision two different types of instances, they may well end up on two different types of hosts.
So a host generally has lots of different instances from different customers of the same type, but different sizes.
So before we finish up this lesson, I want to answer a question.
That question is what's EC2 good for?
So what types of situations might you use EC2 for?
And this is equally valuable when you're evaluating a technical architecture while you're answering questions in the exam.
So first, EC2 is great when you've got a traditional OS and application compute need.
So if you've got an application that requires to be running on a certain operating system at a certain runtime with certain configuration, maybe your internal technical staff are used to that configuration, or maybe your vendor has a certain set of support requirements.
EC2 is a perfect use case for this type of scenario.
And it's also great for any long running compute needs.
There are lots of other services inside AWS that provide compute services, but many of these have got runtime limits.
So you can't leave these things running consistently for one year or two years.
With EC2, it's designed for persistent, long running compute requirements.
So if you have an application that runs constantly 24/7, 365, and needs to be running on a normal operating system, Linux or Windows, then EC2 is the default and obvious choice for this.
If you have any server-style applications, so traditional applications which expect to be running in an operating system waiting for incoming connections, then again, EC2 is a perfect service for this.
And it's perfect for any applications or services that have burst requirements or steady state requirements.
There are different types of EC2 instances, which are suitable for low levels of normal loads with occasional bursts, as well as steady state load.
So again, if your application needs an operating system, and it's got bursty needs or consistent steady state load, then EC2 should be the first thing that you review.
EC2 is also great for monolithic application stacks.
So if your monolithic application requires certain components, a stack, maybe a database, maybe some middleware, maybe other runtime based components, and especially if it needs to be running on a traditional operating system, EC2 should be the first thing that you look at.
And EC2 is also ideally suited for migrating application workloads, so application workloads, which expect a traditional virtual machine or server style environment, or if you're performing disaster recovery.
So if you have existing traditional systems which run on virtual servers, and you want to provision a disaster recovery environment, then EC2 is perfect for that.
In general, EC2 tends to be the default compute service within AWS.
There are lots of niche requirements that you might have.
And if you do have those, there are other compute services such as the elastic container service or Lambda.
But generally, if you've got traditional style workloads, or you're looking for something that's consistent, or if it requires an operating system, or if it's monolithic, or if you migrated into AWS, then EC2 is a great default first option.
Now in this section of the course, I'm covering the basic architectural components of EC2.
So I'm gonna be introducing the basics and let you get some exposure to it, and I'm gonna be teaching you all the things that you'll need for the exam.
-
Welcome back and in this first lesson of the EC2 section of the course, I want to cover the basics of virtualization as briefly as possible.
EC2 provides virtualization as a service.
It's an infrastructure as a service, or IaaS, product.
To understand all the value it provides and why some of the features work the way that they do, understanding the fundamentals of virtualization is essential.
So that's what this lesson aims to do.
Now, I want to be super clear about one thing.
This is an introduction level lesson.
There's a lot more to virtualization than I can talk about in this brief lesson.
This lesson is just enough to get you started, but I will include a lot of links in the lesson description if you want to learn more.
So let's get started.
We do have a fair amount of theory to get through, but I promise when it comes to understanding how EC2 actually works, this lesson will be really beneficial.
Virtualization is the process of running more than one operating system on a piece of physical hardware, a server.
Before virtualization, the architecture looked something like this.
A server had a collection of physical resources, so CPU and memory, network cards and maybe other logical devices such as storage.
And on top of this runs a special piece of software known as an operating system.
That operating system runs with a special level of access to the hardware.
It runs in privilege mode, or more specifically, a small part of the operating system runs in privilege mode, known as the kernel.
The kernel is the only part of the operating system, the only piece of software on the server that's able to directly interact with the hardware.
Some of the operating system doesn't need this privileged level of access, but some of it does.
Now, the operating system can allow other software to run such as applications, but these run in user mode or unprivileged mode.
They cannot directly interact with the hardware, they have to go through the operating system.
So if Bob or Julie are attempting to do something with an application, which needs to use the system hardware, that application needs to go through the operating system.
It needs to make a system call.
If anything but the operating system attempts to make a privileged call, so tries to interact with the hardware directly, the system will detect it and cause a system-wide error, generally crashing the whole system or at minimum the application.
This is how it works without virtualization.
Virtualization is how this is changed into this.
A single piece of hardware running multiple operating systems.
Each operating system is separate, each runs its own applications.
But there's a problem: a CPU, at least at this point in time, could only have one thing running as privileged.
A privileged process, remember, has direct access to the hardware.
And all of these operating systems, if they're running in their unmodified state, expect to be running on their own in a privileged state.
They contain privileged instructions.
And so trying to run three or four or more different operating systems in this way will cause system crashes.
Virtualization was created as a solution to this problem, allowing multiple different privileged applications to run on the same hardware.
But initially, virtualization was really inefficient, because the hardware wasn't aware of it.
Virtualization had to be done in software, and it was done in one of two ways.
The first type was known as emulated virtualization or software virtualization.
With this method, a host operating system still ran on the hardware and included additional capability known as a hypervisor.
The software ran in privileged mode, and so it had full access to the hardware on the host server.
Now, around each of the multiple other operating systems, which we'll now refer to as guest operating systems, was wrapped a container of sorts called a virtual machine.
Each virtual machine was an unmodified operating system, such as Windows or Linux, with a virtual allocation of resources such as CPU, memory and local disk space.
Virtual machines also had devices mapped into them, such as network cards, graphics cards and other local devices such as storage.
The guest operating systems believed these to be real.
They had drivers installed, just like physical devices, but they weren't real hardware.
They were all emulated, fake information provided by the hypervisor to make the guest operating systems believe that they were real.
The crucial thing to understand about emulated virtualization is that the guest operating systems still believed that they were running on real hardware, and so they still attempted to make privileged calls.
They tried to take control of the CPU, they tried to directly read and write to what they think of as their memory and their disk, which are actually not real, they're just areas of physical memory and disk that have been allocated to them by the hypervisor.
Without special arrangements, the system would at best crash, and at worst, all of the guests would be overwriting each other's memory and disk areas.
So the hypervisor performs a process known as binary translation.
Any privileged operations which the guests attempt to make, they're intercepted and translated on the fly in software by the hypervisor.
Now, the binary translation in software is the key part of this.
It means that the guest operating systems need no modification, but it's really, really slow.
It can actually halve the speed of the guest operating systems or even worse.
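As a rough mental model only, the interception step can be sketched in a few lines of Python; the instruction names here are entirely invented, and a real hypervisor operates on machine code, not strings:

```python
# Toy model of binary translation: the hypervisor scans the guest's
# instruction stream and rewrites privileged operations on the fly,
# emulating their effect instead of letting them touch real hardware.
# Instruction names are invented for illustration.
PRIVILEGED = {"WRITE_PHYS_MEM", "SET_CPU_MODE"}

def binary_translate(instruction_stream, guest_id):
    translated = []
    for instr in instruction_stream:
        if instr in PRIVILEGED:
            # Intercepted: emulate against the guest's allocated resources.
            translated.append(f"EMULATE[{guest_id}]:{instr}")
        else:
            # Unprivileged instructions pass through unchanged.
            translated.append(instr)
    return translated

print(binary_translate(["ADD", "WRITE_PHYS_MEM", "NOP"], "guest-1"))
# → ['ADD', 'EMULATE[guest-1]:WRITE_PHYS_MEM', 'NOP']
```

Scanning and rewriting every instruction in software like this is exactly why the performance penalty was so severe.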
Emulated virtualization was a cool set of features for its time, but it never achieved widespread adoption for demanding workloads because of this performance penalty.
But there was another way that virtualization was initially handled, and this is called para-virtualization.
With para-virtualization, the guest operating systems are still running in the same virtual machine containers with virtual resources allocated to them, but instead of the slow binary translation which is done by the hypervisor, another approach is used.
Para-virtualization only works on a small subset of operating systems, operating systems which can be modified.
Because with para-virtualization, there are areas of the guest operating systems which attempt to make privileged calls, and these are modified.
They're modified to make them user calls, but instead of directly calling on the hardware, they're calls to the hypervisor called hypercalls.
So areas of the operating systems which would traditionally make privileged calls directly to the hardware, they're actually modified.
So the source code of the operating system is modified to call the hypervisor rather than the hardware.
So the operating systems now need to be modified specifically for the particular hypervisor that's in use.
It's no longer just generic virtualization, the operating systems are modified for the particular vendor performing this para-virtualization.
By modifying the operating system this way, and using para-virtual drivers in the operating system for network cards and storage, it means that the operating system became almost virtualization aware, and this massively improved performance.
But it was still a set of software processes designed to trick the operating system and/or the hardware into believing that nothing had changed.
The major improvement in virtualization came when the physical hardware started to become virtualization aware.
This allows for hardware virtualization, also known as hardware assisted virtualization.
With hardware assisted virtualization, hardware itself has become virtualization aware.
The CPU contains specific instructions and capabilities so that the hypervisor can directly control and configure this support, so the CPU itself is aware that it's performing virtualization.
Essentially, the CPU knows that virtualization exists.
What this means is that when guest operating systems attempt to run any privileged instructions, they're trapped by the CPU, which knows to expect them from these guest operating systems, so the system as a whole doesn't halt.
But these instructions can't be executed as is because the guest operating system still thinks that it's running directly on the hardware, and so they're redirected to the hypervisor by the hardware.
The hypervisor handles how these are executed.
And this means very little performance degradation over running the operating system directly on the hardware.
The problem, though, is that while this method does help a lot, what actually matters about a virtual machine tends to be the input/output operations, so network transfer and disk I/O.
The virtual machines, they have what they think is physical hardware, for example, a network card.
But these cards are just logical devices using a driver, which actually connect back to a single physical piece of hardware which sits in the host, the hardware everything is running on.
Unless you have a physical network card per virtual machine, there's always going to be some level of software getting in the way, and when you're performing highly transactional activities such as network I/O or disk I/O, this really impacts performance, and it consumes a lot of CPU cycles on the host.
The final iteration that I want to talk about is where the hardware devices themselves become virtualization aware, such as network cards.
This process is called SR-IOV, single root I/O virtualization.
Now, I could talk about this process for hours about exactly what it does and how it works, because it's a very complex and feature-rich set of standards.
But at a very high level, it allows a network card or any other add-on card to present itself not as one single card, but as several mini-cards.
Because this is supported in hardware, these are fully unique cards, as far as the hardware is concerned, and these are directly presented to the guest operating system as real cards dedicated for its use.
And this means no translation has to happen by the hypervisor.
The guest operating system can directly use its card whenever it wants.
Now, the physical card which supports SR-IOV handles this process end-to-end.
It makes sure that when the guest operating systems use their logical mini network cards, they have access to the physical network connection when required.
In EC2, this feature is called enhanced networking, and it means that the network performance is massively improved.
It means faster speeds.
It means lower latency.
And more importantly, it means consistent lower latency, even at high loads.
It means less CPU usage for the host CPU, even when all of the guest operating systems are consuming high amounts of consistent I/O.
Many of the features that you'll see EC2 using are actually based on AWS implementing some of the more advanced virtualization techniques that have been developed across the industry.
AWS do have their own hypervisor stack now called Nitro, and I'll be talking about that in much more detail in an upcoming lesson, because that's what enables a lot of the higher-end EC2 features.
But that's all the theory I wanted to cover.
I just wanted to introduce virtualization at a high level and get you to the point where you understand what SR-IOV is, because SR-IOV is used for enhanced networking right now, but it's also a feature that can be used outside of just network cards.
It can help hardware manufacturers design cards, which, whilst they're a physical single card, can be split up into logical cards that can be presented to guest operating systems.
It essentially makes any hardware virtualization aware, and any of the advanced EC2 features that you'll come across within this course will be taking advantage of SR-IOV.
At this point, though, we've completed all of the theory I wanted to cover, so go ahead, complete this lesson when you're ready, and you can join me in the next.
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
So, focusing specifically on the Animals for Life scenario, and what we're going to do in the upcoming demo lesson: to implement a truly resilient architecture for NAT services in a VPC, you need a NAT gateway in a public subnet inside each availability zone that the VPC uses.
So just like on the diagram that you've gone through now.
And then as a minimum, you need private route tables in each availability zone.
In this example, AZA, AZB, and then AZC.
Each of these would need to have their own route table, which would have a default IP version 4 route pointing at the NAT gateway in the same availability zone.
That way, if any availability zone fails, the others could continue operating without issues.
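The layout described above can be sketched as plain data; the route table, AZ, and gateway names below are hypothetical:

```python
# Hypothetical per-AZ NAT layout: each private route table's default
# IP version 4 route (0.0.0.0/0) targets the NAT gateway in its own AZ,
# so the failure of one AZ doesn't break NAT for the others.
route_tables = {
    "rt-private-a": {"az": "AZA", "0.0.0.0/0": "natgw-a"},
    "rt-private-b": {"az": "AZB", "0.0.0.0/0": "natgw-b"},
    "rt-private-c": {"az": "AZC", "0.0.0.0/0": "natgw-c"},
}

def surviving_route_tables(failed_az):
    """Route tables whose default route still works after an AZ failure."""
    return sorted(name for name, rt in route_tables.items()
                  if rt["az"] != failed_az)

print(surviving_route_tables("AZB"))
# → ['rt-private-a', 'rt-private-c']
```

If all three route tables instead pointed at a single NAT gateway in AZA, an AZA failure would take NAT down for every private subnet at once.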
Now, this is important.
I've seen it suggested in a few exam questions that one NAT gateway is enough, that a NAT gateway is truly regionally resilient.
This is false.
A NAT gateway is highly available in the availability zone that it's in.
So if hardware fails or it needs to scale to cope with load, it can do so in that AZ.
But if the whole AZ fails, there is no failover.
You provision a NAT gateway into a specific availability zone, not the region.
It's not like the internet gateway, which by default is region resilient.
For a NAT gateway, you have to deploy one into each AZ that you use if you need that region resilience.
Now, my apologies in advance for the small text.
It's far easier to have this all on screen at once.
I mentioned at the start of the lesson that NAT used to be provided by NAT instances, and these are just the NAT process running on an EC2 instance.
Now, I don't expect this to feature on the exam at this point.
But if you ever need to use a NAT instance, know that by default, EC2 filters all traffic that it sends or receives.
It essentially drops any data that is on its network card when that network card is not either the source or the destination.
So if an instance is running as a NAT instance, then it will be receiving some data where the source address will be of other resources in that VPC.
And the destination will be a host on the internet.
So it will neither be the source nor the destination.
So by default, that traffic will be dropped.
And if you need to allow an EC2 instance to function as a NAT instance, then you need to disable a feature called source and destination checks.
This can be disabled via the console UI, the CLI, or the API.
The only reason I mention this is I have seen this question in the exam before, and if you do implement this in a real-world production-style scenario, you need to be aware that this feature exists.
I don't want you wasting your time trying to diagnose this feature.
So if you just right-click on an instance in the console, you'll be able to see an option to disable source and destination checks.
And that is required if you want to use an EC2 instance as a NAT instance.
Now, at the highest level, architecturally, NAT instances and NAT gateways are kind of the same.
They both need a public IP address.
They both need to run in a public subnet, and they both need a functional internet gateway.
But at this point, it's not really preferred to use EC2 running as a NAT instance.
It's much easier to use a NAT gateway, and it's recommended by AWS in most situations.
But there are a few key scenarios where you might want to consider using an EC2-based NAT instance.
So let's just step through some of the criteria that you might be looking at when deploying NAT services.
If you value availability, bandwidth, low levels of maintenance, and high performance, then you should use NAT gateways.
That goes for both real-world production usage, as well as being default for answering any exam questions.
A NAT gateway offers high-end performance; it scales, and it's custom designed to perform network address translation.
A NAT instance in comparison is limited by the capabilities of the instance it's running on, and that instance is also general purpose, so it won't offer the same level of custom-designed performance as a NAT gateway.
Now, availability is another important consideration; a NAT instance is a single EC2 instance running inside an availability zone.
It will fail if the EC2 hardware fails.
It will fail if its storage fails or if its network fails, and it will fail if the AZ itself fails entirely.
A NAT gateway has some benefits over a NAT instance.
So inside one availability zone, it's highly available, so it can automatically recover and it can automatically scale.
So it removes almost all of the risks of outage versus a NAT instance.
But it will still fail entirely if the AZ fails entirely.
You still need to provision multiple NAT gateways, spread across all the AZs that you intend to use, if you want to ensure complete availability.
For maximum availability, you need a NAT gateway in every AZ you use.
This is critical to remember for the exam.
Now, if cost is your primary concern, if you're a financially challenged business, or if the VPC that you're deploying NAT services into is just a test VPC or something that's incredibly low volume, then a NAT instance can be cheaper.
It can also be significantly cheaper at high volumes of data.
You've got a couple of options.
You can use a very small EC2 instance, even ones that are free tier eligible to reduce costs, and the instances can also be fixed in size, meaning they offer predictable costs.
A NAT gateway will scale automatically, and you'll be billed for both the NAT gateway and the amount of data transferred, which increases as the gateway scales.
A NAT gateway is also not free tier eligible.
Now, this is really important because when we deploy these in the next demo lesson, it's one of those services that I need to warn you will come at a cost, so you need to be aware of that fact.
You will be charged for a NAT gateway regardless of how small the usage.
NAT instances also offer other niche advantages because they're just EC2 instances.
You can connect to them just like you would any other EC2 instance.
You can multi-purpose them, so you can use them for other things, such as bastion hosts.
You can also use them for port forwarding, so you can have the port on the instance externally that could be connected to over the public internet, and have this forwarded-on for an instance inside the VPC.
Maybe port 80 for web, or port 443 for secure web.
You can be completely flexible when you use NAT instances.
With a NAT gateway, this isn't possible because you don't have access to manage it.
It's a managed service.
Now, this comes up all the time in the exam, so try and get it really clear in your memory: a NAT gateway cannot be used as a bastion host.
It cannot do port forwarding because you cannot connect to its operating system.
Now, finally, and this is again an exam focus.
NAT instances are just EC2 instances, so you can filter traffic using the network ACLs on the subnet the instance is in, or security groups directly associated with that instance.
NAT gateways don't support security groups.
You can only use network ACLs with NAT gateways.
This one comes up all the time in the exam, so it's worth noting down and maybe making a flashcard with.
Now, a few more things before we finish up.
What about IP version 6?
The focus of NAT is to allow private IP version 4 addresses to be used to connect in an outgoing-only way to the AWS public zone and public internet.
Inside AWS, all IP version 6 addresses are publicly routable, so this means that you do not require net when using IP version 6.
The internet gateway works directly with IP version 6 addresses, so if you choose to make an instance in a private subnet, have a default IP version 6 route to the internet gateway, it will become a public instance.
As long as you don't have any NACLs or security groups blocking it, any IP version 6 address in AWS can communicate directly with the AWS public zone and the public internet.
So the internet gateway can work directly with IP version 6.
NAT gateways do not work with IP version 6; they're not required and they don't function with IP version 6.
So for the exam, if you see any questions which mention IP version 6 and NAT gateways, you can exclude the answer.
NAT gateways do not work with IP version 6, and I'll repeat it because I really want it to stick in your memory.
So with any subnet inside AWS which has been configured for IP version 6, if you add the IP version 6 default route, which is ::/0, and you point that route at the internet gateway as a target, that will give that instance bi-directional connectivity to the public internet, and it will allow it to reach the AWS public zone and public services.
One service that we'll be talking about later on in the course when I cover more advanced features of VPC is a different type of gateway, known as an egress-only internet gateway.
This is a specific type of internet gateway that works only with IP version 6 and you use it when you want to give an IP version 6 instance outgoing only access to the public internet and the AWS public zone.
So don't worry, we'll be covering that later in the course, but I want to get it really burned into your memory that you do not use NAT and you do not use NAT gateways with IP version 6.
It will not work.
Now to get you some experience of using NAT gateways, it's time for a demo.
In the demo lesson, I'm going to be stepping you through what you need to do to provision a completely resilient NAT gateway architecture.
So that's using a NAT gateway in each availability zone as well as configuring the routing required to make it work.
It's going to be one of the final pieces to our multi-tier VPC and it will allow private instances to have full outgoing internet access.
Now I can't wait for us to complete this together.
It's going to be a really interesting demo, one that will be really useful if you're doing this in the real world or if you have to answer exam questions related to NAT or NAT gateways.
So go ahead, complete the video and when you're ready, join me in the demo.
Welcome back.
In this lesson, I'll be talking about Network Address Translation, or NAT, a process of giving a private resource outgoing only access to the internet.
And a NAT gateway is the AWS implementation that's available within a VPC.
There's quite a bit of theory to cover, so let's get started.
So what is NAT?
Well, it stands for Network Address Translation.
This is one of those terms which means more than people think that it does.
In a strict sense, it's a set of different processes which can adjust IP packets by changing their source or destination IP addresses.
Now, you've seen a form of this already.
The internet gateway actually performs a type of NAT known as static NAT.
It's how a resource can be allocated with a public IP version 4 address: when the packets of data leave those resources and pass through the internet gateway, it adjusts the source IP address on the packet from the private address to the public one, and then sends the packet on; when the packet returns, it adjusts the destination address from the public IP address to the original private address.
That's called static NAT, and that's how the internet gateway implements public IP version 4 addressing.
Now, what most people think of when they think of NAT is a subset of NAT called IP Masquerading.
And IP Masquerading hides a whole private side IP block behind a single public IP.
So rather than the one private IP to one public IP process that the internet gateway does, NAT is many private IPs to one single IP.
And this technique is popular because IP version 4 addresses are running out.
The public address space is rapidly becoming exhausted.
IP Masquerading, or what we'll refer to for the rest of this lesson as NAT, gives a whole private range of IP addresses outgoing only access to the public internet and the AWS public zone.
I've highlighted outgoing because that's the most important part, because many private IPs use a single public IP.
Incoming access doesn't work.
Private devices that use NAT can initiate outgoing connections to internet or AWS public space services, and those connections can receive response data, but you cannot initiate connections from the public internet to these private IP addresses when NAT is used.
It doesn't work that way.
Now, AWS has two ways that it can provide NAT services.
Historically, you could use an EC2 instance configured to provide NAT, but there's also a managed service, the NAT gateway, which you can provision in a VPC to provide the same functionality.
So let's look at how this works architecturally.
This is a simplified version of the Animals for Life architecture that we've been using so far.
On the left is an application tier subnet in blue, and it's using the IP range 10.16.32.0/20.
So this is a private only subnet.
Inside it are three instances, I01, which is using the IP 10.16.32.10, I02, which is using 32.20, and I03, which is using 32.30.
These IP addresses are private, so they're not publicly routable.
They cannot communicate with the public internet or the AWS public zone services.
These addresses cannot be routed across a public style network.
Now, if we wanted this to be allowed, if we wanted these instances to perform certain activities using public networking, for example, software updates, how would we do it?
Well, we could make the subnet's public in the same way that we've done with the public subnets or the web subnets, but we might not want to do that architecturally.
With this multi-tier architecture that we're implementing together, part of the design logic is to have tiers which aren't public and aren't accessible from the public internet.
Now, we could also host some kind of software update server inside the VPC, and some businesses choose to do that.
Some businesses run Windows update services or Linux update services inside their private network, but that comes with an admin overhead.
NAT offers us a third option, and it works really well in this style of situation.
We provision a NAT gateway into a public subnet, and remember, the public subnet allows us to use public IP addresses.
The public subnet has a route table attached to it, which provides default IP version 4 routes pointing at the internet gateway.
So, because the NAT gateway is located in this public web subnet, it has a public IP which is routable across the public internet, so it's now able to send data out and get data back in return.
Now, the private subnet where the instances are located can also have its own route table, and this route table can be different than the public subnet route table.
So, we could configure it so that the route table that's on the application subnet has a default IP version 4 route, but this time, instead of pointing at the internet gateway, like the web subnet users, we configure this private route table so that it points at the NAT gateway.
This means when those instances are sending any data to any IP addresses that do not belong inside the VPC, by default, this default route will be used, and that traffic will get sent to the NAT gateway.
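One way to picture how the VPC router chooses between these routes is a longest-prefix match; this is only a sketch, using the scenario's VPC CIDR, and the target names are illustrative:

```python
import ipaddress

# Sketch of route selection in the application subnet's route table:
# the most specific matching route wins, so in-VPC destinations use the
# local route, and everything else falls through to the 0.0.0.0/0
# default route pointing at the NAT gateway.
app_subnet_routes = {
    "10.16.0.0/16": "local",        # the VPC CIDR from this scenario
    "0.0.0.0/0":    "nat-gateway",  # default route added for NAT
}

def route_target(dest_ip):
    addr = ipaddress.ip_address(dest_ip)
    matches = [(net, tgt) for net, tgt in app_subnet_routes.items()
               if addr in ipaddress.ip_network(net)]
    # Longest prefix (most specific) network wins.
    return max(matches, key=lambda m: ipaddress.ip_network(m[0]).prefixlen)[1]

print(route_target("10.16.32.20"))  # → local (stays inside the VPC)
print(route_target("1.3.3.7"))      # → nat-gateway (the default route)
```

This is why adding the default route doesn't disturb traffic between instances in the VPC: the /16 local route is always more specific than /0.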
So, let's have a look at how this packet flow works.
Let's simulate the flow packets from one of the private instances and see what the NAT gateway actually does.
So, first, instance 1 generates some data.
Let's assume that it's looking for software updates.
So, this packet has a source IP address of instance 1's private IP and a destination of 1.3.3.7.
For this example, let's assume that that's a software update server.
Now, because we have this default route on the route table of the application subnet, that packet is routed through to the NAT gateway.
The NAT gateway makes a record of the data packet.
It stores the destination that the packet is for, the source address of the instance sending it, and other details which help it identify the specific communication in future.
Remember, multiple instances can be communicating at once, and for each instance, it could be having multiple conversations with different public internet hosts.
So, the NAT gateway needs to be able to uniquely identify those.
So, it records the IP addresses involved, the source and destination, the port numbers, everything it needs, into a translation table.
So, the NAT gateway maintains something called a translation table which records all of this information.
And then, it adjusts the packet to the one that's been sent by the instance, and it changes the source address of this IP packet to be its own source address.
Now, if this NAT appliance were anywhere but AWS, what it would do right now is adjust the packet to have a publicly routable source address directly.
But remember, nothing inside a VPC ever has a public IP version 4 address directly attached to it.
That's what the internet gateway does.
So, the NAT gateway, because it's in the web subnet, it has a default route, and this default route points at the internet gateway.
And so, the packet is moved from the NAT gateway to the internet gateway by the VPC router.
At this point, the internet gateway knows that this packet is from the NAT gateway.
It knows that the NAT gateway has a public IP version 4 address associated with it, and so, it modifies the packet to have a source address of the NAT gateway's public address, and it sends it on its way.
The NAT gateway's job is to allow multiple private IP addresses to masquerade behind the IP address that it has.
That's where the term IP masquerading comes from, and it's why that term is more accurate.
So, the NAT gateway takes all of the incoming packets from all of the instances that it's managing, and it records all the information about the communication.
It takes those packets, it changes the source address from being those instances to its own IP address, its own external-facing IP address.
If it was outside AWS, this would be a public address directly.
That's how your internet router works for your home network.
All of the devices internally on your network talk out using one external IP address, your home router uses NAT.
But because it's in AWS, it doesn't have directly attached a real public IP.
The internet gateway translates from its IP address to the associated public one.
So, that's how the flow works.
If you need to give an instance its own public IP version 4 address, then only the internet gateway is required.
If you want to give private instances outgoing access to the internet and the AWS public services such as S3, then you need both the NAT gateway, to do this many-to-one translation, and the internet gateway, to translate from the IP of the NAT gateway to a real public IP version 4 address.
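The whole packet flow can be modelled as a toy simulation; the addresses, ports, and dictionary-based translation table below are simplifications for illustration, not how the real NAT gateway is implemented:

```python
# Toy simulation of IP masquerading: many private IPs hide behind one
# NAT address. The NAT gateway records each flow in a translation table,
# rewrites outgoing source addresses, and uses the table to route
# responses back to the right instance.
NAT_PRIVATE_IP = "10.16.48.10"   # hypothetical NAT gateway address

translation_table = {}           # (src_ip, src_port, dst_ip, dst_port) -> NAT port
next_nat_port = 50000

def outbound(packet):
    """Record the flow, then rewrite the source to the NAT gateway."""
    global next_nat_port
    key = (packet["src_ip"], packet["src_port"],
           packet["dst_ip"], packet["dst_port"])
    if key not in translation_table:
        translation_table[key] = next_nat_port
        next_nat_port += 1
    return {**packet, "src_ip": NAT_PRIVATE_IP,
            "src_port": translation_table[key]}

def inbound(packet):
    """Match a response to a recorded flow and restore the original IP."""
    for (src_ip, src_port, dst_ip, dst_port), nat_port in translation_table.items():
        if packet["dst_port"] == nat_port and packet["src_ip"] == dst_ip:
            return {**packet, "dst_ip": src_ip, "dst_port": src_port}
    return None   # no table entry: unsolicited inbound traffic is dropped

out = outbound({"src_ip": "10.16.32.10", "src_port": 41000,
                "dst_ip": "1.3.3.7", "dst_port": 443})
back = inbound({"src_ip": "1.3.3.7", "src_port": 443,
                "dst_ip": NAT_PRIVATE_IP, "dst_port": out["src_port"]})
print(back["dst_ip"])   # → 10.16.32.10, the original instance address
```

The `return None` branch is the outgoing-only property in miniature: a connection initiated from the internet has no translation table entry, so there is nowhere to send it.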
Now, let's quickly run through some of the key facts for the NAT gateway product that you'll be implementing in the next demo lesson.
First, and I hope this is logical for you by now, it needs to run from a public subnet, because it needs to be able to be assigned a public IP version 4 address for itself.
So, to deploy a NAT gateway, you already need your VPC in a position where it has public subnets.
And for that, you need an internet gateway, subnets configured to allocate public IP version 4 addresses, and default routes for those subnets pointing at the internet gateway.
Now, a NAT gateway actually uses a special type of public IP version 4 address that we haven't covered yet, called an elastic IP.
For now, just know that these are IP version 4 addresses which are static.
They don't change.
These IP addresses are allocated to your account in a region and they can be used for whatever you want until you reallocate them.
And NAT gateways use these elastic IPs; they're one of the services which utilizes elastic IPs.
Now, we'll be talking about elastic IPs later on in the course.
Now, NAT gateways are an AZ resilient service.
If you read the AWS documentation, you might get the impression that they're fully resilient in a region like an internet gateway.
They're not, they're resilient in the AZ that they're in.
So they can recover from hardware failure inside an AZ.
But if an AZ entirely fails, then the NAT gateway will also fail.
For a fully region resilient service, so to mirror the high availability provided by an internet gateway, then you need to deploy one NAT gateway in each AZ that you're using in the VPC and then have a route table for private subnets in that availability zone, pointing at the NAT gateway also in that availability zone.
So for every availability zone that you use, you need one NAT gateway and one route table pointing at that NAT gateway.
Now, they aren't super expensive, but it can get costly if you have lots of availability zones, which is why it's important to always think about your VPC design.
Now, NAT gateways are a managed service.
You deploy them and AWS handle everything else.
They can scale to 45 gigabits per second in bandwidth, and you can always deploy multiple NAT gateways and split your subnets across multiple provisioned products.
If you need more bandwidth, you can just deploy more NAT gateways.
For example, you could split heavy consumers across two different subnets in the same AZ, have two NAT gateways in the same AZ and just route each of those subnets to a different NAT gateway and that would quickly allow you to double your available bandwidth.
With NAT gateways, you're billed based on the number that you have.
So there's a standard hourly charge for running a NAT gateway, and this is obviously subject to change and differs by region, but it's currently about four cents per hour.
And note, this is actually an hourly charge.
So partial hours are billed as full hours.
And there's also a data processing charge.
So that's the same amount as the hourly charge around four cents currently per gigabyte of processed data.
So you've got this base charge that a NAT gateway consumes while running plus a charge based on the amount of data that you process.
So keep both of those things in mind for any NAT gateway related questions in the exam.
Don't focus on the actual values, just focus on the fact they have two charging elements.
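As a sketch of how those two charging elements combine, here's a small Python example; the rates are illustrative assumptions only, since actual NAT gateway pricing varies by region and over time.

```python
import math

# Illustrative NAT gateway rates (assumed examples; check current AWS pricing).
HOURLY_RATE = 0.045  # USD per hour the gateway exists
DATA_RATE = 0.045    # USD per GB of processed data

def nat_gateway_cost(hours_running, gb_processed):
    # Partial hours are billed as full hours, so round the hours up.
    billable_hours = math.ceil(hours_running)
    return billable_hours * HOURLY_RATE + gb_processed * DATA_RATE

# One NAT gateway running for 30.5 hours (billed as 31) processing 100 GB:
print(nat_gateway_cost(30.5, 100))
```

The key takeaway mirrors the lesson: there's a base charge just for the gateway existing, plus a separate charge driven by how much data it processes.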
Okay, so this is the end of part one of this lesson.
It's getting a little bit on the long side, and so I wanted to add a break.
It's an opportunity just to take a rest or grab a coffee.
Part two will be continuing immediately from the end of part one.
So go ahead, complete the video, and when you're ready, join me in part two.
-
Welcome back.
In this lesson I want to talk in detail about security groups within AWS.
These are the second type of security filtering feature commonly used within AWS, the other type being network access control lists which we've previously discussed.
So security groups and NACLs share many broad concepts but the way they operate is very different and it's essential that you understand those differences and the features offered by security groups for both the exam and real-world usage.
So let's just jump in and get started.
In the lesson on network access control lists I explained that they're stateless and by now you know what stateless and stateful mean.
Security groups are stateful, they detect response traffic automatically for a given request and this means that if you allow an inbound or outbound request then the response is automatically allowed.
You don't have to worry about configuring ephemeral ports, it's all handled by the product.
If you have a web server operating on TCP 443 and you want to allow access from the public internet then you'll add an inbound security group rule allowing inbound traffic on TCP 443 and the response which is using ephemeral ports is automatically allowed.
Now security groups do have a major limitation and that's that there is no explicit deny.
You can use them to allow traffic or you can use them to not allow traffic and this is known as an implicit deny.
So if you don't explicitly allow traffic then you're implicitly denying it, but, and this is important, you're unable to explicitly deny traffic using security groups, and this means that they can't be used to block specific bad actors.
Imagine you allow all source IP addresses to connect to an instance on port 443 but then you discover a single bad actor is attempting to exploit your web server.
Well you can't use security groups to block that one specific IP address or that one specific range.
If you allow an IP, or an IP range, or even if you allow all IP addresses, then security groups cannot be used to deny a subset of those, and that's why typically you'll use network access control lists in conjunction with security groups, where the NACLs are used to add explicit denies.
Now security groups operate at a higher layer of the OSI 7-layer stack than NACLs, which means that they have more features.
They support IP and CIDR-based rules but they also allow referencing AWS logical resources.
This includes other security groups and even the security group itself within rules.
I'll be covering exactly how this works on the next few screens.
Just know at this stage that it enables some really advanced functionality.
An important thing to understand is that security groups are not attached to instances nor are they attached to subnets.
They're actually attached to specific elastic network interfaces known as ENIs.
Now even if you see the user interface present this as being able to attach a security group to an instance know that this isn't what happens.
When you attach a security group to an instance what it's actually doing is attaching the security group to the primary network interface of that instance.
So remember security groups are attached to network interfaces that's an important one to remember for the exam.
Now at this point let's step through some of the unique features of security groups and it's probably better to do this visually.
Let's start with a public subnet containing an EC2 instance, and this instance has an attached primary elastic network interface.
On the right side we have a customer Bob, and Bob is accessing the instance using HTTPS, so this means TCP port 443.
Conceptually think of security groups as something which surrounds network interfaces in this case the primary interface of the EC2 instance.
Now this is how a typical security group might look.
It has inbound and outbound rules just like a network ACL and this particular example is showing the inbound rules allowing TCP port 443 to connect from any source.
The security group applies to all traffic which enters or leaves the network interface, and because they're stateful, in this particular case, because we've allowed TCP port 443 as the request portion of the communication, the corresponding response part, the connection from the instance back to Bob, is automatically allowed.
Now lastly I'm going to repeat this point several times throughout this lesson.
Security groups cannot explicitly block traffic.
This means with this example, if you're allowing 0.0.0.0/0 to access the instance on TCP port 443, and this means the whole IPv4 internet, then you can't block anything specific.
Imagine Bob is actually a bad actor.
Well in this situation security groups cannot be used to add protection.
You can't add an explicit deny for Bob's IP address.
That's not something that security groups are capable of.
Okay so that's the basics.
Now let's look at some of the advanced bits of security group functionality.
Security groups are capable of using logical references.
Let's step through how this works with a similar example to the one you just saw.
We start with a VPC containing a public web subnet and a private application subnet.
Inside the web subnet is the Categoram application web instance and inside the app subnet is the back-end application instance.
Both of these are protected by security groups.
We have A4L-web and A4L-app.
Traffic wise we have Bob accessing the web instance over TCP port 443, and because this is the entry point to the application, which logically has other users than just Bob, we're allowing TCP port 443 from any IPv4 address, and this means we have a security group with an inbound rule set which looks like this.
In addition to this front-end traffic the web instance also needs to connect with the application instance and for this example let's say this is using TCP port 1337.
Our application is that good.
So how best to allow this communication?
Well we could just add the IP address of the web instance into the security group of the application instance, or if we wanted to allow our application to scale and change IPs, then we could add the CIDR ranges of the subnets instead of IP addresses.
So that's possible but it's not taking advantage of the extra functionality which security groups provide.
What we could do is reference the web security group within the application security group.
So this is an example of the application security group.
Notice that it allows TCP port 1337 inbound but it references as the source a logical resource the security group.
Now using a logical resource reference in this way means that the source reference of the A4L-web security group actually references anything which has that security group associated with it.
So in this example any instances which have the A4L-web security group attached to them can connect to any instances which have the A4L-app security group attached to them using TCP port 1337.
So in essence this references this.
So this logical reference within the application security group references the web security group and anything which has the web security group attached to it.
Now this means we don't have to worry about IP addresses or CIDR ranges and it also has another benefit.
It scales really well.
So as additional instances are added to the application subnet and web subnet and as those instances are attached to the relevant security groups they're impacted by this logical referencing allowing anything defined within the security group to apply to any new instances automatically.
Now this is critical to understand so when you reference a security group from another security group what you're actually doing is referencing any resources which have that security group associated with them.
So this substantially reduces the admin overhead when you have multi-tiered applications and it also simplifies security management, which means it's less prone to errors.
Now logical references provide even more functionality.
They allow self referencing.
Let's take this as an example a private subnet inside AWS with an ever-changing number of application instances.
Right now it's three but it might be three, thirty or one.
What we can do is create a security group like this.
This one allows incoming communications on port TCP 1337 from the web security group but it also has this rule which is a self-referential rule allowing all traffic.
What this means is that if it's attached to all of the instances, then anything with this security group attached can receive communication, so all traffic, from anything else which also has this security group attached to it.
So it allows communications to occur to instances which have it attached from instances which have it attached.
It handles any IP changes automatically, which is useful when these instances are within an auto scaling group which is provisioning and terminating instances based on the load on the system.
It also allows for simplified management of any intra-app communications.
An example of this might be Microsoft domain controllers or managing application high availability within clusters.
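To make the referencing behavior described above concrete, here's a minimal Python sketch of security-group evaluation; the group IDs, names, and port numbers are hypothetical, and the model is deliberately simplified, but it captures the three key behaviors: CIDR sources, logical references to another security group, a self-reference, and the implicit deny.

```python
import ipaddress

# Hypothetical security groups. A rule's source is either a CIDR string or
# another security group's ID, including the group's own ID (self-reference).
security_groups = {
    "sg-web": {"rules": [
        {"protocol": "tcp", "port": 443, "source": "0.0.0.0/0"},
    ]},
    "sg-app": {"rules": [
        {"protocol": "tcp", "port": 1337, "source": "sg-web"},  # logical reference
        {"protocol": "all", "port": None, "source": "sg-app"},  # self-reference
    ]},
}

def allowed(dest_sg, protocol, port, src_ip=None, src_sgs=()):
    """Return True if any allow rule matches; otherwise it's an implicit deny.

    src_sgs lists the security groups attached to the *sending* resource,
    which is what a logical reference actually matches against.
    """
    for rule in security_groups[dest_sg]["rules"]:
        if rule["protocol"] not in ("all", protocol):
            continue
        if rule["port"] not in (None, port):
            continue
        src = rule["source"]
        if src.startswith("sg-"):
            if src in src_sgs:  # matches anything with that SG attached
                return True
        elif src_ip and ipaddress.ip_address(src_ip) in ipaddress.ip_network(src):
            return True
    return False  # no explicit deny exists; unmatched traffic is implicitly denied

# A web instance (sg-web attached) reaching the app tier on 1337: allowed.
print(allowed("sg-app", "tcp", 1337, src_sgs=("sg-web",)))   # True
# A random internet host trying SSH to the app tier: implicit deny.
print(allowed("sg-app", "tcp", 22, src_ip="203.0.113.9"))    # False
```

Notice there is no way to express a deny rule in this model at all, which mirrors the limitation discussed earlier in the lesson.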
So this is everything I wanted to cover about security groups within AWS.
So there's a lot of functionality and intelligence that you gain by using security groups versus network ACLs but it's important that you understand that while network ACLs do allow you to explicitly deny traffic security groups don't and so generally you would use network ACLs to explicitly block any bad actors and use security groups to allow traffic to your VPC based resources.
You do this because security groups are capable of this logical resource referencing, so they can reference AWS logical resources such as other security groups or even themselves, to allow this free flow of communications within a security group.
At this point that is everything I wanted to cover in this lesson so go ahead and complete the video and when you're ready I'll look forward to you joining me in the next lesson.
-
Welcome back and by now you should understand the difference between stateless and stateful security protection.
In this lesson I want to talk about one security feature of AWS VPCs in a little bit more depth, and that's network access control lists, known as NACLs.
Now we do have a lot to cover so let's jump in and get started.
A network access control list can be thought of as a traditional firewall available within AWS VPCs, so let's look at a visual example.
A subnet within an AWS VPC which has two EC2 instances A and B.
The first thing to understand, and this is core to how NACLs work within AWS, is that they are associated with subnets.
Every subnet has an associated network ACL and this filters data as it crosses the boundary of that subnet.
In practice this means any data coming into the subnet is affected and data leaving the subnet is affected.
But and this is super important to remember connections between things within that subnet such as between instance A and instance B in this example are not affected by network ACLs.
Each network ACL contains a number of rules, two sets of rules to be precise.
We have inbound rules and outbound rules.
Now inbound rules only affect data entering the subnet and outbound rules affect data leaving the subnet.
Remember from the previous lesson this isn't always matching directly to request and response.
A request can be either inbound or outbound as can a response.
These inbound and outbound rules are focused only on the direction of traffic not whether it's request or response.
In fact, and I'll cover this very soon, NACLs are stateless, which means they don't know if traffic is request or response.
It's all about direction.
Now rules match the destination IP or IP range, destination port or port range together with the protocol and they can explicitly allow or explicitly deny traffic.
Remember this one network ACLs offer both explicit allows and explicit denies.
Now rules are processed in order.
First a network ACL determines if the inbound or outbound rules apply.
Then it starts from the lowest rule number.
It evaluates traffic against each individual rule until it finds a match.
Then that traffic is either allowed or denied based on that rule and then processing stops.
Now this is critical to understand because it means that if you have a deny rule and an allow rule which match the same traffic, and the deny rule comes first, then the allow rule might never be processed.
Lastly there's a catch-all, shown by the asterisk in the rule number, and this is an implicit deny.
If nothing else matches then traffic will be denied.
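To illustrate that ordering, here's a minimal Python sketch of NACL-style rule processing; the rule numbers and CIDR ranges are hypothetical examples rather than values from this lesson's diagrams, but the evaluation logic, lowest rule number first, first match wins, catch-all implicit deny, is the behavior just described.

```python
import ipaddress

# Hypothetical inbound rule set: an explicit deny for one bad-actor range,
# then a broad allow for HTTPS from anywhere.
inbound_rules = [
    {"number": 90,  "action": "deny",  "cidr": "203.0.113.0/24", "port": 443},
    {"number": 100, "action": "allow", "cidr": "0.0.0.0/0",      "port": 443},
]

def evaluate(rules, src_ip, port):
    # Rules are processed starting from the lowest rule number.
    for rule in sorted(rules, key=lambda r: r["number"]):
        if rule["port"] == port and \
                ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["cidr"]):
            return rule["action"]  # first match wins; processing stops here
    return "deny"                  # the '*' catch-all: implicit deny

print(evaluate(inbound_rules, "203.0.113.50", 443))  # deny  (rule 90 matches first)
print(evaluate(inbound_rules, "198.51.100.7", 443))  # allow (rule 100)
print(evaluate(inbound_rules, "198.51.100.7", 80))   # deny  (catch-all)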
So those are the basics.
Next let's move on to some more complex elements of network ACLs.
Now I just mentioned that network ACLs are stateless and this means that rules are required for both the request and the response part of every communication.
You need individual rules for those so one inbound and one outbound.
Take this example a multi-tiered application running in a VPC.
We've got a web server in the middle and an application server on the left.
On the right we have a user Bob using a laptop and he's accessing the website.
So he makes a connection using HTTPS which is TCP port 443 and this is the request as you know by now and this is also going to mean that a response is required using the ephemeral port range.
This ephemeral port is chosen at random from the available range decided by the operating system on Bob's laptop.
Now to allow for this initial communication if we're using network ACLs then we'll need to have one associated with the web subnet and it will need rules in the inbound and outbound sections of that network ACL.
Notice how on the inbound rule set we have rule number 110, which allows connections from anywhere, and this is signified by 0.0.0.0/0, through this network ACL, and this is allowed as long as it's using TCP port 443.
So this is what allows the request from Bob into the web server.
We also have on the outbound rule set rule number 120, and this allows outbound traffic to anywhere, again 0.0.0.0/0, as long as the protocol is TCP using the port range of 1024 to 65535, and this is the ephemeral port range which I mentioned in the previous lesson.
Now this is not amazingly secure but with stateless firewalls this is the only way.
Now we also have the implicit denies and this is denoted by the rules with the star in the rule number and this means that anything which doesn't match rule 110 or 120 is denied.
Now it's also worth mentioning, while I do have rules 110 and 120 numbered differently, the rule numbers are unique on inbound and outbound, so we could have the single rule number 110 on both rule sets and that would be okay.
It's just easier to illustrate this if I use unique rule numbers for each of the different rule sets.
Now let's move on and increase the complexity a little.
So we have the same architecture we have Bob on the right, the web subnet in the middle and the application subnet on the left.
You know now that because network ACLs are stateless each communication requires one request rule and one response rule.
This becomes more complex when you have a multi-tiered architecture which operates across multiple subnets and let's step through this to illustrate why.
Let's say that Bob initiates a connection to the web server; we know about this already because I just covered it.
If we have a network ACL around the web subnet we'll need an inbound rule on the web network ACL.
There's also going to be response traffic so this is going to use the ephemeral port range and this is going to need an outbound rule on that same web network ACL so this should make sense so far.
But also the web server might need to communicate with the app server using some application TCP port.
Now this is actually crossing two subnet boundaries, the web subnet boundary and the application subnet boundary, so it's going to need an outbound rule on the web subnet NACL and also an inbound rule on the application subnet NACL.
Then we have the response for that as well, from the app server through to the web server, and this is going to be using ephemeral ports. But this also crosses two subnet boundaries: it leaves the application subnet, which will need an outbound rule on that NACL, and it enters the web subnet, which will also need an inbound rule on that network ACL. And what if each of those servers needs software updates? It will get even more complex really quickly.
You always have to be aware of these rule pairs, the application port request and the ephemeral response, for every single communication. In some cases you're going to have a multi-tier architecture, and this might mean the communications go through different subnets.
If you need software updates this will need more rules, and if you use network address translation or NAT you might need more rules still.
You'll need to worry about this if you use network ACLs within a VPC, for traffic to a VPC, traffic from a VPC, or traffic between subnets inside that VPC.
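To pin down just how many rules that one web-to-app connection needs, here's a small Python sketch; the tier names and the TCP port 1337 application port follow this lesson's example, while the exact rule layout is an illustrative simplification.

```python
# One logical connection (web tier -> app tier on TCP 1337) crosses two
# subnet boundaries, so with stateless NACLs it needs four rules in total.
EPHEMERAL = "1024-65535"

required_rules = [
    # The request leaves the web subnet...
    {"nacl": "web", "direction": "outbound", "port": "1337",    "purpose": "request"},
    # ...and enters the app subnet.
    {"nacl": "app", "direction": "inbound",  "port": "1337",    "purpose": "request"},
    # The response leaves the app subnet on an ephemeral port...
    {"nacl": "app", "direction": "outbound", "port": EPHEMERAL, "purpose": "response"},
    # ...and enters the web subnet.
    {"nacl": "web", "direction": "inbound",  "port": EPHEMERAL, "purpose": "response"},
]

# Each NACL ends up with one rule in each direction for this one connection.
for nacl in ("web", "app"):
    count = sum(r["nacl"] == nacl for r in required_rules)
    print(nacl, count)
```

Add Bob's front-end traffic and software updates on top of this and the rule count multiplies quickly, which is the management overhead the lesson is warning about.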
When a VPC is created, it's created with a default network ACL, and this contains inbound and outbound rule sets which have the default implicit deny but also a catch-all allow, and this means that the net effect is that all traffic is allowed. So the default within a VPC is that NACLs have no effect; they aren't used. This is designed to be beginner friendly and reduce admin overhead.
AWS prefer using security groups which I'll be covering soon.
If you create your own custom network ACLs though that's a different story.
Custom NACLs are created for a specific VPC and initially they're associated with no subnets.
They only have one rule on both the inbound and outbound rule sets, which is the default implicit deny, and the result is that if you associate this custom network ACL with any subnets, all traffic will be denied. So be careful with this; it's radically different behavior than the default network ACL created with a VPC.
Now at this point I just want to cover some finishing key points which you need to be aware of for any real-world usage and when you're answering exam questions.
So network access control lists, remember they're known as NACLs, are stateless, so they view request and response as different things, meaning you need to add rules both for the request and for the response.
A NACL only affects data which is crossing the subnet boundary, so communications between instances in the same subnet are not affected by a network ACL on that subnet.
Now this can mean that if you do have data crossing between subnets, then you need to make sure that each NACL on both of those subnets has the appropriate inbound and outbound rules, so you end up with a situation where one connection can in theory need two rules on each NACL if that connection is crossing two different subnet boundaries.
Now NACLs are able to explicitly allow traffic and explicitly deny it, and the deny is important because, as you'll see when I talk about security groups, this is a capability that you need NACLs for.
So network ACLs allow you to block specific IPs or specific IP ranges which are associated with bad actors so they're a really good security feature when you need to block any traffic attempting to exploit your systems.
Now network ACLs are not aware of any logical resources; they only allow you to use IPs and CIDR ranges, ports and protocols. You cannot reference logical resources within AWS, and NACLs also cannot be assigned to logical resources; they're only assigned to subnets within VPCs within AWS.
Now NACLs are very often used together with security groups, as mentioned, to add the capability to explicitly deny bad IPs or bad networks, so generally you would use security groups to allow traffic and you use NACLs to deny traffic, and I'll talk about exactly how this works in the next lesson.
Now each subnet within a VPC has one NACL associated with it; it's either going to be the default network ACL for that VPC or a custom one which you create and associate.
A single NACL though can be associated with many different subnets, so while a subnet can only have one network ACL, one network ACL can be associated with many different subnets.
Now at this point that is everything that I wanted to cover about network ACLs for this lesson, so go ahead, complete the video, and when you're ready I'll look forward to you joining me in the next lesson.
-
Welcome back and in this video I want to cover the differences between stateful and stateless firewalls.
And to do that I need to refresh your knowledge of how TCP and IP function.
So let's just jump in and get started.
In the networking fundamentals videos I talk about how TCP and IP worked together.
You might already know this if you have networking experience in the real world, but when you make a connection using TCP, what's actually happening is that each side is sending IP packets to each other.
These IP packets have a source and destination IP and are carried across local networks and the public internet.
Now TCP is a layer 4 protocol which runs on top of IP.
It adds error correction together with the idea of ports, so HTTP runs on TCP port 80 and HTTPS runs on TCP port 443 and so on.
So keep that in mind as we continue talking about the state of connections.
So let's say that we have a user here on the left Bob and he's connecting to the Categoram application running on a server on the right.
What most people imagine in this scenario is a single connection between Bob's laptop and the server.
So Bob's connecting to TCP port 443 on the server and in doing so he gets information back, in this case many different categories.
Now you know that below the surface at layer 3 this single connection is handled by exchanging packets between the source and the destination.
Conceptually though you can imagine that each connection, in this case it's an outgoing connection from Bob's laptop to the server.
Each one of these is actually made up of two different parts.
First we've got the request part where the client requests some information from a server, in this case from CaterGram, and then we have the response part where that data is returned to the client.
Now these are both parts of the same interaction between the client and server, but strictly speaking you can think of these as two different components.
What actually happens as part of this connection setup is this.
First the client picks a temporary port and this is known as an ephemeral port.
Now typically this port has a value between 1024 and 65535, but this range is dependent on the operating system which Bob's laptop is using.
Then once this ephemeral port is chosen the client initiates a connection to the server using a well-known port number.
Now a well-known port number is a port number which is typically associated with one specific popular application or protocol.
In this case TCP port 443 is HTTPS.
So this is the request part of the connection, it's a stream of data to the server.
You're asking for something, some cat pictures or a web page.
Next the server responds back with the actual data.
The server connects back to the source IP of the request part, in this case Bob's laptop, and it connects to the source port of the request part, which is the ephemeral port which Bob's laptop has chosen.
This part is known as the response.
So the request is from Bob's laptop using an ephemeral port to a server using a well-known port.
The response is from the server on that well-known port to Bob's laptop on the ephemeral port.
Now it's these values which uniquely identify a single connection.
So that's a source port and source IP.
And a destination IP and a destination port.
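The relationship between a request and its response can be sketched in a few lines of Python; the IP addresses here are the example addresses used in this lesson, and the ephemeral-port range assumes the common 1024 to 65535 span mentioned above.

```python
import random

def make_request(client_ip, server_ip, well_known_port):
    """Build the request half of a connection: ephemeral source port,
    well-known destination port."""
    ephemeral = random.randint(1024, 65535)  # chosen by the client's OS
    return {"src_ip": client_ip, "src_port": ephemeral,
            "dst_ip": server_ip, "dst_port": well_known_port}

def make_response(request):
    """The response's addressing is the exact inverse of the request's:
    the server replies to the request's source IP and ephemeral port."""
    return {"src_ip": request["dst_ip"], "src_port": request["dst_port"],
            "dst_ip": request["src_ip"], "dst_port": request["src_port"]}

req = make_request("119.18.36.73", "1.3.3.7", 443)
resp = make_response(req)
print(resp["src_port"], resp["dst_ip"])  # 443 119.18.36.73
```

Those four values together, source IP and port plus destination IP and port, are what uniquely identify the single connection.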
Now I hope that this makes sense so far; if not then you need to repeat this first part of the video again because this is really important to understand.
If it does make sense then let's carry on.
Now let's look at this example in a little bit more detail.
This is the same connection that we looked at on the previous screen.
We have Bob's laptop on the left and the CaterGram server on the right.
Obviously the left is the client and the right is the server.
I also introduced the correct terms on the previous screen so request and response.
So the first part is the client talking to the server asking for something and that's the request.
And the second part is the server responding and that's the response.
But what I want to get you used to is that the directionality depends on your perspective and let me explain what I mean.
So in this case the client initiates the request and I've added the IP addresses on here for both the client and the server.
So what this means is the packets will be sent from the client to the server and these will be flowing from left to right.
These packets are going to have a source IP address of 119.18.36.73, which is the IP address of the client.
So Bob's laptop and they will have a destination IP of 1.3.3.7, which is the IP address of the server.
Now the source port will be a temporary or ephemeral port chosen by the client and the destination port will be a well-known port.
In this case we're using HTTPS so TCP port 443.
Now if I challenge you to take a quick guess, would you say that this request is outbound or inbound?
If you had to pick, if you had to define a firewall rule right now, would you pick inbound or outbound?
Well this is actually a trick question because it's both.
From the client perspective this request is an outbound connection.
So if you're adding a firewall rule on the client you would be looking to allow or deny an outbound connection.
From the server perspective though it's an inbound connection so you have to think about perspective when you're working with firewalls.
But then we have the response part from the server through to the client.
This will also be a collection of packets moving from right to left.
This time the source IP on those packets will be 1.3.3.7, which is the IP address of the server.
The destination IP will be 119.18.36.73, which is the IP address of the client.
So Bob's laptop.
The source port will be TCP port 443, which is the well-known port of HTTPS and the destination port will be the ephemeral port chosen originally by the client.
Now again I want you to think about the directionality of this component of the communication.
Is it outbound or inbound?
Well again it depends on perspective.
The server sees it as an outbound connection from the server to the client and the client sees it as an inbound connection from the server to itself.
Now this is really important because there are two things to think about when dealing with firewall rules.
The first is that each connection between a client and a server has two components, the request and the response.
So the request is from a client to a server and the response is from a server to a client.
The response is always the inverse direction to the request.
But the direction of the request isn't always outbound and isn't always inbound.
It depends on what that data is together with your perspective.
And that's what I want to talk about a bit more on the next screen.
Let's look at this more complex example.
We still have Bob and his laptop and the CaterGram server, but now we have a software update server on the bottom left.
Now the CaterGram server is inside a subnet which is protected by a firewall.
And specifically this is a stateless firewall.
A stateless firewall means that it doesn't understand the state of connections.
What this means is that it sees the request connection from Bob's laptop to CaterGram and the response from CaterGram to Bob's laptop as two individual parts.
You need to think about allowing or denying them as two parts.
You need two rules.
In this case one inbound rule which is the request and one outbound rule for the response.
This is obviously more management overhead.
Two rules needed for each thing.
Each thing which you as a human see as one connection.
But it gets slightly more confusing than that.
For connections to the CaterGram server, so for example when Bob's laptop is making a request, then that request is inbound to the CaterGram server.
The response, logically enough, is outbound, sending data back to Bob's laptop. But it's possible to have the inverse.
Consider the situation where the CaterGram server is performing software updates.
Well in this situation the request will be from the CaterGram server to the software update server, so outbound, and the response will be from the software update server to the CaterGram server, so this is inbound.
So when you're thinking about this, start with the request.
Is the request coming to you or going to somewhere else?
The response will always be in the reverse direction.
So this situation also requires two firewall rules.
One outbound for the request and one inbound for the response.
Now there are two really important points I want to make about stateless firewalls.
First, consider any servers which both accept connections and initiate connections; this is common with web servers, which need to accept connections from clients but also need to do software updates.
In this situation you'll have to deal with two rules for each of these connections, and they will need to be the inverse of each other.
So get used to thinking that outbound rules can be both the request and the response, and inbound rules can also be the request and the response.
It's initially confusing, but just remember, start by determining the direction of the request, and then always keep in mind that with stateless firewalls you're going to need an inverse rule for the response.
Now the second important thing is that the request component is always going to be to a well-known port.
If you're managing the firewall for the CaterGram application, you'll need to allow connections to TCP port 443.
The response though is always from the server to a client, but this always uses a random ephemeral port, because the firewall is stateless, it has no way of knowing which specific port is used for the response, so you'll often have to allow the full range of ephemeral ports to any destination.
This makes security engineers uneasy, which is why stateful firewalls, which I'll be talking about next, are much better.
Just focus on these two key elements, that every connection has a request and a response, and together with those keep in mind the fact that they can both be in either direction, so a request can be inbound or outbound, and a response will always be the inverse to the directionality of the request.
Also keep in mind that any rules that you create for the response will often need to allow the full range of ephemeral ports.
That's not a problem with stateful firewalls, which I want to cover next.
So we're going to use the same architecture, we've got Bob's laptop on the top left, the category server on the middle right, and the software update server on the bottom left.
A stateful firewall is intelligent enough to identify the response for a given request. Since the ports and IPs are the same, it can link one to the other, and this means that for a specific request to CaterGram from Bob's laptop to the server, the firewall automatically knows which data is the response. And the same is true for software updates: for a given connection to a software update server, the request, the firewall is smart enough to be able to see the response, the return data from the software update server back to the CaterGram server. And this means that with a stateful firewall, you'll generally only have to allow the request or not, and the response will be allowed or not automatically.
This significantly reduces the admin overhead and the chance for mistakes, because you just have to think in terms of the directionality and the IPs and ports of the request, and it handles everything else.
In addition, you don't need to allow the full ephemeral port range, because the firewall can identify which port is being used, and implicitly allow it based on it being the response to a request that you allow.
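As a purely illustrative sketch of this difference, here's a minimal Python model. None of this is a real firewall API; the rule shapes, port numbers, and function names are all my own assumptions, invented just to contrast the two behaviours.

```python
# Contrast a stateless rule check, which needs explicit rules for both
# directions (including the wide-open ephemeral range for responses), with
# a stateful check, which tracks connections and allows responses implicitly.
# Everything here is a hypothetical illustration, not a real firewall.

EPHEMERAL = range(1024, 65536)  # simplified ephemeral port range

# Stateless: every direction needs its own rule.
stateless_rules = [
    {"dir": "in",  "dst_port": 443},         # inbound HTTPS requests
    {"dir": "out", "dst_ports": EPHEMERAL},  # outbound responses, wide open
]

def stateless_allows(direction, dst_port):
    """Check a packet against the static rule list, one direction at a time."""
    for rule in stateless_rules:
        if rule["dir"] != direction:
            continue
        if rule.get("dst_port") == dst_port:
            return True
        if dst_port in rule.get("dst_ports", []):
            return True
    return False

# Stateful: one rule for the request; responses matched via a connection table.
connections = set()

def stateful_request(src_ip, src_port, dst_ip, dst_port):
    """Allow the request if it matches the single rule, and track it."""
    if dst_port == 443:
        connections.add((dst_ip, dst_port, src_ip, src_port))
        return True
    return False

def stateful_response(src_ip, src_port, dst_ip, dst_port):
    """A response is allowed only if it's the reverse of a tracked connection."""
    return (src_ip, src_port, dst_ip, dst_port) in connections
```

Notice that the stateless model has to allow the entire ephemeral range outbound, while the stateful model never opens those ports at all; it implicitly allows only the exact reverse of a request it already approved.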
Okay, so that's how stateless and stateful firewalls work. Now, it's been a little bit abstract, but this has been intentional, because I want you to understand how they work conceptually before I go into more detail about how AWS implements both of these types of firewall.
Now at this point, I've finished with the abstract description, so go ahead and finish this video, and when you're ready, I'll look forward to you joining me in the next.
learn.cantrill.io
Welcome back.
In this lesson, I want to introduce the term "service models", specifically "cloud service models".
If you've ever heard or seen the term "something as a service", or XaaS, then this is generally a cloud service model, and that's what I want to cover in this lesson.
Before I start, there are a few terms I'd like to introduce, which will make the rest of this lesson make more sense as well as being helpful throughout the course.
If you already know these, then that's fine.
It will just be a refresher.
But if these are new, then it's important to make sure you understand all these concepts because they're things that underpin a lot of what makes the cloud special.
Now, when you deploy an application anywhere, it uses what's known as an infrastructure stack.
An infrastructure stack is a collection of things which that application needs, all stacked onto each other.
Starting at the bottom, everything runs inside a facility, which is a building with power, with aircon, with physical security.
Everything uses infrastructure, so storage and networking.
An application generally requires one or more physical servers.
These servers run virtualization, which allows them to be carved up into virtual machines.
These virtual machines run operating systems.
They could potentially run containers.
An example of this is Docker.
Don't worry if you don't know what these are, I'll be covering them later in the course.
Every application is written in a language such as Python, JavaScript, C, C++, C#, and all of these have an environment that they need to run in.
This is called a runtime environment.
An application needs data to work on, which it creates or consumes.
And then at the very top is the application itself.
Now, all of this together is an infrastructure stack or an application stack.
If you use Netflix or Office 365 or Slack or Google or your online bank or this very training site, it has parts in each of these tiers.
Even the application that you're using right now to watch this training is running on an operating system, which is running itself on a laptop, a PC or a tablet, which is just hardware.
The hardware uses infrastructure, your internet connection, and this runs in facilities, so your house or a coffee shop.
With any implementation of this stack, there are parts of the stack that you manage and there are parts of the stack which are managed by the vendor.
So if you're working in a coffee shop, they'll have specific people to manage the building and the internet connection, and that's probably not you.
But it's more likely that you are responsible for your laptop and the operating system running on top of it.
Now, this is true for any system.
Some parts you manage, and some parts someone else manages.
You don't manage any part of Netflix, for example; Netflix as an entity manages everything end to end.
But if you do work in IT, then maybe you do manage all of the IT infrastructure stack or some parts of it.
The last term that I want to introduce is what's known as the unit of consumption.
It's what you pay for and it's what you consume.
It's the part of the system where from that point upwards in the infrastructure stack, you are responsible for management.
For example, if you procure a virtual server, then your unit of consumption is the virtual machine.
A virtual machine is just an operating system and an allocation of resources.
So your unit of consumption is the operating system.
In AWS, if you create a virtual machine known as an instance, then you consume the operating system.
If you use Netflix, though, then you consume the service and that's it.
You have no involvement in anything else.
The unit of consumption is what makes each service model different.
So let's take a look.
With an on-premise system, so that's one which is running in a building that your business owns, your business has to buy all parts of the stack.
It has to manage them all, pay for the upkeep and running costs of all of them.
And it has to manage the staff costs and risks associated with every single part of that stack.
Now, because it owns and controls everything, while it's expensive and it does carry some risks, it's also very flexible.
In theory, you can create systems which are tailor-made for your business.
Now, before cloud computing became as popular as it is now, it was possible to use something called data center hosting.
Now, this is similar to on-premises architectures, but when you use data center hosting, you place your equipment inside a building which is owned and managed by a vendor.
This meant that the facilities were owned and controlled by that vendor.
You as a business consumed space in that facility.
Your unit of consumption was a rack space.
If you rented three racks from a data center provider, they provided the building, the security, the power, the air conditioning and the staffing to ensure the service you paid for was provided.
All of the service models we use today are just evolutions of this type of model where more and more parts of the stack are handed off to a vendor.
Now, the cost change, the risks that are involved change and the amount of flexibility you have changed, but it's all the same infrastructure stack just with different parts being controlled by different entities.
So let's look at this further.
The first cloud service model that I want to talk about is infrastructure as a service or IaaS.
With this model, the provider manages the facilities, the storage and networking, the physical server and the virtualization and you consume the operating system.
Remember, a virtual machine is just an operating system with a certain amount of resources assigned.
It means that you still have to manage the operating system and anything above the operating system so any containers, the runtime, the data and your applications.
So why use IaaS as a service model?
With IaaS, you generally pay a per-second, per-minute or per-hour fee for the virtual machine.
You pay that fee when you use that virtual machine and you don't pay when you don't use it.
The costs associated with managing a building, procuring and maintaining infrastructure and hardware and installing and maintaining a virtualization layer are huge and they're all managed by the vendor.
The vendor needs to purchase things in advance, pay licenses, pay staff to keep things running and manage the risks of data loss, hardware failure and a wealth of other things.
Using IaaS means that you can ignore all of those and let the vendor manage them.
IaaS is one of the most popular cloud service models.
Now, you do lose a little bit of flexibility because you can only consume the virtual machine sizes and capabilities that the provider allows, but there is a substantial cost reduction because of that.
In AWS, a product called Elastic Compute Cloud or EC2 uses the IaaS service model.
So in summary, IaaS is a great compromise.
You do lose a little bit in terms of flexibility, but there are substantial costs and risk reductions.
Okay, so let's move on.
Another popular service model is Platform as a Service, or PaaS.
Now, this service model is aimed more at developers who have an application they just want to run and not worry about any of the infrastructure.
With PaaS, your unit of consumption is the runtime environment.
So if you run a Python application, you pay for a Python runtime environment.
You give the vendor some data and your application and you put it inside this runtime environment and that's it.
You manage your application and its data and you consume the runtime environment, which effectively means that the provider manages everything else: containers, operating system, virtualization, servers, infrastructure and facilities.
Now, let's review one final service model before we finish this lesson.
The final service model is Software as a Service or SaaS.
And with SaaS, you consume the application.
You have no exposure to anything else.
You pay a monthly fee for consuming the application.
You get it as a service.
Now, examples of SaaS products include Netflix, Dropbox, Office 365, Flickr, even Google Mail.
Businesses consume SaaS products because they are standard known services.
Email is email.
One email service is much like another.
And so a business can save significant infrastructure costs by consuming their email service as a SaaS solution.
They don't have much control of exactly how the email services can be configured, but there are almost no risks or additional costs associated with procuring a SaaS service.
IaaS, PaaS and SaaS are examples of cloud service models.
Now, there are others, such as Function as a Service, known as FaaS, Container as a Service, Database as a Service or DBaaS, and there are many more.
For this lesson, the important points to understand are that the infrastructure stack exists in every service and application that you use.
Part of the stack is managed by you, and part of the stack is managed by the provider.
And for every model, there is part of the stack which you consume, your unit of consumption.
That's the part that you pay for and generally the part that delineates between where the vendor manages and where you manage.
Now, again, I know this has been a fairly theory heavy lesson, but I promise you it will be invaluable as you go through the course.
Thanks for listening.
Go ahead, complete this video.
And when you're ready, join me in the next.
learn.cantrill.io
In this lesson, I want to cover a theoretical topic which is really important to me personally, and something that I think is really valuable to understand.
That is, what is multi- and hybrid Cloud, and how do they relate to private and public Cloud platforms?
Now, why this matters is because AWS, Azure and the Google Cloud Platform all offer private Cloud environments which can be used in conjunction with their public Clouds.
So to be able to pick when and where to use them effectively, you need to understand when something is multi-Cloud and when it's hybrid Cloud because these are very different things.
So let's jump in and get started.
In the previous lesson, I covered the formal definition of Cloud computing.
Now, I know this was a very dry theoretical lesson, but hopefully you've come out of that understanding what a Cloud environment is.
Now, public Cloud, simply put, is a Cloud environment that's available to the public.
Many vendors are currently offering public Cloud platforms including AWS, Microsoft Azure, and Google Cloud.
These are all examples of public Cloud platforms.
They're public Cloud because they meet the five essential characteristics of Cloud computing, and they're available to the general public.
So to be public Cloud, it needs to first classify as a Cloud environment and then it needs to be available to the general public.
Now, if you have very specific needs, or if you want to implement something which is highly available, you can choose to use multiple public Cloud platforms in a single system.
Now, that's known as multi-Cloud.
So multi-Cloud is using multiple Cloud environments and the way that you implement this can impact how successful it is.
Now, keeping things simple, you could choose to implement a simple mirrored system.
One part of your system could be hosted inside AWS and the other in Azure.
This means that you've got Cloud provider level resilience.
If one of these vendors fails, you'll know that at least part of your system will remain fully functional and running in the other.
Now, with regards to multi-Cloud, I would personally stay away from any products or vendors who attempt to provide a so-called single management window, or "single pane of glass" if you want to use the jargon, when using multiple Cloud platforms.
It is possible to manage multiple Cloud platforms as one single environment, but while it is possible, it abstracts away from these individual environments, relying on the lowest common feature set.
And so you do lose a lot of what makes each vendor special and unique.
So in this example, I could pick AWS and Azure.
I could abstract away from that using a third-party tool.
And when I wanted to provision a virtual machine, that tool would select which Cloud vendor to use.
The problem with that is that it would have to assume a feature set which is available in both of them.
So if AWS had any features that weren't available in Azure or vice versa, this third-party tool could not utilize them while staying abstracted away.
So that's a really important thing to understand.
Generally, when I'm thinking about multi-Cloud environments, I'm looking at it from a highly available perspective.
So putting part of my infrastructure in one and part in another.
It's much simpler and generally much more effective.
Now, each of these three Cloud vendors also offers a solution which can be dedicated to your business and run from your business premises.
This is a so-called private Cloud.
Now, for AWS, this is called AWS Outposts.
For Azure, it's the Azure Stack.
And for Google, it's Anthos.
Now, I want to make a very special point of highlighting that there is a massive difference between having on-premises infrastructure, such as VMware, Hyper-V, or XenServer, versus having a private Cloud.
A private Cloud still needs to meet the five essential characteristics of Cloud computing, which most traditional infrastructure platforms don't.
So private Cloud is Cloud computing, which meets these five characteristics, but which is dedicated to you as a business.
So with a VMware, Hyper-V, or XenServer implementation, they're not necessarily private Cloud.
A lot of these platforms do have private Cloud-like features, but in general, the only environments that I consider true private Cloud are Outposts, the Azure Stack, and Google Anthos.
Now, it is possible to use private Cloud in conjunction with public Cloud.
And this is called hybrid Cloud.
It's hybrid Cloud only if you use a private Cloud and a public Cloud, cooperating together as a single environment.
It's not hybrid Cloud if you just utilize a public environment such as AWS together with your on-premises equipment.
Now, to add confusion, you might hear people use the term hybrid environment.
And in my experience, people use hybrid environment to refer to the idea of public Cloud used together with existing on-premises infrastructure.
So I'm going to try throughout this course to have separate definitions.
If I use the terms hybrid environment or hybrid networking, then that's different.
That simply means connecting a public Cloud environment through to your on-premises or data-center-based traditional infrastructure.
So there's a difference between hybrid Cloud, which is a formal definition, and then hybrid environment or hybrid networking.
So try and separate those and understand what's meant by each.
With true hybrid Cloud, you get to use the same tooling, the same interfaces, the same processes to interact with both the public and private components.
So let's summarize this.
Public Cloud means to use a single public Cloud environment such as AWS, Azure, or Google Cloud.
Private Cloud means using an on-premises Cloud.
Now, this is important.
This is one of the most important distinctions to make.
For it to be private Cloud, you need to be using an on-premises real Cloud product.
It needs to meet those five essential characteristics.
Multi Cloud means using more than one public Cloud.
So an example of this might be AWS and Azure, or AWS, Azure and Google.
They're examples of a multi Cloud deployment.
So using multiple public Clouds in one deployment, that's a multi Cloud environment.
And I mentioned that earlier in the lesson, that can be as simple as deploying half of your infrastructure to one public Cloud and half to the other, or using a third party tool that abstracts away from a management perspective.
But I would not recommend any abstraction or any third party tools.
Generally, in my experience, the best multi Cloud environments are those which use part of your infrastructure in one Cloud environment and part in the other.
Hybrid Cloud means utilizing public and private Clouds, generally from the same vendor, together as one unified platform.
And then lastly, and probably personally one of the most important points to me, Hybrid Cloud is not utilizing a public Cloud like AWS and connecting it to your legacy on-premises environment.
That is a hybrid environment or a hybrid network.
Hybrid Cloud is a very specific thing.
And I'm stressing this because it is important.
To be an effective solutions architect, you need to have a really good understanding of the distinctions between public Cloud, private Cloud, multi Cloud, hybrid Cloud and hybrid environments.
Understand all of those separate definitions.
Now that's all I wanted to cover in this lesson.
I hope it wasn't too dry.
I really do want to make sure that you understand all of these terms on a really foundational level because I think they're really important to be an effective solutions architect.
So go ahead, complete this lesson and then when you're ready, I'll see you in the next.
learn.cantrill.io
In this lesson, I want to introduce Cloud Computing.
It's a phrase that you've most likely heard, and it's a term you probably think you understand pretty well.
Cloud Computing is overused, but unlike most technical jargon, Cloud Computing actually has a formal definition, a set of five characteristics that a system needs to have to be considered cloud, and that's what I want to talk about over the next few minutes in this lesson.
Understanding what makes cloud "cloud" can help you understand what makes it special and help you design cloud solutions.
So let's jump in and get started.
Now because the term Cloud is overused, if you ask 10 people what the term means, you'll likely get 10 different answers.
What's scary is that if those 10 individuals are technical people who work with Cloud day to day, often some of those answers will be wrong.
Because unlike you, these people haven't taken the time to fully understand the fundamentals of Cloud Computing.
To avoid ambiguity, I take my definition of Cloud from a document created by NIST, the National Institute of Standards and Technology, which is part of the US Department of Commerce.
NIST creates standards documents, and one such document is named Special Publication 800-145, which I've linked in the lesson text.
The document defines the term Cloud.
It defines five things, five essential characteristics, which a system needs to meet in order to be cloud.
So AWS, Azure, and Google Cloud, they all need to meet all five of these characteristics at a minimum.
They might offer more, but these five are essential.
Now some of these characteristics are logical, and some may surprise you.
One of them is resource pooling: even though you and other businesses are probably sharing physical hardware, you would never know each other existed, and that's one of the benefits of pooling.
But on to characteristic number four, which is rapid elasticity.
The NIST document defines this as capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly, outward, and inward, commensurate with demand.
To the consumer, the capabilities available for provisioning often appear to be unlimited, and can be appropriated in any quantity at any time.
Now I simplify this again into two points.
First, capabilities can be elastically provisioned and released to scale rapidly, outward and inward with demand, and in this case, capabilities are just resources.
And second, to the consumer, the capabilities available for provisioning often appear to be unlimited.
Now when most people think about scaling in terms of IT systems, they see a system increasing in size based on organic growth.
Elasticity is just an evolution of that.
A system can start off small, and when system load increases, the system size increases.
But, crucially, with elasticity, when system load decreases, the system can reduce in size.
It means that the cost of a system increases as demand increases and the system scales, and decreases as demand drops.
Rapid elasticity is this process but automated, so the scaling can occur rapidly in real time with no human interaction.
Cloud vendors need to offer products and features, which monitor load, and allow automated provisioning and termination as load increases and decreases.
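As a hedged illustration only, the scaling decision that gets automated might be sketched like this; the thresholds and the function itself are invented for illustration, not any real AWS API.

```python
# A minimal sketch of rapid elasticity: a monitoring loop would call this
# repeatedly, adjusting the number of instances to track demand, scaling
# out as load rises and back in as it drops. Thresholds are hypothetical.

def desired_instances(load_percent, current, min_size=1, max_size=10):
    """Return the new instance count for the observed average load."""
    if load_percent > 70 and current < max_size:
        return current + 1      # scale out with demand
    if load_percent < 30 and current > min_size:
        return current - 1      # scale back in when demand drops
    return current              # load is in the comfortable band
```

The key point the sketch captures is that scaling happens in both directions automatically, so capacity (and therefore cost) follows load with no human interaction.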
Now, most businesses won't care about increased system costs if, for example during sale periods, their profits increase too.
Because the system scales along with that increased load and increased profits, the customers are kept happy.
Elasticity means that you don't have to, and indeed can't, over provision, because over provisioning wastes money.
It also means that you can't under provision and experience performance issues for your customers.
It's how a company like Amazon.com or Netflix can easily handle holiday sales, or handle the load generated by the release of the latest episode of Game of Thrones.
The second part is related to that.
A cloud environment shouldn't let you see capacity limits.
If you need 100 virtual machines or 1000, you should be able to get access to them immediately when required.
In the background, the provider is handling the capacity in a pooled way, but from your perspective, you should never really see any capacity limitations.
Now this is, in my opinion, the most important benefit of cloud, systems which scale in size in response to load.
So this is a really important one to make sure that a potential cloud environment offers in order to make sure that it is actually cloud.
Okay, let's move on to the final characteristic, and that's measured service.
Now this document defines this as cloud systems automatically control and optimize resource use by leveraging and metering capability at some level of abstraction appropriate to the type of service.
And it says that resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the service.
Now my simplified version of this is that resource usage can be monitored, controlled, reported, and built.
Traditional, non-cloud infrastructure works using capex: you pay for servers and hardware in advance.
In the beginning, you had more capacity than you needed, so money was wasted.
Your demand grew over time, and eventually you purchased more servers to cope with the demand.
If you did that too slowly, you had performance issues or failures.
A true cloud environment, though, offers on-demand billing.
Your usage is monitored on a constant basis.
You pay for that usage.
This might be a certain amount per second, minute, hour, or day of usage of a certain service, for example virtual machines.
Or it could be a certain cost for every gigabyte you store on a storage service for a given month.
You generally pay nothing in advance if it truly is a cloud platform.
If you consume one virtual server for a month, but then for 30 minutes in that month you use 100 virtual servers, then you should pay for a month of the single server, plus an amount for those 100 servers covering just that 30 minutes.
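To make the arithmetic behind this concrete, here's a minimal sketch using an entirely made-up hourly rate; real cloud pricing varies by provider and instance type.

```python
# A rough sketch of the on-demand billing example above. The hourly rate
# is a hypothetical figure purely to show the arithmetic.

HOURLY_RATE = 0.10          # assumed cost per server per hour
HOURS_IN_MONTH = 30 * 24    # 720 hours in a 30-day month

# One server running all month, billed for every hour it runs.
baseline = 1 * HOURS_IN_MONTH * HOURLY_RATE

# 100 extra servers, billed only for the 30 minutes they actually run...
burst_on_demand = 100 * 0.5 * HOURLY_RATE

# ...versus what those 100 servers would cost if you had to provision
# them for the whole month, as with legacy up-front purchasing.
burst_provisioned = 100 * HOURS_IN_MONTH * HOURLY_RATE

total = baseline + burst_on_demand
```

Under these assumed numbers, the 30-minute burst adds only a small amount to the bill, whereas provisioning those 100 servers up front for the month would cost orders of magnitude more; that gap is exactly what on-demand billing eliminates.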
Legacy vendors will generally want you to pay a fee upfront, or buy or lease a server.
If this is the case, they aren't cloud, and they probably don't support some of the massively flexible architectures that the cloud allows you to build.
With that being said, that is everything I wanted to cover, so go ahead, complete this video, and when you're ready, I'll join you in the next.
learn.cantrill.io
Welcome back.
In a previous video I talked about YAML, which is a method of storing and passing data which is human readable.
In this video I want to cover JSON, which is the JavaScript object notation.
Let's jump in and take a quick look, and note that many of the topics which I covered in the YAML video also apply here.
We're conveying the same information; the format is just different.
So unfortunately we have another mouthful of definition incoming.
So JSON, or the JavaScript object notation, is a lightweight data interchange format.
It's easy for humans to read and write, and it's easy for machines to parse and generate.
That's what it says, but there are a few differences that you should be aware of before we move on.
JSON doesn't really care about indentation because everything is enclosed in something, so braces or brackets.
Because of this it can be much more forgiving regarding spacing and positioning.
And secondly because of that, JSON can appear initially harder to read.
But over time I've come to appreciate the way that JSON lays out the structure of its documents.
Now there are two main elements that you need to understand if you want to be competent with JSON.
First, an object or a JSON object.
And this is an unordered set of key value pairs enclosed by curly brackets.
Now from when you watched the YAML video, you should recognise this as a dictionary.
It's the same thing, but in JSON it's called an object.
The second main element in JSON is an array which is an ordered collection of values separated by commas and enclosed in square brackets.
Now from the YAML video you might recognise this as a list.
It's again the same thing, only in JSON it's called an array.
Now in both cases, arrays which are lists of values or objects which are collections of key value pairs, the value can be a string, an object, a number, an array, boolean, true or false, or finally null.
Now with these two high level constructs in mind, let's move on.
So this is an example of a simple JSON document.
Notice how even at the top level there are these curly brackets.
This shows that at the top level a JSON document is simply a JSON object, a collection of key value pairs, with each key and value separated by a colon.
In this example we have three keys.
We have cats, colours and finally, num of eyes.
And each key has a corresponding value in this example which is an array.
The top level key value pair has a value containing an array of cat names.
The middle has a value which is an array of the colour of the cats.
And then the last key value pair has a value which is a list of the number of eyes which each cat has.
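As a concrete illustration, the simple document just described might look like this, parsed here with Python's standard `json` module. The colour values and the exact key spellings are my assumptions for illustration, not taken from the lesson.

```python
import json

# The simple JSON document described above: a top-level object (curly
# brackets) whose three keys each have an array (square brackets) as a value.
document = """
{
    "cats": ["ruffle", "truffles", "penny", "winky"],
    "colours": ["brown", "grey", "black", "white"],
    "num_of_eyes": [2, 2, 2, 1]
}
"""

# Parse the text into native Python structures (dict of lists).
data = json.loads(document)
```

Once parsed, the values behave like ordinary Python lists, so `data["cats"][3]` gives `"winky"` and `data["num_of_eyes"][3]` gives `1`.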
Now JSON documents aren't limited to just arrays.
They can be much more complicated like this example.
Now this is a JSON document and every JSON document starts with a top level object, which is an unordered list of key value pairs surrounded by curly brackets.
This object has four key value pairs.
The keys are ruffle, truffles, penny and winky.
The value of each key is a JSON object, a collection of key value pairs.
So JSON objects can be nested within JSON objects, arrays can be ordered lists of JSON objects, which themselves can contain JSON objects.
And again, this lets you create complex structures which can be used by applications to store or exchange data and configuration.
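The nested structure just described can be sketched the same way; here every top-level value is itself an object. The colour and eye values are my assumptions for illustration.

```python
import json

# Each value in the top-level object is itself a JSON object,
# demonstrating that objects can be nested within objects.
document = """
{
    "ruffle":   {"colour": "brown", "num_of_eyes": 2},
    "truffles": {"colour": "grey",  "num_of_eyes": 2},
    "penny":    {"colour": "black", "num_of_eyes": 2},
    "winky":    {"colour": "white", "num_of_eyes": 1}
}
"""

# Parse into a dict of dicts; nested objects become nested dicts.
cats = json.loads(document)
```

Accessing a nested value is just chained lookups, for example `cats["winky"]["num_of_eyes"]`.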
Now I'll admit it, I'm actually a fan of JSON.
I think it's actually easier to write and read than YAML.
Many people will disagree and that's fine.
With that being said though, that's everything that I wanted to cover in this video.
So go ahead and complete the video and when you're ready, I'll look forward to you joining me in the next.
learn.cantrill.io
Welcome to this video which will be a fairly high level introduction to YAML.
Now YAML stands for YAML Ain't Markup Language, and for any keen observers, that's a recursive acronym.
Now I want this video to be brief but I think it's important that you understand YAML's structure.
So let's jump in and get started.
YAML is a language which is human readable and designed for data serialization.
Now that's a mouthful but put simply it's a language for defining data or configuration which is designed to be human readable.
At a high level a YAML document is an unordered collection of key value pairs separated by a colon.
It's important that you understand this lack of order.
At this top level there is no requirement to order things in a certain way.
Although there may be conventions and standards none of that is imposed by YAML.
An example key value pair might be the key being cat1 and the value being ruffle, one of my cats.
In this example, both the key and the value are just normal strings.
We could further populate our YAML file with a key of cat2 and a value of truffles, another cat of mine.
Or a key of cat3 and a value of penny, and a key of cat4 and a value of winky.
These are all strings.
Now YAML supports other types: numbers such as 1 and 2, floating point values such as 1.337, booleans so true or false, and even null, which represents nothing.
Now YAML also supports other structures, and one of those is the list, known as an array or by other names depending on which, if any, programming languages you're used to.
A list is essentially an ordered set of values and in YAML we can represent a list by having a key let's say Adrian's cats.
And then as a value we might have something that looks like this: a comma separated set of values inside square brackets.
Now this is known as inline format where the list is placed where you expect the value to be after the key and the colon.
Now the same list can also be represented like this where you have the key and then a colon and then you go to a new line and each item in the list is represented by hyphen and then the value.
Now notice how some of the values are actually enclosed in speech marks or quotation marks and so on.
This is optional.
All of these are valid.
Often though, it's safer to enclose things, as it allows you to be more precise and avoids confusion.
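The two list styles just described might look like this. The key names and cat names follow the lesson's example; I've used a second key name so the fragment stays one valid document, and the quoting choices are arbitrary.

```yaml
# Inline (flow) format - the list sits where the value would be:
adrians_cats: [ruffle, truffles, penny, winky]

# Block format - same data, one hyphen per item, all at the same indentation:
adrians_cats_alternative:
  - ruffle
  - "truffles"   # quoting values is optional for simple strings
  - penny
  - winky
```

Both keys here hold exactly the same list; the choice between flow and block style is purely about readability.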
Now in YAML indentation really matters.
Indentation is always done using spaces and the same level of indentation means that the things are within the same structure.
So we know that because all of these list items are indented by the same amount they're all part of the same list.
We know they're a list because of the hyphens.
So same indent always using hyphens means that they're all part of the same list, same structure.
Now these two styles are two methods for expressing the same thing.
A key called Adrian's cats whose value is a list.
This is the same structure.
It represents the same data.
Now there's one final thing which I want to cover with YAML and that's a dictionary.
A dictionary is just a data structure.
It's a collection of key value pairs which are unordered.
A YAML template has a top level dictionary.
It's a collection of key value pairs.
So let's look at an example.
Now this looks much more complicated but it's not if you just follow it through from the start.
So we start with a key value pair.
Adrian's cats at the top.
So the key is Adrian's cats and the value is a list.
And we can tell that it's a list because of the hyphens, which are at the same level of indentation.
But, and this is important, notice how for each list item we don't just have the hyphen and a value.
Instead we have the hyphen and for each one we have a collection of key value pairs.
So for the final list item at the bottom we have a dictionary containing a number of key value pairs.
The first has a key of name with a value of winky.
The second a key color with a value of white.
And then for this final list item a key, num of eyes and a value of one.
And each item in this list, each value is a dictionary.
A collection of one or more key value pairs.
So values can be strings, numbers, floats, booleans, lists or dictionaries or a combination of any of them.
Note how the color key value pair in the top list item, so the raffle dictionary at the top, its value is a list.
So in this structure, we have Adrian's cats, a key whose value is a list.
Each value in the list is a dictionary.
Each dictionary contains a name key with a value and a color key with a value.
And then the third item in the list also has a num of eyes key and a value.
Now using YAML key value pairs, lists and dictionaries allows you to build complex data structures in a way which once you have practice is very human readable.
In this case, it's a database of somebody's cats.
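The whole structure can be sketched as the equivalent native Python, with the details of each cat purely illustrative:

```python
# The YAML:
#   adrians_cats:
#     - name: ruffle
#       color: [black, white]
#     - name: winky
#       color: white
#       num_of_eyes: 1
# ...is a key whose value is a list, where every list item is a
# dictionary of key value pairs.
adrians_cats = [
    {"name": "ruffle", "color": ["black", "white"]},
    {"name": "winky", "color": "white", "num_of_eyes": 1},
]
assert adrians_cats[-1]["num_of_eyes"] == 1
```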
Now YAML can be read into an application or written out by an application.
And YAML is commonly used for the storage and passing of configuration.
For now thanks for watching, go ahead, complete the video and when you're ready I'll look forward to you joining me in the next.
learn.cantrill.io
Welcome to this video where I'm going to step through two concepts which I think should be mandatory knowledge for any solutions architect.
And for anyone else working in IT, these are also really useful.
First we have recovery point objective known as RPO and recovery time objective known as RTO.
Generally if you're a solutions architect helping a client, they will give you their required values for both of these.
In some cases you might need to work with key stakeholders within the business to determine appropriate values.
In either case if you get them wrong it can have a massive negative consequence to a business.
Let's jump in and get started.
I'm going to start by stepping through recovery point objective or RPO.
Recovery point objective or RPO is something that's generally expressed in minutes or hours.
And I'll illustrate this, let's say a given 24 hour period.
It starts on the left and midday, moves through midnight in the middle and finishes at 12 midday on the following day on the right.
We'll consider an animal rescue business who have animals arriving to be fostered 24/7/365.
They have intake and vet exam data stored within on-premises systems which need to be referred to constantly throughout the day.
At a certain point in time let's say 2am we have a server failure.
And for this example let's assume this is a single server which stores all the data for the organization and they have no redundancy.
Now this is a terrible situation but it's all too common for cash-strapped charities.
So remember, donate to your local animal rescue centre.
RPO is defined as the maximum amount of data and this is generally expressed in time that can be lost during a disaster recovery situation before that loss will exceed what the organization can tolerate.
If an organization tells you that they have an RPO of 6 hours, it means the organization cannot tolerate more than 6 hours of data loss when recovering from a disaster like this server failure.
Now different organizations will have different RPO values.
Banks logically won't be able to tolerate any data loss, because they deal with customer money.
Whereas an online store might be able to tolerate some data loss as they can in theory recreate orders in other ways.
Understanding how data can be lost during disaster recovery scenarios is key to understanding how to implement a given RPO requirement.
Let's consider this scenario that every 6 hours starting at 3pm on day 1 the business takes a full backup of the server which has failed.
So normally we have a backup at 3pm, one at 9pm, one at 3am and one at 9am.
So 4 backups every 24 hour period split by 6 hours.
In order to recover data from the failed server we need to restore a backup.
Ideally assuming that we have no failures it will be from the most recent backup.
Now successful backups are known as recovery points.
In the case of full backups each successful backup is one recovery point.
If you use full backups and incremental backups, it's possible that to restore from a single incremental backup, i.e. to use that one recovery point, you'll need the most recent full backup and every incremental backup between that full backup and the most recent incremental backup.
So it's possible that a recovery point will need more than one backup.
With this scenario so the server failure at 2am the data loss will be the time between 2am and the most recent recovery point.
In this case, 9pm on the previous day.
So this represents 5 hours of lost data.
If the failure occurred right after the 9pm backup had finished we'd have almost no data loss.
If the failure occurred one hour later at 3am we would have 6 hours of data loss.
Now the maximum loss of data for this type of scenario is the time between 2 successful backups.
In our case because backups occur every 6 hours then data loss could be a minimum of 0 if the server failure occurred right after the first backup finished or a maximum of 6 hours if the server failure occurred right before the next scheduled backup.
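The data loss arithmetic above can be sketched in Python (the dates are illustrative):

```python
from datetime import datetime, timedelta

def data_loss(failure_time, backups):
    # Data lost is the gap between the failure and the most recent
    # successful backup (recovery point) taken before it.
    prior = [b for b in backups if b <= failure_time]
    return failure_time - max(prior)

# Full backups every 6 hours starting at 3pm on day one.
day1 = datetime(2024, 1, 1)
backups = [day1 + timedelta(hours=15 + 6 * i) for i in range(4)]

# Server fails at 2am on day two: the most recent recovery point
# is the 9pm backup, so 5 hours of data are lost.
failure = day1 + timedelta(days=1, hours=2)
print(data_loss(failure, backups))
```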
So when an organisation informs you that they have a requirement for an RPO of 6 hours they're telling you that they can only tolerate a maximum of 6 hours of data loss in a disaster scenario.
And as a general rule this means that you need to make sure that backups occur as often or more often than the RPO value provided by the organisation.
An RPO of 6 hours means at minimum a backup every 6 hours but to cope with random backup failure generally you'll want to make sure backups occur more frequently than required.
So in this example maybe once every 3 hours or maybe even once an hour.
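As a rough rule of thumb, that could be expressed like this (the safety factor of 2 is just an illustrative choice):

```python
def max_backup_interval(rpo_hours, safety_factor=2):
    # To meet an RPO, backups must happen at least as often as the RPO
    # itself; dividing by a safety factor leaves headroom for the odd
    # failed backup.
    return rpo_hours / safety_factor

# An RPO of 6 hours suggests backing up at least every 3 hours.
print(max_backup_interval(6))
```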
Lower RPOs generally require more frequent backups, which historically has resulted in higher cost for backup systems, both in terms of media as well as licensing, management overhead and other associated processes.
So RPO is a value which is generally given to you by an organisation or you might have to work with an organisation to identify an appropriate value and it states how much maximum loss of data in time the business can tolerate.
Different businesses will have different RPOs and sometimes even different RPOs for different systems within a single organisation.
A bank might have super low RPOs for its financial systems but it might tolerate a much higher one for its website.
If data changes less frequently the system is less important then higher RPO values are easier to tolerate for a business.
Now let's move on and cover recovery time objective or RTO.
To explain RTO we're going to use the same example of a 24 hour period starting at midday on one day moving through that day with midnight in the middle and then moving to midday the following day on the right.
For this example though I've moved the server failure to 10pm on day one and the most recent backup was 9pm so one hour before the failure.
As you know now this means assuming the backup is working i.e. it's a valid recovery point that the data loss will be one hour, the time between the 9pm backup and the 10pm server failure.
RTO or recovery time objective simply put is the maximum tolerable length of time that a system can be down after a failure or disaster occurs.
Now once again just as with RPO this value is something that a business will give you as a directive or alternatively it's something that you'll work with a business on to determine a suitable value.
Also just as with RPOs, different businesses will have different RTOs.
A bank will have a much lower RTO for its banking systems than a cafe for its website, and an organisation will generally have different RTOs for its different systems.
Critical systems will have lower RTOs and less important systems can potentially have higher RTOs.
Now looking at the RTO definition in a different way, if the animal rescue business had an RTO of 13 hours, it would mean that for a server failure which occurred at 10pm, the IT operations team would have until 11am the following day, at the maximum, to fully restore the system to an operational state.
Now something which is really important that I need you to understand and this isn't always obvious especially if you haven't worked on or with a service desk before.
Recovery time of the system begins at the moment of the failure that's when the clock starts ticking and it ends not when the issue is fixed but when the system is handed back to the business in a fully tested state.
So in this example the clock starts at 10pm, and to meet an RTO of 13 hours the server needs to be running again, fully working, by 11am the following day.
So you might ask why I'm stressing this point?
Well, RTO isn't just about technical things. The biggest impacts on RTOs are things which, as a technical person, you might not always identify.
I want to step through some considerations which might impact the ability of this animal rescue's operations team to meet this 13 hour RTO directive.
The first thing which might not be immediately obvious is that to recover a system you need to know that that system has failed.
If the failure occurs at 10pm while the recovery time starts at that point the ability to recover only really starts when you're made aware that the system has failed.
So how long till the operations team know that there is an issue?
Is there monitoring in place on this service?
Is it reliable? Too many false positives will have people ignoring outage notifications.
How will the monitoring system notify staff?
Will it wake staff who are sleeping?
Will it be the correct staff, staff who are empowered to begin a recovery process?
This is the real starting point to any recovery process and it often adds lag onto the start of the process.
Even with major internet applications that I use it's not uncommon for outages to occur and then take a further 15 to 30 minutes before the vendor is actively aware and investigating a fault.
Now don't underestimate the importance of effective monitoring and notification systems.
Beyond that make sure that you've planned and configured these processes in advance.
Waking up a junior operations person who has no ability to make decisions or no ability to wake up senior staff members is useless in this scenario.
Best case this part of the process takes some time so make sure that it's built into your planning.
Now let's move on and assume that we do have somebody in the ops team who can begin the process.
Well step number two is going to be to investigate the issue.
It might be something which is fixable quickly or it might be that a server is literally on fire.
Somebody needs to take the time to make the final decision to perform a restore if required and again this will take some time.
Moving on if we assume that we are going to do a restore we need to focus on the backup system.
What type of backups do we have?
Some take longer to restore versus others.
If it's a tape backup system where are the tapes?
Where is the tape drive or the loader?
Who needs to restore it?
Do they need to be in a specific physical location?
How does the restore happen?
Is there a documented process?
And is the person or one of the people who can perform the restore available and awake?
All of these are critical in your ability to begin the restore process and they all take time.
Now with this type of disaster recovery scenario, if you don't have a documented and tested process, things can also be really stressful.
And in a stressful situation late at night, without your team around you, this is when mistakes happen.
But let's assume that we have a working backup system.
We know where the backup media is and we have that and somebody who can operate the restore.
The next step is where we're restoring to.
The server we had has had a major failure or might literally be on fire.
Have you thought about these choices in advance? What are we restoring onto?
Do we have a spare?
Do we need to order another server?
Are we using physical or virtual servers?
Or are we even forced to use a secondary disaster recovery site, because not only is the server on fire, but so is the server room.
Many people miss these elements when thinking about RTO but these are the things which really matter.
A badly documented process to restore servers or a slow notification system might add additional hours.
But having to order new server hardware could add days to a recovery time.
And finally, let's say that the restore has completed and the operations team thinks the service is back up and running.
Then there needs to be time allocated for business testing, user testing and final handover.
This isn't quick and has to be done before you consider recovery to be complete.
This entire process end to end is what recovery is.
And so if the business has a 13 hour RTO you need to make sure that all of this process in its entirety fits into that 13 hours.
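One way to sanity check this is to budget every phase of the end-to-end process against the RTO; the phase durations below are purely illustrative:

```python
rto_hours = 13

# Every phase consumes part of the RTO, not just the technical restore.
phases = {
    "detect failure and notify the right staff": 1.0,
    "investigate and decide to restore": 1.5,
    "locate backup media, staff and target hardware": 1.0,
    "perform the restore": 6.5,
    "business testing and handover": 2.5,
}

total = sum(phases.values())
print(f"planned recovery: {total} hours of a {rto_hours} hour RTO")
assert total <= rto_hours, "recovery plan does not fit the RTO"
```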
So that's what RTO is, a value given to you by a business or something that you help a business identify.
It's the maximum tolerable time to recover a system in the event that a disaster occurs.
It's the end to end process so this includes fault identification, restoration and final testing and handover.
So it's really important that when you're planning the recovery process for a system and you're given an RTO value by the business you're sure that you have time to perform all of these individual steps.
Now let's quickly summarize what we've learned before we finish this video.
So RPO is the maximum amount of data, expressed in time, that a business can lose.
Beyond this amount of data loss, the impact is not tolerable to the business.
So worst case this is the time between successful backups.
In general to implement more and more demanding RPO directives you need more frequent backups.
This means more cost but it does result in a lower RPO.
So when you see RPO think maximum data loss.
RTO or recovery time objective is a directive from the business which is a maximum restore time that that business can tolerate.
And this is end to end from identification through to final testing and handover.
So this can be reduced by effective planning, monitoring, notification, formal processes, spare hardware, training and more efficient systems such as virtual machines or AWS.
So RTO is the maximum time from when a failure occurs through to when the business will need that system back up and running in an operational state.
And by thinking about this in advance you can make your recovery process more efficient and meet more demanding RTOs from the business.
Now different businesses and different systems within the businesses will have different RPO and RTO values.
Generally the more critical a system is the lower and thus more demanding the RPO and RTO values will be.
And for non-critical systems, a business is usually more willing to tolerate higher and so less demanding RPO and RTO values.
Because generally what you're looking for is a goldilocks point, where you're as close to the true business requirements as possible.
Now as a solutions architect it's often the case that the business isn't aware of appropriate RPO and RTO values.
And so one of the core duties when designing new system implementations is to work with the business and understand which systems are critical and which can tolerate more data loss or recovery outages.
And by appropriately designing systems to match the true business requirements you can deliver a system which meets those requirements in a cost effective way.
Now at this point that's everything I want to cover about RPO and RTO at a high level.
If you're doing one of my AWS courses, as you're going through the course, consider how you think the products and services being discussed would affect the RPOs and RTOs of systems designed using those products.
Where appropriate, I'll discuss exactly how features of those products can influence RPO and RTO values.
At this point though that's everything I wanted to cover in this video.
Thanks for watching.
Go ahead and complete the video and when you're ready I'll look forward to you joining me in the next.
learn.cantrill.io
Welcome back.
In this fundamentals video, I want to briefly talk about Kubernetes, which is an open source container orchestration system.
You use it to automate the deployment, scaling and management of containerized applications.
At a super high level, Kubernetes lets you run containers in a reliable and scalable way, making efficient use of resources, and lets you expose your containerized applications to the outside world or your business.
It's like Docker, only with robots automating everything and super intelligence doing all of the thinking.
Now, Kubernetes is a cloud agnostic product, so you can use it on premises and within many public cloud platforms.
Now, I want to keep this video to a super high level architectural overview, but that's still a lot to cover.
So let's jump in and get started.
Let's quickly step through the architecture of the Kubernetes cluster.
A cluster in Kubernetes is a highly available cluster of compute resources, and these are organized to work as one unit.
The cluster starts with a cluster control plane, which is the part which manages the cluster.
It performs scheduling, application management, scaling and deployment, and much more.
Compute within a Kubernetes cluster is provided via nodes, and these are virtual or physical servers, which function as a worker within the cluster.
These are the things which actually run your containerized applications.
Running on each of the nodes is software, and at minimum this is containerd or another container runtime, which is the software used to handle your container operations.
And next we have the kubelet, which is an agent used to interact with the cluster control plane.
The kubelet on each of the nodes communicates with the cluster control plane using the Kubernetes API.
Now, this is the top level functionality of the Kubernetes cluster.
The control plane orchestrates containerized applications which run on nodes.
But now let's explore the architecture of control planes and nodes in a little bit more detail.
On this diagram, I've zoomed in a little.
We have the control plane at the top and a single cluster node at the bottom, complete with the minimum container runtime and kubelet software running for control plane communications.
Now let's step through the main components which might run within the control plane and on the cluster nodes.
Keep in mind, this is a fundamental level video.
It's not meant to be exhaustive.
Kubernetes is a complex topic, so I'm just covering the parts that you need to understand to get started.
Now, the cluster will also likely have many more nodes.
It's rare that you only have one node unless this is a testing environment.
Now, first, I want to talk about pods, and pods are the smallest unit of computing within Kubernetes.
You can have pods which have multiple containers and provide shared storage and networking for those containers.
But it's very common to see a one-container, one-pod architecture, which as the name suggests, means each pod contains only one container.
Now, when you think about Kubernetes, don't think about containers.
Think about pods.
You're going to be working with pods and you're going to be managing pods.
The pods handle the containers within them.
Architecturally, you would generally only run multiple containers in a pod when those containers are tightly coupled and require close proximity to one another.
Additionally, although you'll be exposed to pods, you'll rarely manage them directly.
Pods are non-permanent things.
In order to get the maximum value from Kubernetes, you need to view pods as temporary things which are created, do a job, and are then disposed of.
Pods can be deleted when finished, evicted for lack of resources, or the node itself fails.
They aren't permanent and aren't designed to be viewed as highly available entities.
There are other things linked to pods which provide more permanence, but more on that elsewhere.
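Tying this back to the YAML section earlier, a minimal one-container pod definition has this shape, expressed here as the equivalent Python structure, with the names and image purely illustrative:

```python
# The YAML manifest:
#   apiVersion: v1
#   kind: Pod
#   metadata:
#     name: catapp
#   spec:
#     containers:
#       - name: catapp
#         image: nginx:1.25
# ...is a dictionary whose spec.containers value is a list holding a
# single item: one container, one pod.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "catapp"},
    "spec": {
        "containers": [
            {"name": "catapp", "image": "nginx:1.25"},
        ],
    },
}
assert len(pod["spec"]["containers"]) == 1
```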
So now let's talk about what runs on the control plane.
Firstly, I've already mentioned this one, the API, known formally as the kube-apiserver.
This is the front end for the control plane.
It's what everything generally interacts with to communicate with the control plane, and it can be scaled horizontally for performance and to ensure high availability.
Next, we have etcd, and this provides a highly available key value store.
So a simple database running within the cluster, which acts as the main backing store for data for the cluster.
Another important control plane component is the kube-scheduler, and this is responsible for constantly checking for any pods within the cluster which don't have a node assigned.
And then it assigns a node to that pod based on resource requirements, deadlines, affinity, or anti-affinity, data locality needs, and any other constraints.
Remember, nodes are the things which provide the raw compute and other resources to the cluster, and it's this component which makes sure the nodes get utilized effectively.
Next, we have an optional component, the Cloud Controller Manager, and this is what allows Kubernetes to integrate with any cloud providers.
It's common that Kubernetes runs on top of other cloud platforms such as AWS, Azure, or GCP, and it's this component which allows the control plane to closely interact with those platforms.
Now, it is entirely optional, and if you run a small Kubernetes deployment at home, you probably won't be using this component.
Now, lastly, in the control plane is the kube-controller-manager, and this is actually a collection of processes.
We've got the node controller, which is responsible for monitoring and responding to any node outages, the job controller, which is responsible for running pods in order to execute jobs, the endpoint controller, which populates endpoints in the cluster, more on this in a second, but this is something that links services to pods.
Again, I'll be covering this very shortly.
And then the service account and token controller, which is responsible for account and API token creation.
Now, again, I haven't spoken about services or endpoints yet, just stick with me.
I will in a second.
Now, lastly, on every node is something called kube-proxy, and this runs on every node and coordinates networking with the cluster control plane.
It helps implement services and configures rules allowing communications with pods from inside or outside of the cluster.
You might have a Kubernetes cluster, but you're going to want some level of communication with the outside world, and that's what Cube Proxy provides.
Now, that's the architecture of the cluster and nodes in a little bit more detail, but I want to finish this introduction video with a few summary points of the terms that you're going to come across.
So, let's talk about the key components.
So, we start with the cluster, and conceptually, this is a deployment of Kubernetes.
It provides management orchestration, healing, and service access.
Within a cluster, we've got the nodes which provide the actual compute resources, and pods run on these nodes.
A pod is one or more containers, and it's the smallest admin unit within Kubernetes, and often, as I mentioned previously, you're going to see the one container, one pod architecture.
Simply put, it's cleaner.
Now, a pod is not a permanent thing, it's not long-lived.
The cluster can and does replace them as required.
Services provide an abstraction from pods, so the service is typically what you will understand as an application.
An application can be containerized across many pods, but the service is the consistent thing, the abstraction.
Service is what you interact with if you access a containerized application.
Now, we've also got a job, and a job is an ad hoc thing inside the cluster.
Think of it as the name suggests, as a job.
A job creates one or more pods, runs until it completes, retries if required, and then finishes.
Now, jobs might be used as back-end isolated pieces of work within a cluster.
Now, something new that I haven't covered yet, and that's Ingress.
Ingress is how something external to the cluster can access a service.
So, you have external users, they come into an Ingress, that's routed through the cluster to a service, the service points at one or more pods, which provides the actual application.
So, Ingress is something that you will have exposure to when you start working with Kubernetes.
And next is an Ingress controller, and that's a piece of software which actually arranges for the underlying hardware to allow Ingress.
For example, there is an AWS load balancer, Ingress controller, which uses application and network load balancers to allow the Ingress.
But there are also other controllers such as Nginx and others for various cloud platforms.
Now, finally, and this one is really important, generally it's best to architect things within Kubernetes to be stateless from a pod perspective.
Remember, pods are temporary.
If your application has any form of long-running state, then you need a way to store that state somewhere.
Now, state can be session data, but also data in the more traditional sense.
Any storage in Kubernetes by default is ephemeral, provided locally by a node, and thus, if a pod moves between nodes, then that storage is lost.
Conceptually, think of this like instance store volumes on AWS EC2.
Now, you can configure persistent storage known as persistent volumes or PVs, and these are volumes whose lifecycle lives beyond any one single pod, which is using them.
And this is how you would provision normal long-running storage to your containerized applications.
Now, the details of this are a little bit beyond this introduction level video, but I wanted you to be aware of this functionality.
OK, so that's a high-level introduction to Kubernetes.
It's a pretty broad and complex product, but it's super powerful when you know how to use it.
This video only scratches the surface.
If you're watching this as part of my AWS courses, then I'm going to have follow-up videos which step through how AWS implements Kubernetes with their EKS service.
If you're taking any of the more technically deep AWS courses, then there may be other deep-dive videos into specific areas that you need to be aware of.
So there may be additional videos covering individual topics at a much deeper level.
If there are no additional videos, then don't worry, because that's everything that you need to be aware of.
Thanks for watching this video.
Go ahead and complete the video, and when you're ready, I look forward to you joining me in the next.
learn.cantrill.io
Welcome back in this video I want to talk about the DNS signing ceremony.
If you're imagining confetti and champagne right now, it's the opposite of that kind of ceremony.
This ceremony is all about controlling the keys to the internet, more specifically the trust anchor of the DNS system.
It's one of the most important meetings which occur in the technical space.
Pretty much everything that you use on the internet is enabled by the technical act which occurs within the ceremony.
Now before I cover what the ceremony is, we need to understand why anything like this is needed.
Trust within a DNS zone is normally provided via the parent zone of that zone.
The parent zone has a DS record which is a hash of the public key signing key of the child zone, and that's how the trust chain is created.
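As a rough sketch of that relationship (real DS records digest the full DNSKEY record including owner name and flags, and support several digest algorithms; this only shows the idea):

```python
import hashlib

# The child zone publishes a public key signing key (a DNSKEY record).
child_public_ksk = b"example-zone public KSK bytes (illustrative)"

# The parent zone publishes a digest of it as a DS record.
ds_digest = hashlib.sha256(child_public_ksk).hexdigest()

# A validating resolver fetches the child's DNSKEY, hashes it, and
# checks the result against the DS record signed by the parent.
fetched = b"example-zone public KSK bytes (illustrative)"
assert hashlib.sha256(fetched).hexdigest() == ds_digest
```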
In the case of the root zone, there is no parent zone and this means there's nothing to provide that trust.
And so a more rigorous process is required, something which is secure enough that the output can be absolutely trusted by every DNSSEC resolver and client.
And we refer to this concept as a trust anchor.
Locked away within two secure locations, one in California and another in Virginia, is what amounts to the keys of the internet.
The private DNS root key signing key known as a KSK.
Now it's impossible to overstate how important this set of keys is to the internet.
They rarely change and the trust in them is hard coded into all DNSSEC clients.
With them you can define what's valid on the DNSSEC root zone.
Because of this, they influence every child top level domain, every zone inside those, and every DNS record.
And so these keys are locked away, protected and never exposed, stored in redundant hardware security modules which are themselves redundant across the two physical locations.
Now I'll detail this more in a second.
Access to the private keys is controlled via the fact that HSMs are used.
You can only use the HSMs in tightly controlled ways.
The keys never leave those HSMs, those HSMs never leave those locations.
And you can only use them within those locations if you have the right group of people.
And people can only get into those locations after going through a rigorous multi-stage ID process.
Now why this is important is because we all know a public part of this key.
It's part of the DNS key record set within the root zone along with the public zone signing key.
To restate this, every DNSSEC client and resolver on the planet explicitly trusts this key, this key signing key.
And if we have this public root zone key signing key, we can verify anything signed by the private key, the one that's locked away within the hardware security modules.
Because the security of the private root key signing key is so tight, it's not practical to use constantly.
And so there's another key pair which controls the security of the DNS root zone.
This is known as the root zone ZSK or zone signing key.
The whole function of this massively controlled ceremony is to take the root zone ZSK into the ceremony, sign it with the private root zone KSK within these hugely tightly controlled conditions, and then produce as an output the RRSIG for the root zone DNSKEY record set.
This single record is why DNS from this level down through the top level domains and into the domain zones, this is why it's all trusted because the root zone is trusted via this signing process.
Now talking through the detail of the signing ceremony would take too long.
And so I've included a link attached to this video which gives a detailed overview of the ceremonial process.
These ceremonies are public and recorded and links detailing all of this process is included attached to this video.
What I want to do now is to talk in a little bit more detail about why this process is so secure.
The signing ceremony itself takes place, as I mentioned previously, in one of two secure locations.
Now there are a few key sets of people involved.
I'll not show them all on screen, but we have the ceremony administrator, an internal witness, the credential safe controller, the hardware safe controller, and then crypto officer 1, crypto officer 2 and crypto officer 3.
Now there are a total of 14 of these crypto officers.
Seven of them are affiliated with each of the locations and at least three are required to attend for the process to work.
Logistically, ceremony dates and times are arranged so that enough of them are available, to ensure some level of resilience.
The most important part of the whole process hardware wise is the hardware security module or HSM, which is the hardware which contains the root zone private key signing key.
This device is protected and can only be interacted with via the ceremony laptop which is connected to the HSM over ethernet and this is only operated by the ceremony administrator.
The laptop has no battery and no storage.
It's designed to be stateless and only used to perform the ceremony and not store any data afterwards.
Now the HSM device can only be used when crypto officers use their allocated cards.
What's being signed are the public key signing key and zone signing keys.
Well, actually a pack of them, to allow for rotation between this ceremony date and the next.
The HSM device, via the ceremony laptop, then outputs the signatures for these keys, which become the DNSKEY RRSIG records for the DNS root zone.
Now again this process happens every three months.
It's generally broadcast, notes of the process are available and it's publicly audited.
Now this is a summary of the process.
The level of security procedure which goes into ensuring that groups of human participants can't collude and corrupt the process is extreme.
I've included additional links attached to this video which provide more detail if you're interested.
For this video I just want you to have an understanding of why the ceremony is so important.
So during the ceremony, we take the root key signing keys, which everything trusts, but which are too important to be used day to day.
And we use those to sign root zone signing keys, which can be used on a more operational basis, and these can be used to sign individual resource record sets within the root zone.
And it's that public and private zone signing key pair which is then used to create the chain of trust which allows trust in top level domains.
And then those top level domains can pass that trust on to domains and then in domains that trust can be passed to individual resource record sets.
And this all has to happen because we have nothing above the root key signing keys in the DNS.
They're a trust anchor.
Nothing makes us trust them other than the trust itself.
And the ceremony ensures that it's almost impossible to corrupt that trust.
At this point that is everything I wanted to cover in this video.
So go ahead and complete the video and when you're ready I look forward to you joining me in the next.
learn.cantrill.io
Welcome back.
In this video I want to help you understand how the chain of trust works within DNSSEC.
Now we've got a lot to cover, so let's jump in and get started straight away.
I want to first talk about DS records, known as delegation signer records, because this is how DNSSEC creates a chain of trust between parent and child.
On the right we have the ICANN.org zone, the one that we finished with in the previous video.
It contains the public key signing key for that zone and some other DNS stuff.
Now I'm not evaluating the other DNS stuff, but because of the hierarchical architecture of DNS we don't have to think about it at this level.
All that matters when we move up a level are the ICANN.org name servers for normal DNS and the public key signing key for DNSSEC.
On the left of the screen we have the parent zone for ICANN.org, which is the org top level domain zone.
This is one level up in the DNS hierarchy.
In this part of the video I want to focus on how the .org zone delegates to the ICANN.org zone for ICANN.org, and how DNSSEC delegates trust via the trust chain to this child zone.
Now because DNSSEC builds on top of DNS we start with the delegation for DNS.
This is in the form of name server resource records, which point at the name servers hosting the ICANN.org zone.
This just integrates the ICANN.org domain into the DNS infrastructure.
It doesn't provide any of the verification which we're looking for from DNSSEC.
Now there's a matching RRSIG for this RRset which verifies these records, but for the parent-to-child zone trust we need more.
The .org zone needs to explicitly state that it trusts the ICANN.org child zone, and this is done using a delegation signer (DS) record set.
And what this does is store a hash of the child domain's public key signing key in this record in the parent zone.
Since the hash is one way and since they're unique, adding this record shows that the parent zone trusts the child's key signing key for that child zone.
So at this point the .org zone is confirming its trust of the ICANN.org KSK via this record set.
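To make the DS idea concrete, here's a minimal Python sketch of the hash check. It's a simplification: real DNSSEC hashes the wire-format owner name plus the full DNSKEY RDATA (and supports several digest algorithms), and the key bytes below are invented placeholders.

```python
import hashlib

def make_ds_digest(owner_name: str, ksk_public_key: bytes) -> str:
    """Simplified sketch: a DS record stores a one-way hash of the
    child zone's public key signing key."""
    data = owner_name.lower().encode() + ksk_public_key
    return hashlib.sha256(data).hexdigest()

# The parent (.org) stores this digest; a resolver recomputes it from
# the child's published DNSKEY and checks that the two match.
parent_ds = make_ds_digest("icann.org.", b"example-ksk-public-key")
child_side = make_ds_digest("icann.org.", b"example-ksk-public-key")
print(parent_ds == child_side)  # True - the child's KSK is trusted
tampered = make_ds_digest("icann.org.", b"attacker-substituted-key")
print(parent_ds == tampered)    # False - substitution is detectable
```

Because the hash is one way, publishing it in the parent reveals nothing sensitive, but any substitution of the child's key signing key produces a mismatch.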
And like everything else relating to DNSSEC, we need a mechanism to validate this DS record set.
This means a matching RRSIG, which is a digital signature of the DS RRset, made using the .org zone's private zone signing key.
At this point all of the records can be validated within .org and we can confirm the trust of ICANN.org.
Now we just need to add the records which store the .org zone's public keys.
So this means a DNSKEY record set containing the .org public zone signing and key signing keys.
And these are used to validate RRSIGs in the zone.
And this DNSKEY RRset needs a matching RRSIG, which is created by signing the DNSKEY RRset with the zone's key signing key.
At this point if you trust the .org zone's public key signing key then you trust everything in the .org zone including the trust delegation to the ICANN.org child zone.
And how do we trust this? In the same way that the child ICANN.org zone was trusted by the .org zone.
Trust is given from the parent zone which in this case is the root zone.
This is the process by which a parent zone trusts a child: via the DS record set, which contains a hash of the child zone's key signing key.
Now if we zoom out a little bit I want to talk about the DNSSEC validation flow.
So how we build this chain of trust between the different levels of the DNS hierarchy.
So the trust between levels from child to parent is created via public key signing keys and the DS record sets in the parent zone.
So the ICANN.org zone has a public key signing key.
The .org zone validates this by having a DS record and storing a hash of that key signing key in that record set.
The .org zone has its own key signing key.
And the .org zone is trusted because a hash of this key is stored in the corresponding DS record in the root zone.
At the DNS root zone level we hit a problem because there's no parent zone to provide this trust.
The root zone has a public key signing key, and this and the corresponding private root key signing key are explicitly trusted.
I'll talk about how this explicit trust works and what happens in the video dedicated to the key signing ceremony.
But at this stage just take it as fact that the root key signing keys are trusted explicitly.
They're known as a trust anchor and every DNSSEC capable client or resolver by default trusts these keys.
Because there's no parent zone to the root zone this has to be a trust anchor.
Something that just is.
So because we trust the root key signing keys it means that a DNSSEC capable resolver can follow this chain from root through to the DNS record set.
So the root zone key signing key has signed the root zone zone signing key.
This has signed the .org DS record in the root zone.
So this can now be cryptographically validated.
This record matches the public key signing key in the org zone.
And so this too can be cryptographically validated.
Inside the org zone the key signing key has signed the zone signing key and this has signed record sets including the DS record set for ICANN.org inside the .org zone.
And so this also can be validated.
This contains a hash of the public key signing key for the ICANN.org zone.
And so lastly this too can also be cryptographically validated.
And this is how you build a chain of trust from root through to record sets.
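That walk from root down to record sets can be modelled as a toy Python sketch. Everything here is invented: the "keys" are plain byte strings, and a DS digest is just a hash of the child's KSK. Real DNSSEC also signs each DS record set with the parent's zone signing key, which this sketch leaves out.

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Toy model: each zone publishes a key signing key, and each parent
# publishes a DS digest of its child's KSK. The root's own KSK is the
# trust anchor, so it has no DS record above it.
zones = {
    "org.":       {"ksk": b"org-ksk"},
    "icann.org.": {"ksk": b"icann-ksk"},
}
ds_records = {
    "org.":       h(zones["org."]["ksk"]),        # stored in the root zone
    "icann.org.": h(zones["icann.org."]["ksk"]),  # stored in the .org zone
}

def validate_chain(target: str) -> bool:
    """Walk down the hierarchy (org., then icann.org.), checking each
    DS digest against the child zone's published KSK."""
    labels = target.rstrip(".").split(".")
    for i in range(len(labels) - 1, -1, -1):
        zone = ".".join(labels[i:]) + "."
        if ds_records[zone] != h(zones[zone]["ksk"]):
            return False
    return True

print(validate_chain("icann.org."))           # True - chain intact
zones["icann.org."]["ksk"] = b"attacker-ksk"  # swap the child's key
print(validate_chain("icann.org."))           # False - mismatch detected
```

The point of the sketch is that a single substituted key anywhere in the chain breaks validation for everything beneath it.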
We start with the trust anchor that's created during the key signing ceremony that I'm going to be talking about in a separate video.
And then at every level of the DNS hierarchy, by using DS record sets, so delegation signers, we create a hash of the key signing key of the child domain, and this creates the trust from parent to child.
And we follow that hierarchical trust all the way down to the actual record sets.
And this is how we can trust the contents of all the TLDs.
This is how the TLDs trust any domain zones.
And this is how we can trust the contents of any record sets within those domain zones.
And all of this can be cryptographically validated at every step.
And this is one of the major benefits that DNSSEC provides.
Now at this point, that's everything I wanted to talk about in this video.
In the next video I'm going to be detailing exactly how the key signing ceremony works and why this is such a critical event for the security of DNS using DNSSEC.
But at this point you can go ahead and complete this video.
And when you're ready, I look forward to you joining me in the next.
learn.cantrill.io
Welcome back.
In this video, I want to step through how DNSSEC works inside a zone, specifically how it allows a DNSSEC resolver or client to validate any resource records within a zone.
This video is focusing on the data integrity part of DNSSEC.
And coming up after this is another video where I'll cover the chain of trust and origin authentication benefits which DNSSEC provides in a lot more detail.
Now, because this video covers digital signing within DNSSEC, it's important that you've watched my previous videos on DNS, on hashing, and on digital signing.
If you haven't, all of those videos will be linked in the description.
If you have, then let's jump in and get started.
Now, to understand DNSSEC, I first need to introduce a term, and that's a resource record set or RR set.
Let's look visually at what this is.
We'll start with the DNS zone of icann.org, and I'm using this as an example in this lesson as it's one which I know to be DNSSEC enabled.
Now, inside this zone, we have a number of resource records.
First, www.icann.org, which is a CNAME record.
And remember, CNAMES point at other records.
And in this case, it points to two other records.
One of them is an A record, so IP version 4, and the other is an AAAA record, which is IP version 6.
Now, finally, we have four MX records for the domain pointing at four different mail exchange servers.
Now, each of these are resource records.
I'm showing a total of seven.
So what's a resource record set?
Well, a resource record set or RR set is any records of the same name and the same type.
So in the case of the left three, this means each of them is their own RR set.
But in the case of the MX records, they all have the same name, so icann.org, and they're all MX records.
And this means that all four of these are inside one RR set.
RR sets are just sets of resource records.
They make it easier to deal with records in groups versus single records.
And in the case of DNSSEC, it keeps things manageable in other ways, but more on that in a second.
Now, this is what an RR set might look like if you actually interact with DNS.
Notice how all the names are the same and all the types are the same.
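The same-name-and-same-type grouping rule is easy to show in Python. The record values below are placeholders (documentation IP addresses and invented mail server names), not ICANN's real records.

```python
from collections import defaultdict

# Toy records as (name, type, value) tuples, mirroring the example zone.
records = [
    ("www.icann.org.", "CNAME", "www.vip.icann.org."),
    ("www.vip.icann.org.", "A", "192.0.2.7"),       # placeholder address
    ("www.vip.icann.org.", "AAAA", "2001:db8::7"),  # placeholder address
    ("icann.org.", "MX", "mx1.example."),
    ("icann.org.", "MX", "mx2.example."),
    ("icann.org.", "MX", "mx3.example."),
    ("icann.org.", "MX", "mx4.example."),
]

# An RRset is every record sharing the same name AND the same type.
rrsets = defaultdict(list)
for name, rtype, value in records:
    rrsets[(name, rtype)].append(value)

print(len(rrsets))                        # 4 RRsets from 7 records
print(len(rrsets[("icann.org.", "MX")]))  # 4 - all the MX records, one RRset
```

Seven records collapse into four RRsets, because the four MX records share one name and one type.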
So why do you need to know this?
Well, because RRsets are used within DNSSEC.
Right now, there's no way to tell if any of these resource records are valid.
DNSSEC provides this functionality, but it's not individual resource records which are validated by DNSSEC.
It's resource record sets, or RRsets.
Now, let's take a look at how this works.
So DNSSEC allows us to validate the data integrity of record sets within DNS.
It doesn't work on individual records; rather, it works on sets of records, RRsets.
Let's take a look at how.
So we start with the same icann.org zone, and inside here I'm going to step through one RRset example: the set of four resource records which make up the MX RRset.
Right now, without DNSSEC, if a bad actor found a way to change these records, or make you think that they'd changed, then email delivery could, in theory, be redirected.
DNSSEC helps prevent that using two features.
First, we have RRSIG records, which store a digital signature of an RRset, created using a public and private pair of keys.
This key pair is known as the zone signing key, or ZSK.
The private part of this key pair is sensitive and it's not actually stored within the zone.
It's kept offline, it's kept separated, but you need to know that it exists.
Like any private key, you need to keep this key safe and not accessible from the public domain.
So once again, an RRSIG contains a digital signature of an RRset.
So we take the RRset, which is plain text, and we run it through a signing process; let's call this the Digital Signature-atron 9000.
In reality, it's just a standard cryptographic process.
This process uses the private part of the ZSK to create a signature, and this is why it's important to keep the private part of this key safe.
This output, the signature, can be stored alongside the plain text RRset in the zone using the same name, but the record type is RRSIG.
Any normal DNS clients will only see the RRset; any DNSSEC clients will see the RRset and the corresponding RRSIG.
Now this uses digital signing and hashing.
If the RRset changes, the RRSIG has to be regenerated in order to be valid.
If the RRset changes without a corresponding change to the RRSIG, the result is an invalid signature.
And so you can tell if anything has changed without the approval of the person controlling the private zone signing key.
And this is because only the private part of the zone signing key can be used to sign RRsets, creating an RRSIG.
Assuming that you trust that the private zone signing key is in safe hands, then you know that if there's a valid RRSIG for a corresponding RRset, that RRset is in a valid state created by the zone admin, and if it changes, you can tell.
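Here's a minimal Python sketch of that sign-and-detect behaviour. One big caveat: real DNSSEC uses asymmetric key pairs (RSA or ECDSA), where only the private key can sign and anyone holding the public key can verify; this sketch substitutes symmetric HMAC purely so it runs with the standard library, and the key and records are invented.

```python
import hashlib
import hmac

# Stand-in "private zone signing key" - see the caveat above.
private_zsk = b"zone-signing-key-private-part"

def canonical(rrset):
    # Sort the records so that ordering never changes the signature.
    return "\n".join(",".join(r) for r in sorted(rrset)).encode()

def make_rrsig(rrset):
    return hmac.new(private_zsk, canonical(rrset), hashlib.sha256).hexdigest()

def verify(rrset, rrsig):
    return hmac.compare_digest(make_rrsig(rrset), rrsig)

mx_rrset = [("icann.org.", "MX", "mx1.example."),
            ("icann.org.", "MX", "mx2.example.")]
rrsig = make_rrsig(mx_rrset)            # stored alongside the RRset
print(verify(mx_rrset, rrsig))          # True - RRset unchanged

mx_rrset[0] = ("icann.org.", "MX", "evil.example.")  # tamper with it
print(verify(mx_rrset, rrsig))          # False - tampering detected
```

The thing to notice is that any change to the RRset invalidates the stored signature unless whoever holds the private key re-signs it.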
Now the important question is, how can a DNS client or resolver verify the RRSIG?
For that, there's another piece to the DNSSEC puzzle.
We need the public part of the zone signing key to be able to verify signatures, or RRSIGs, created using the private part.
Lucky for us, public parts of the key pairs aren't sensitive and don't need to be guarded.
We just need a way to make them available.
So consider this scenario, we have the same ICANN.org domain.
We also have a DNSSEC resolver here at the bottom.
How do we know it's a DNSSEC resolver?
You'll just have to trust me, but it is a super smart resolver.
Now inside the zone, we have the MX RRset for the ICANN.org zone.
We also have the MX RRSIG, which remember is a signature for the RRset created using the private part of the zone signing key.
Inside a DNSSEC-enabled zone, you'll find another record, the DNSKEY record.
The DNSKEY record stores public keys.
These public keys can be used to verify any RRSIGs in the zone.
DNSKEY records can store the public key for the zone signing key, so the ZSK, and also a different type of key, the key signing key, or KSK.
But more on this in a second.
We're going to do this step by step.
This is what a DNSKEY record might look like, and because it can store different public keys, there's a flag value.
A value of 256 means that it's a zone signing key, and a value of 257 means it's a key signing key.
So the top one here, this is the zone signing key, and it's this value, which is the last piece of the puzzle.
It means the DNS resolver can take the RRset, which remember is the plaintext part, and using this together with the matching RRSIG and the DNSKEY record, it can verify both that the RRSIG matches the RRset, and that the signature was generated using the private part of the zone signing key.
And the result of this is that our DNSSEC-capable resolver at the bottom can verify that the RRset is valid and hasn't been compromised.
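Those two flag values aren't arbitrary; they come from individual bits of the 16-bit DNSKEY flags field, and a resolver can decode them with a tiny helper like this sketch.

```python
def dnskey_role(flags: int) -> str:
    """Interpret the DNSKEY flags field: 256 marks a zone signing key,
    and 257 (the same zone-key bit plus the Secure Entry Point bit)
    marks a key signing key."""
    ZONE_KEY = 0x0100  # bit 7 of the flags field
    SEP = 0x0001       # bit 15, the Secure Entry Point flag
    if not flags & ZONE_KEY:
        return "not a zone key"
    return "KSK" if flags & SEP else "ZSK"

print(dnskey_role(256))  # ZSK
print(dnskey_role(257))  # KSK
```

So 257 is just 256 plus one extra bit marking the key as the zone's secure entry point.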
Now this is all assuming, and this is a big assumption, that we trust the DNS key, specifically the zone signing key.
You have to trust that only the zone admin has the private part of the key, and you also have to trust that it's the correct zone signing key.
If you do, you can trust the RRsig and matching RR set are valid.
Now a real-world comparison of this would be to imagine if somebody shows you an ID card which has their photo on it.
The photo ID only proves their identity if you trust the photo ID is real and it was created by a genuine authority entity.
With humans, this trust is easily faked; that's why fake IDs are such a problem.
This isn't a problem with DNSSEC, because as you'll see, we have a chain of verifiable trust all the way to the DNS root.
The DNS key record also requires a signature, and this means a matching RRsig record to validate that it hasn't been changed.
The DNSKEY record, though, is signed with a different key, the key signing key, or KSK, and as the name suggests, this key isn't used for signing records in the zone; instead, it's used for signing keys.
So the zone signing key is used for signing everything in a zone, so to create most RRSIG records, except the DNSKEY records; these are signed by the key signing key, creating the DNSKEY RRSIG record.
Now I get it, I've just introduced another type of key, so let's look at how this all fits together within a DNS zone.
At this point, I want to cover two really important points about DNSSEC.
First, how does a key signing key fit into all this and why do we have them?
And second, what mechanism allows us to trust a zone?
We know that RRSIG records let DNSSEC resolvers verify record sets, but how do we know the keys used within a zone can themselves be trusted?
To illustrate both of these, let's start with the same ICANN.org zone, and then off to the right side, a related area, but containing more sensitive things; this might be a physical key store like a safe, or an HSM device.
What we start with here is the private zone signing key, and this is used together with an RRset to create a corresponding RRSIG record.
Then, the public part of this zone signing key is stored in the DNS key record.
The flag of 256 tells us that it's a zone signing key.
At this point, I want to pause and take a quick detour.
If this was all that we had, so just the DNSKEY record, we couldn't trust it.
Somebody could swap out the DNSKEY record, put a new public key in there, use the private part of that fake key to regenerate a fake RRSIG, adjust the RRset, and take over email for this domain.
We need a way of ensuring the zone signing key is trusted.
If we didn't have some kind of trust chain, we would need to manually trust every zone within DNS.
That would defeat the purpose of having a globally distributed system.
So, the way that this works is that this zone, so ICANN.org, is linked cryptographically to the parent zone, which is .org.
So, just like with normal DNS where name server records are used in the org zone to delegate to domains such as ICANN.org, the org parent zone also has a way to explicitly state that we can trust ICANN.org as a zone.
And I'll talk about exactly how this works in the next video.
For now, I want to focus on this zone.
Now, if we use a single key, so just the zone signing key, that would work.
But this would mean if we ever wanted to change the zone signing key, then we would have to involve the .org parent zone.
Best practice is that we want to be cycling keys fairly often, and doing that where it also requires updates up the chain would become inefficient.
And so we have this new key pair, the key signing key, or KSK.
Now, there's a private part and a public part.
The private part is used to sign DNSKEY record sets, which creates an RRSIG of the DNSKEY record set.
And this makes it easy to change the zone signing key used for a zone.
We just have to regenerate all of the RRSIG records, update the DNS key record set, and then regenerate the RRSIG of the DNS key record set using the private key signing key.
All of this is inside our zone only.
It doesn't involve the parent zone in any way.
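Those rollover steps can be sketched as a toy Python function. HMAC stands in for real asymmetric signing so the sketch needs only the standard library, and all the key material and record data are invented placeholders.

```python
import hashlib
import hmac

def sign(key: bytes, data: bytes) -> str:
    # HMAC substitutes for asymmetric RSA/ECDSA signing in this toy.
    return hmac.new(key, data, hashlib.sha256).hexdigest()

zone_rrsets = {"MX": b"mx-rrset-data", "A": b"a-rrset-data"}
ksk_private = b"key-signing-key"  # its public part is hashed by the parent

def roll_zsk(new_zsk_private: bytes, new_zsk_public: bytes):
    """Everything here happens inside our zone - the parent's DS record
    still points at the unchanged KSK, so no parent update is needed."""
    # 1. Regenerate every RRSIG with the new zone signing key.
    rrsigs = {name: sign(new_zsk_private, data)
              for name, data in zone_rrsets.items()}
    # 2. Update the DNSKEY RRset to publish the new ZSK public part.
    dnskey_rrset = b"ksk-public|" + new_zsk_public
    # 3. Re-sign the DNSKEY RRset with the private key signing key.
    dnskey_rrsig = sign(ksk_private, dnskey_rrset)
    return rrsigs, dnskey_rrset, dnskey_rrsig

rrsigs, dnskey, dnskey_sig = roll_zsk(b"new-zsk-private", b"new-zsk-public")
print(len(rrsigs))  # 2 - one fresh RRSIG per RRset
```

Notice that the KSK private key is only used in the final step, and the parent's DS record, a hash of the unchanged KSK, never needs to change.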
We store the public part of the key signing key in the DNS key record set.
But now we have a new problem.
How can we trust the key signing key?
Well, spoiler, it's referenced from the parent zone, in this case .org.
So remember how I said that the DNS key record set stored both zone signing and key signing public keys?
Well, this is the main point of trust for the zone.
This is how trust is conveyed into the zone.
Because the parent zone references the public key signing key of our zone, assuming we can trust the .org parent zone, we can trust our zone's key signing key.
This key signing key signs our zone signing key, and the zone signing key signs all of the RRsets to create RRSIGs.
We have a chain of trust created between two different layers of DNS, specifically DNSSEC.
Now, we're at the point now where you should understand how DNSSEC validates things within a single zone.
How it uses RRSIGs to validate RRsets, and how it uses the DNSKEY records to get the public keys used to perform that validation with the zone signing key.
How a key signing key is used to create an RRSIG of the DNSKEY record set, which allows the validation of that DNSKEY record.
And how the parent domain or parent zone trusts the public key signing key at the child domain or zone.
Now, you don't know how this trust occurs yet.
That's something I'm going to be talking about in the next video.
I've also stepped through why two different keys are needed.
Using a zone signing key for signing within a zone, and a key signing key for signing that key, allows an admin split.
The key signing key can be referenced from the parent zone, while the zone signing key is used exclusively within the zone.
And this means that a zone signing key can be changed without requiring any changes to the parent zone.
So what position does that put us in?
With this functionality using DNSSEC, what have we gained in the way of additional security?
Well, we can now verify the integrity of data within this specific DNS zone.
So, we've eliminated the DNS cache poisoning example.
Assuming we trust the key signing key, and also assuming the key signing key hasn't been changed as part of an exploit, then we can trust all of the information contained within a zone.
Now, in the next video, I'm going to step you through exactly how DNSSEC creates this chain of trust.
And this allows a parent zone to indicate that it trusts the key signing key used within a child zone.
And the same architecture at every level of DNSSEC means that we can create this entire end-to-end chain of trust, which can be verified cryptographically.
Now, at this point, that's all I wanted to cover in this video.
In the next video, I'm going to step through how this trust from a parent zone to a child zone is handled within the DNSSEC hierarchy.
And we'll go through how the query flow works step by step.
For now, though, go ahead and complete this video, and when you're ready, I look forward to you joining me in the next.
learn.cantrill.io
Welcome back, and in this video of my DNS series I want to talk about DNSSEC.
Now this is the first video of a set which will cover DNSSEC, which provides us with a lot of additional functionality and so requires a few dedicated videos.
Now to get the most from this video, it's important that you've watched my previous videos on DNS, as well as on hashing and digital signing.
If you haven't, all the videos that you need will be linked in the description, and you should watch those first or have equivalent knowledge.
DNSSEC is a security add-on for DNS; it provides additional functionality.
In this video I want to set the scene by talking about why we need DNSSEC, so let's jump in and get started.
Now I promise this bit will be super quick but I do need to use some bullet points.
I hate bullet points as much as anyone but sometimes they're just the quickest way to present information so let's go and please stick with me through this first set of important points.
Now DNSSEC provides two main improvements over DNS.
First, data origin authentication, and this allows you to verify that the data that you receive is from the zone that you think it is.
Are the records returned to you for Netflix.com really from the real Netflix.com zone?
If you're looking at cached results, are they really from that original zone?
Second, DNSSEC provides data integrity protection, so has the data that you receive been modified in any way since it was created by the administrator of the zone?
So if you have Netflix.com data, is that the same unchanged Netflix.com data which the administrator of the Netflix.com zone created?
Now it does both of these things by establishing a chain of trust between the DNS root and DNS records, and it does this in a cryptographically verifiable way.
Where DNS has some major security holes, DNSSEC uses public key cryptography to secure itself, in a similar way to how HTTPS and certificates secure the HTTP protocol.
It means that at each stage you can verify whether a child zone has the trust of a parent zone, and you can verify that the data contained within that zone hasn't been compromised.
Now another really critical part of DNSSEC to understand, before we touch upon why it's needed, is the fact that it's additive.
It adds to DNS; it doesn't replace it.
It's more accurate to think of any queries that you perform as either using DNS on its own, or DNS plus DNSSEC.
Conceptually, imagine DNS at the bottom with DNSSEC layered on top.
Now in a situation where no DNS exploits have taken place, it means that the results will be largely the same between DNS and DNSSEC.
Let's say that we have two devices; the one on the left is DNS only, and the one on the right is DNSSEC capable.
When querying the same DNS name servers, the DNS-only device will only receive DNS results.
It won't be exposed to DNSSEC functionality, and this is critical to understand, because this is how DNSSEC achieves backward compatibility.
A DNSSEC-capable device, though, can do more than just normal DNS, so it still makes queries and gets back DNS results, but it also gets back DNSSEC results, and it can use these DNSSEC results to validate the DNS results.
Assuming no bad actors have corrupted DNS in any way, then this will go unnoticed, since the results are the same; but consider a scenario where we have changed DNS data in some way.
So again we have two devices; the device on the left is DNSSEC capable, and the one on the right is standard DNS.
So the DNS-only device performs a query, and it thinks it's getting back the genuine website result, only it isn't.
In an exploited environment, it will be unaware that the results it gets back from its queries are bad, and so the website it browses to might not actually be the one that it expects.
With DNSSEC, the initial query will occur in the same way, and even though the result is corrupt, it will look valid; but what follows is that DNSSEC will verify the result, and because public-private key cryptography is used together with a chain-of-trust architecture, DNSSEC will be able to identify that records have changed, or that they come from an origin which isn't the one that we're querying.
Now it's important to understand that DNSSEC doesn't correct anything; it only allows you to validate whether something is genuinely from a certain source or not, and whether it's been altered or not. It doesn't show you what the result should be, but in most cases it's enough to know whether the integrity of something is valid or in doubt.
Now it might help you to understand one of the common risks posed by normal DNS if we step through it visually so let's do that.
Consider this architecture. We have Bob on the left, who's about to perform a query for the IP address of categorum.io using this resolver server, but we also have a bad actor, Evil Bob, at the bottom. In advance, to perform this exploit, he performs a query for categorum.io, and this begins the process of walking the tree. But while that's happening, during this process where the resolver is walking the tree to get the true result, Evil Bob responds with a fake response; he pretends to be the real server. So even while the real process is continuing, Evil Bob enters false information into the resolver server in the middle, and this result is now cached; the cache has been poisoned with bad data. This means that when Bob queries for some categorum.io records, he's going to get the poisoned result. This result is going to be returned to Bob, and the effect of this is that Bob is going to be directed at something that he thinks is categorum.io, but isn't. Now this is just one way that DNS can be disrupted; it's oversimplified, and there are some protections against it, but it illustrates how DNS isn't secure. It was built during a time period where the internet was viewed as largely friendly, rather than the adversarial zone which it now is from a security perspective.
Now at this point I just want to switch across to my command prompt and show you how a normal DNS query differs from a DNSSEC query, so let's go ahead and do that. Okay, so we've moved across to my terminal, and I'm just going to go ahead and use the dig utility, which is a DNS utility, and I'm going to perform a normal DNS query. So this is the command: dig, space, www.icann.org. And if I run this query, this is the result that we receive; it's this answer section which I want to focus on. Now just to reiterate, this is the query that I performed for this DNS name. So in the answer section we have a result for www.icann.org, and the result is that it's a CNAME, and the CNAME points at another DNS record, in this particular case www.vip.icann.org. So directly below we can see this DNS record, so www.vip.icann.org; this time it's an A record. A records point at IP version 4 addresses, and this is the IP version 4 address which corresponds to this DNS name, and this DNS name maps back to our original query. Now because this is normal DNS, we have no method of validating the integrity of this data. You can see here that I'm querying this DNS server, so 8.8.8.8, and this is not a DNS server that's affiliated with the ICANN organization, so this result is not authoritative. It's possible that this data is not valid, either by accident or because it's been deliberately manipulated.
Now DNSSEC helps us to fix this risk, and let me show you how. I'm going to start by clearing the screen, and then I'm going to run the same query but using DNSSEC, and I can do that using this command, adding this additional option of +dnssec on the end. When I run this, I receive both DNS and DNSSEC results. So www.icann.org is a CNAME, and it points at this DNS record; slightly below it we can see the record that it points at. This is an A record, and just as with the previous query results, it points at this IP version 4 address. Now what you'll notice is that for each of these normal DNS results, we also have this RRSIG, and this is a DNSSEC resource type. This is basically a digital signature of the record that it corresponds to, and I'll show you in the next video how this digital signature can be used to validate the normal DNS data that's stored within this zone. So we can query for normal DNS results, and then validate the integrity of those results using DNSSEC. Now in this part of this lesson, I just wanted to demonstrate exactly how a DNSSEC query result differs from a normal DNS result. In the next video I'm going to expand on this and step you through exactly how these signatures work within a DNS zone. At this point, let's move back to the visual.
Okay, so what I just demonstrated is a way to avoid this kind of attack, because even if a cache was poisoned, a DNSSEC-capable resolver would be able to identify the poisoned data, and that alone is a huge improvement over standard DNS.
So at this point I hope you have a good idea of some of the ways which DNSSEC improves normal DNS.
That's it for this video and the next one we're going to explore exactly how DNSSEC works in detail.
Now it's a lot to get through so I wanted to make sure that each different area of DNSSEC functionality has its own dedicated video.
At this point though, thanks for watching, go ahead and complete this video and when you're ready I look forward to you joining me in the next.
learn.cantrill.io
Welcome back and in this fourth part of this video series I want to cover how a domain is registered.
So what entities are involved and exactly how this new domain is integrated with DNS.
So let's jump in and get started.
The process of registering a domain includes a few key entities.
First, the person registering the domain.
Secondly, the domain registrar.
Examples of this might include Route 53 or Hover.
Next is the DNS hosting provider.
Examples again include Route 53 or Hover.
We have the TLD registry.
In the case of the .com TLD this is Verisign, and then finally the .com TLD zone, managed by the same entity.
Now one really important and often confusing element to this is the difference between the registrar and the DNS hosting provider.
These are different things, different functions.
The registrar has one function: to let you purchase domains.
And to allow this they have a relationship with the TLD registry for many top level domains.
You might use Hover for example to purchase .com domains, .io domains, .org domains and so on.
And for each of these they will communicate with a different TLD registry.
So the registrar lets you register i.e. purchase domains.
A DNS hosting provider operates DNS name servers which can host DNS zones, and they allow you to manage the content of those zones.
Now why this is confusing is that some companies are only registrars, some companies are only DNS hosting providers but some like Hover and Route 53 can do both.
If you know AWS and you work with the Registered Domains area of the Route 53 console, then this is the domain registrar function.
If you work in the Hosted Zones area, this is the DNS hosting provider function.
Try and think of them as two separate things because it makes the explanation of domain registration much more logical.
Step number one in the domain registration process assuming the domain is available is that we pay for the domain via the domain registrar.
Examples include GoDaddy, Route 53, Hover and many more.
It's at this point that we're going to need a DNS zone for the domain being registered and this zone needs to be hosted on some DNS name servers.
So if the DNS hosting provider is the same company as the registrar the zone is created and hosted automatically.
If it's a different company you'll be asked for the name server information where the zone is hosted already and this has to be configured separately.
So at this point we have a domain being registered.
We have a DNS zone ready to go hosted on some name servers and we have all the networking information for those name servers.
So next the registrar communicates this to the registry for the TLD.
In the case of the .com TLD this is Verisign.
Next, assuming everything is good, Verisign adds all of those details to the .com TLD zone, and at this point the domain is live.
For a domain to be live the name servers which host the zone need to be pointed at from the relevant TLD zone.
If this zone ever changes, for example if it's moved to different name servers, the entries in the TLD zone, so the NS records pointing at the name servers for this domain, need to be changed, and this is how a domain is registered.
The key point really is to understand the two different roles: the domain registrar, who registers the domain with the registry, and the DNS hosting provider, who hosts the zone for the domain on name servers.
Many companies do both but they are conceptually different.
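In data terms, registering a domain and later moving it both come down to the registry updating NS records in the TLD zone. The following is a toy sketch of that idea only; the zone contents, function names and server names are all made up for illustration.

```python
# Toy model of a TLD zone: a mapping of domain -> NS records (made-up names).
com_tld_zone = {}

def register_domain(domain, name_servers):
    """What the registry does once payment and checks clear: add NS records
    pointing at the name servers which host the domain's zone."""
    com_tld_zone[domain] = name_servers

def move_hosting(domain, new_name_servers):
    """Moving to a different DNS host later means updating those same NS records."""
    com_tld_zone[domain] = new_name_servers

register_domain("example.com", ["ns1.hoster-a.example", "ns2.hoster-a.example"])
move_hosting("example.com", ["ns1.hoster-b.example", "ns2.hoster-b.example"])
print(com_tld_zone["example.com"])
```

The real process involves registrar-to-registry protocols and validation, but the end state is exactly this: the TLD zone pointing at the name servers which host the domain's zone.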
At this point thanks for watching that's everything I wanted to cover about registering a domain so go ahead and complete this video and when you're ready I look forward to you joining me in the next.
-
Welcome back and in this video, the third in the series, I want to talk about how DNS works, covering the structure and the flow of making a DNS query.
So let's get started.
Now I want to quickly touch upon the core functionality that DNS brings.
If we abstract away from all the technical details for a second, we have a person, device or service, it has a DNS name and it needs an IP address which provides services for that name.
So in this example we have www.netflix.com and we need the IP address or addresses which we can connect to in order to access Netflix.
Somewhere in the world, there is a DNS zone for Netflix.com which has the answer that you need.
It contains a record which links www.netflix.com to one or more IP addresses.
The issue is how do we find this zone?
Well that's what DNS does.
It's the job of DNS to allow you to locate the specific DNS zone which can provide you with an authoritative answer.
So you can query it and be provided with a response.
The IP address or addresses, in this example, which provide the services for www.netflix.com.
Everything we talk about next, it's all part of the process to allow you to find the correct zone.
So DNS is a huge global distributed database containing lots of different DNS records and the function of DNS is to allow you to locate the specific zone which can give you an authoritative answer.
So let's step through how a query works within DNS.
In this example, imagine that we're sitting at a computer and we're querying www.netflix.com.
The first thing to be checked will be the local DNS cache and host file on the local machine.
The host file is a static mapping of names to IPs and overrides DNS and so it's checked first.
Then the machine and potentially the application being used might have some local DNS caching and so that's checked before we proceed with the query.
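These first local steps are easy to see with Python's standard library; this hedged sketch resolves "localhost", a name which is present in the hosts file on virtually every system, so the lookup succeeds without any external DNS query.

```python
import socket

# Resolve a name the same way applications do: the OS checks the local
# hosts file and any local cache before ever sending a DNS query.
# "localhost" is a hosts-file entry on virtually every system, so this
# works even with no network connection.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```

Querying a real internet name instead, for example www.netflix.com, would fall through these local checks and trigger the resolver process described next.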
Assuming that the local client isn't aware of the DNS name that we're querying this is where we move on to the next step which is where we use a DNS resolver.
A resolver is a type of DNS server often running on a home router or within an internet provider and it will do the query on our behalf.
So we've sent the query to the resolver and now it takes over.
Now from the resolver's point of view, it also has a local cache which is used to speed up DNS queries, and so if anyone else has queried www.netflix.com before, it might be able to return a non-authoritative answer.
Remember, it's non-authoritative because the only servers which can give an authoritative answer are the ones pointed at, for a given domain, from the TLD zone of that domain.
Since the resolver isn't one of those servers, while it can cache results, it will always return them as non-authoritative.
In most cases nobody will care, but it's important to understand the distinction.
Now let's assume that there's no cache entry for www.netflix.com.
Well the next step is that the resolver queries the root zone via one of the root servers.
Every DNS server will have these IP addresses hard coded, and this list is maintained by the operating system vendor.
The DNS root won't be able to answer us because it isn't aware of www.netflix.com but it can help us get one step closer.
The root zone contains records for dot com specifically name server records which point at the name servers for the dot com TLD.
This is how trust is established and how the root zone delegates control of the dot com TLD to Verisign.
So the root servers will return the details of the dot com name servers.
Now this isn't exactly what we're looking for but it does bring us one step closer.
So the resolver can now query one of the dot com name servers for www.netflix.com.
Assuming that the netflix.com domain has been registered the dot com zone will contain entries for netflix.com.
This is how Verisign delegates control of the netflix.com domain when the domain is registered, and so the dot com name servers, while they can't give the resolver the answer that it needs, can help it move one step closer, and so the details of the netflix.com name servers are returned to the resolver.
Well now the resolver can move on and so it queries the netflix.com name servers for www.netflix.com.
Because these name servers host the zone and zone file for this domain, and because they're pointed at by the dot com TLD zone, they are authoritative for this domain, and so they can return an authoritative result to the query back to the resolver.
Now the resolver caches the result in order to improve performance for any of the same queries in future and it returns this result through to the client which is our machine.
This is how every DNS query works; it may be quicker if results are cached, or longer if the full walking-the-tree process needs to occur.
The key facts to keep in mind are these: firstly, no single name server has all the information, not even the root name servers; but, and this is key to how DNS operates, every query to every name server moves you one step closer to the answer.
The root gives you the dot com name servers, the dot com name servers give you the netflix.com name servers, and the netflix.com name servers can give you an authoritative result, and this process end to end is called walking the tree.
Now this is the process from a high level, but technically, how does it look? Well, let's check that out.
We start with the root zone and I've skipped a few steps to step four when the resolver is querying the root zone.
The root zone doesn't have the information needed, but it does know which name servers handle dot com, and so it can provide this information: a set of the name servers run by Verisign which manage the dot com TLD.
So these are the servers which host the dot com zone file so we can now query the dot com zone.
We can't get the answer directly from here, but it does know which name servers are authoritative for netflix.com, so these are the network addresses of the servers which host the netflix.com zone.
This zone is authoritative, so it will give us the answer that we need, but in this case it's not an IP, it's another DNS name.
This is a CNAME record, and I'll talk about these in another video, but what it means is that to get the IP address we have to follow the same process through again.
That's right, many queries like this end with another DNS name, which requires another query.
One of the many reasons an application can perform badly is if many DNS calls are used within the application and network performance impacts these in a negative way.
So this is how the walking-the-tree process works end to end, and this is how every DNS query works when you're trying to look up the IP address for a given DNS name.
This architecture and this flow are at the core of how most internet-based applications work from a DNS perspective, and it's a really critical set of knowledge to understand fully.
But at this point, that is everything I wanted to talk about in this video, so thanks for watching, go ahead and complete the video, and I look forward to you joining me in the next.
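The walking-the-tree process just described can be sketched as a toy resolver. This is a simulation only: the zone data below is a hypothetical in-memory stand-in for real servers, and the IP address is from a documentation range, but the flow mirrors the delegation steps above, including a final CNAME which forces a second walk.

```python
# Toy zone data: each "server" only knows its own small slice of the tree,
# mirroring how the root, the .com TLD and netflix.com's name servers each
# hold only part of the DNS database. All records here are illustrative.
ZONES = {
    "root":        {"com.": ("NS", "com-servers")},
    "com-servers": {"netflix.com.": ("NS", "netflix-servers")},
    "netflix-servers": {
        "www.netflix.com.": ("CNAME", "www.us-west-2.netflix.com."),
        "www.us-west-2.netflix.com.": ("A", "198.51.100.7"),
    },
}

def resolve(name, server="root"):
    """Walk the tree: every answer moves us one step closer."""
    zone = ZONES[server]
    # Exact match in this zone? Could be the final A record or a CNAME.
    if name in zone:
        rtype, value = zone[name]
        if rtype == "A":
            return value                      # authoritative answer
        if rtype == "CNAME":
            return resolve(value, "root")     # another name: walk again
    # Otherwise find the delegation (NS record) covering this name.
    for suffix, (rtype, target) in zone.items():
        if rtype == "NS" and name.endswith(suffix):
            return resolve(name, target)      # one step down the tree
    raise LookupError(f"cannot resolve {name}")

print(resolve("www.netflix.com."))  # 198.51.100.7
```

Real resolvers do the same thing over the DNS wire protocol, with caching at every step to avoid repeating the full walk.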
-
Welcome back, and in this video, which is part two of my DNS mini-series, I want to cover some of the reasons why DNS is structured in the way that it is.
Why do we need lots of DNS servers?
Why isn't one enough?
Additionally, at the end, I'm going to introduce some key DNS terms, and then introduce the hierarchical structure of DNS.
So let's jump in and get started.
Now, there are a few main problems with just having one DNS server, or even a small number of servers.
It's important that you understand these reasons, because it will help you understand why DNS is architected in the way that it is.
First, there's the obvious risk problem.
A small group of bad actors could attack the DNS infrastructure and, without much effort, prevent it from servicing genuine requests.
And that's something that we have to avoid for critical systems on the internet.
Also, we have a scaling problem.
Almost everyone who uses the internet globally uses DNS.
This represents a massive and growing load on the system.
A single server or a small group of servers can only get so big.
If every access is being made against one or a small group of servers, no matter what information is being requested, the system cannot scale.
Now, additionally, DNS is a huge database.
Recent estimates suggest that there are around 341 million domains, such as Netflix.com, Apple.com, and Twitter.com.
And each of those domains might have many records, tens, hundreds, thousands, or even more records in each of those domains.
And so this represents a huge data volume problem.
We can start with the amount of data, but then also need to take into consideration updates to that data, as well as consistency issues.
And all of that data is accessed by anyone using the internet on a constant basis.
Now, we can address the risk problem by creating more servers, each of them storing an exact copy of the same DNS data.
The more servers we have, the more load can be tolerated against those servers, and the less risk of attackers managing to take down the entire platform.
But this method doesn't really do anything about the scaling problem.
If every user of DNS communicates with any of the servers at random, it means that every server needs to hold the complete DNS dataset.
And this is a huge amount of data.
It's a huge global scale monolith, and this is something that we need to avoid.
Ideally, we also need to have the ability to delegate control over certain parts of the DNS dataset to other organizations so that they can manage it.
So UK domains, for example, should be managed by a UK entity.
US domains should be managed by somebody in the United States, .gov by the US government, .au by an organization in Australia, and so on.
And for this, we need a hierarchical structure.
I'm going to be talking about this fairly soon, but for now, I need to introduce some DNS terms.
Now, I need you not to switch off at this point in the video.
I'm going to be using some bullet points.
I hate bullet points, but these are worth it, so please stay with me.
The first term that I want to introduce is a DNS zone.
Think of this like a database.
So we have Netflix.com, and that's the zone.
Inside that zone are DNS records, for example, www.netflix.com, as well as many others.
Now, that zone is stored on a disk somewhere, and that's called a zone file.
Conceptually, this is a file containing a zone.
So there's a Netflix.com zone file somewhere on the internet containing the Netflix.com zone and all of its records.
We also have DNS nameservers, known as NS for short, and these are DNS servers which host one or more zones, and they do so by storing one or more zone files.
It's the nameserver or nameservers of the Netflix.com zone, which can answer queries that you have about the IP address of www.netflix.com.
Next, we have the term authoritative, and this just means that, for a given domain, this is the real or genuine one, or to put it another way, the boss for this particular domain.
So there are one or more nameservers which can give authoritative answers for www.netflix.com.
These can be trusted.
They're the single source of truth for this particular zone.
And we also have the opposite of this, which is non-authoritative or cached, and this is where a DNS server might cache a zone or records to speed things up.
Your local router or internet provider might, for instance, be able to provide a non-authoritative answer for learn.cantrill.io or youtube.com, because you've visited those sites before.
But only my nameservers can give an authoritative answer for learn.cantrill.io.
Now that you know those terms, I want to introduce the architecture of DNS, so it's hierarchical structure.
And don't worry, this is high level only.
I'll be covering the detail of how DNS works and how it's used in a follow-up video.
So at this point, you know why a single DNS server is bad.
You also know why having many DNS servers is bad if they just saw the same monolithic set of data, and you understand a few important DNS terms.
What I'm going to step through now is a hierarchical design, which is the way that DNS works.
Using this architecture, you can split up the data which DNS stores and delegate the management of certain pieces to certain organizations.
Splitting the data makes it easier to manage and also splits the load.
If you're doing a query for netflix.com, you generally won't have to touch the infrastructure for twitter.com.
Now DNS starts with the DNS root, and this is a zone like any other part of DNS, and this zone is hosted on DNS name servers also just like any other part of DNS.
So the DNS root zone runs on the DNS root servers.
The only special element of the root zone is that it's the point that every DNS client knows about and trusts.
It's where queries start at the root of DNS.
Now there are 13 root server IP addresses which host the root zone.
These IP addresses are distributed geographically, and the hardware is managed by independent organizations.
The Internet Corporation for Assigned Names and Numbers, or ICANN, operates one of the 13 IP addresses which host the root zone.
And others include NASA, the University of Maryland, and Verisign.
So to be clear, these organizations manage the hardware for the 13 DNS root server IP addresses.
In reality, each of these 13 IP addresses represents many different servers using anycast IP addresses.
But from DNS' perspective, there are 13 root server IP addresses.
Now the root zone, remember, this is just a database.
This is managed by the Internet Assigned Numbers Authority, known as IANA.
So they're responsible for the contents of the root zone.
So management of the root zone and management of the root servers which host the root zone are two separate things.
Now the root zone doesn't store that much data.
What it does store is critical to how DNS functions, but there isn't that much data.
The root zone contains high level information on the top level domains or TLDs of DNS.
Now there are two types of TLD: generic TLDs such as .com, and country-code TLDs such as .uk and .au.
IANA delegates the management of these TLDs to other organizations known as registries.
Now the job of the root zone really is just to point at these TLD registries.
So IANA delegates management of the .com TLD to Verisign, meaning Verisign is the .com registry.
And so in the root zone, there's an entry for .com pointing at the name servers which belong to Verisign.
Now .com is just one TLD.
There are other entries in the root zone for other TLDs and other TLDs could include .io, .uk and .au and many more.
Because the root zone points at these TLD zones, they're known as authoritative: the source of truth for those TLDs.
This process where the root zone points at the name servers hosting the TLD zones, it establishes a chain of trust within DNS.
So to summarize, the root zone is pointing at the name servers hosting the TLD zones run by the registries which are the organizations who manage these TLDs.
So Verisign will operate some name servers hosting the .com TLD zone and the root zone will have records for the .com TLD which point at these .com name servers.
The .com zone, which is just another DNS zone, also contains some data, specifically high-level data about domains which are within the .com TLD.
For example, the .com TLD zone contains some records for Twitter.com and Netflix.com, so records for domains which exist inside the .com zone.
The TLD only contains this high level information on domains within it, for example, Netflix.com.
It doesn't contain detailed records within these domains, for example, www.netflix.com, all the TLD contains is information on the domain itself.
Specifically, with this example, a set of records for Netflix.com which point at the name servers which host the Netflix.com zone.
Now it will also contain records for Twitter.com which point at the name servers which host the zone for Twitter.com as well as records for every other domain within the .com TLD.
Now these name servers, because they're pointed at from the layer above, they're authoritative for the domains, the zones that they host.
So the name servers for Netflix.com are authoritative for Netflix.com because the Netflix.com entry in the .com TLD points at these name servers.
Now these name servers host the zone for a given domain, for example, Netflix.com.
This means the servers host the zone file which stores the data for that zone.
At this level, the zone contains records within Netflix.com, so www.netflix.com which points at a set of IP addresses.
And because the zone and zone files are on these name servers and because these name servers are authoritative for the domain, these zones and zone files are also authoritative.
Now don't worry about understanding this in detail.
In the next video, I'm going to be walking through how this works in practice.
For now, all I need you to understand is that each layer of DNS, from the root, through the TLDs, to the domain name servers, stores only a small part of the DNS database.
The root zone knows which name servers the .com zone is on, the .com zone knows which name servers Netflix.com zone is on, and the Netflix.com zone contains records for the Netflix.com domain and can answer queries.
So this is the hierarchical architecture of DNS.
And in the next video, in this video series, I'm going to be stepping you through the flow of how DNS works and discussing the architecture at a more technical level.
But at this point, that's everything I'll be covering in this video.
So go ahead and complete the video.
And when you're ready, you can join me in the next video of this series.
-
Well, welcome to the first video in this series where I want to help you understand DNS.
DNS is one of the core services on the internet, and if it doesn't work, applications and other services will fail.
Now in this video series I'll be covering what DNS does, why it's structured the way that it is, how DNS works to get us answers to queries, and I'll finish up by covering some of its key limitations.
Now with that being said, let's jump in and get started.
Now before I cover how DNS works and why it works in the specific way that it does, I want you to be 100% sure of what functionality DNS provides.
Now when you access any website, you type the name into your browser, for example www.netflix.com.
Now you might imagine that the name is used to connect to the Netflix.com servers and stream your movie or TV show, but that's not actually how it or any internet app generally works.
Simply put, humans like names because they're easy to remember, but networks or servers not so much.
To communicate with Netflix, your computer and any networking in between needs the IP addresses of the Netflix servers.
DNS actually does many different things, but at its core it's like a big contact database.
In this context, it links names to IP addresses, so using DNS when accessing Netflix, we would ask DNS for the IP address of Netflix.
It would return the answer and then our device would use that IP address to connect over the internet to the Netflix servers.
So conceptually, the main piece of functionality which DNS provides is that it's a huge database which converts DNS names, for example Netflix.com, into IP addresses.
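You can picture that core job as nothing more than a name-to-address mapping; the hard part, covered in the next videos, is distributing it globally. A minimal sketch, using made-up addresses from a documentation range rather than Netflix's real IPs:

```python
# A toy picture of DNS's core job: names in, IP addresses out.
# The addresses below are illustrative, not Netflix's real IPs.
dns_database = {
    "www.netflix.com": ["198.51.100.10", "198.51.100.11"],
}

def lookup(name):
    """Return the IP addresses registered for a DNS name."""
    return dns_database[name]

print(lookup("www.netflix.com"))
```

Everything that follows in this series is about how this conceptual database is split up, delegated, and queried at internet scale.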
Now so far, I hope that this makes sense.
At this point, it sounds just like a database and nothing complex, and you might be asking yourself why not just have one DNS server globally or a small collection of servers?
Now we're going to review that in the next video.
For this video, I just wanted to set the scene and make sure you understand exactly what functionality DNS provides.
DNS is critical.
Many large-scale failures on the internet are caused by either failed DNS or badly implemented DNS infrastructure.
If you want to be effective at designing or implementing cloud solutions or network-based applications, you have to understand DNS.
So if you are interested in knowing more, then go ahead and move to the next video, where I'll cover why DNS is structured in the way that it is.
-
Welcome back and in this lesson I want to talk about digital signing or digital signatures.
This is a process which you need to be familiar with to understand many other areas of IT, such as DNSSEC or SSL certificates.
If you haven't watched my video on hashing it's linked in the description and you should pause this video and go and do that now because it's required knowledge for this video.
At this point though let's jump in and get started straight away.
Now before I cover how digital signatures work in detail I want to give you a quick refresher on public key cryptography.
With public key cryptography you generate two keys as part of a pair.
We have on the left the private key and this part is secret and should only ever be known by the owner.
Then we have the public key and this is not secret.
In fact anyone can know it or possess it.
It should be public and ideally as widely distributed as possible.
Now these keys are related, they're generated as part of one operation.
It means that you can take the public key which remember anyone has or anyone can get and use it to encrypt data which can then only be decrypted by the matching private key.
The data can't even be decrypted using the same public key which encrypted it so it allows you to securely send data to the owner of the private part of the public key.
What this architecture also allows is that you can take something and sign it using the private key.
Think of this just like encryption, but where anybody who receives the data and has the public key can view the data and verify that the private key was used to sign it.
This provides a way to evidence that you are in control of the private key used to sign some data, since the private key is secret and only you should have it.
This provides a way to establish a form of digital identity or digital validation.
This signing architecture is important because it forms the foundation of many other IT processes.
The process of adding digital signatures to data adds two main benefits when used in conjunction with hashing.
It verifies the integrity of the data that the data you have is what somebody produced and it verifies the authenticity so the data that you have is from a specific person.
So integrity is what and authenticity is who.
Together it means that you can download some data from Bob and verify that Bob produced the data and that it hasn't been modified.
Nobody can falsify data as being from Bob and nobody can alter the data without you being able to tell.
Now a key part of this process is that it's layered on top of normal usage, so if somebody doesn't care about integrity or authenticity, they can access the data as normal without any of these checks.
To enable that, the first step is to take a hash of the data that you're going to be validating.
The original data remains unchanged in plain text or whatever its original format is.
If you don't care about the integrity and authenticity then you can use the application or consume the data without worrying about any of this process.
Now this means that anybody having both the data and the hash knows that the data is the original data that Bob produced assuming that they trust the hash to be genuine and that's what digital signatures enable.
Next Bob signs the hash using his private key and this authenticates the hash to Bob.
Bob's public key can be distributed widely onto many locations that Bob controls and using this public key you can access the signed hash.
Because of this, you know it came from Bob's private key, i.e. Bob, and so the hash is now authenticated as being from Bob.
Nobody can falsify a fake hash because only one person Bob has Bob's private key.
So now we know the hash is from Bob we can verify that because we have Bob's public key.
We know the hash can't be changed because only Bob has Bob's private key and only the private key can sign anything and appear to be from that private key.
We now have this authenticated hash and we verified the integrity because of the private-public-key relationship.
We also have the original document and we know that it's authentic because if it was changed then the hash wouldn't match anymore.
So if we trust Bob's public key, then we know that anything signed by his private key is authentic.
Because we know that Bob, and only Bob, has this private key, we trust the entity, i.e. Bob.
And because Bob can digitally sign a hash, we know that any data we receive from Bob is both authentic, i.e. Bob has authored this data, and has not been changed during transit or download.
So we have this chain of trust, and this chain of trust, using public key cryptographic signing and hashing, forms a foundation for many things within IT which you take for granted.
Okay so now that you understand the basic building blocks of this process let's step through this visually.
So step one is Bob and his private key.
Bob is the only person to have his private key, but he's uploaded his public key to his website, his Twitter account, and various public key directory services, and he includes a link to his public key in all of the emails that he sends.
Now if you look at all of these copies of his public key and all of them match, then you can assume that they're valid and from Bob.
To exploit this, you would have to take over all of these services at the same time and change all mentions of his public key, so the wider the distribution of the public key, the easier it is to spot if any of them have been modified.
So let's say that Bob creates a contract to send to a business partner, and he wants others to be able to verify firstly that he sent it, and secondly that it wasn't altered in transit.
So the next step is that he puts the document through a hash function which results in a hash.
The hash as you've learned is unique to this document.
No other document can have this hash, any changes to the document also change the hash, and you can't derive the document from the hash.
What we can't prove yet is that Bob created this hash or that the hash hasn't changed as part of faking the document.
So to fix this Bob uses his private key and he signs the hash and this creates a signature and he bundles this signature with the original document which creates a signed document.
So this signed document is the original document, so the data, plus a copy of the signed hash of that document, and that data is going to be hosted somewhere; let's say that Bob uploads it to the internet, emails the document to somebody, or stores it in some cloud storage.
Now, next, one of Bob's business partners downloads the contract, so the signed data, and the first thing to do is get Bob's original hash; to do that, we take the signature and Bob's public key, and that gives us back Bob's hash.
So now we know that this hash is signed by Bob and we know that Bob created this hash and this means that we know what Bob thought the hash of the document was.
We know the original state of the document when Bob generated the hash.
So we take the document and we hash it with the same hash function as Bob used.
So now we have our hash of the document and we have Bob's original hash verified to be from Bob.
If these two hashes match, you know that the document that you have is the document that Bob generated.
It hasn't been altered, and you know that it originated from Bob, because his hash was signed using his private key to generate the signature which is part of this digitally signed document, and this is how hashing, together with public key cryptography, specifically signing, can be used to verify authenticity and integrity.
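Bob's whole sign-then-verify flow can be sketched end to end in a few lines. The sketch below uses a deliberately insecure textbook-RSA toy with tiny fixed numbers purely to show the relationship between the hash, the private key, and the public key; real systems use vetted cryptography libraries and full-size keys, never anything like this.

```python
import hashlib

# Toy RSA key pair (classic textbook numbers, hopelessly insecure,
# illustration only): n = 61 * 53, e = public exponent, d = private exponent.
n, e, d = 3233, 17, 2753

def sign(document: bytes) -> list[int]:
    """Hash the document, then sign the hash with the PRIVATE key."""
    digest = hashlib.sha256(document).digest()
    return [pow(b, d, n) for b in digest]   # the signature travels with the doc

def verify(document: bytes, signature: list[int]) -> bool:
    """Recover the signed hash with the PUBLIC key, re-hash, and compare."""
    recovered = [pow(s, e, n) for s in signature]
    digest = list(hashlib.sha256(document).digest())
    return recovered == digest

contract = b"Bob agrees to deliver 100 widgets."
sig = sign(contract)
print(verify(contract, sig))                             # True: authentic, intact
print(verify(b"Bob agrees to deliver 1 widget.", sig))   # False: altered document
```

Note how verification only needs the public key: anyone can check integrity and authenticity, but only the holder of the private key could have produced the signature.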
Now Bob could have taken this a step further and encrypted all of this with the public key of the intended recipient, ensuring that this whole process happens in a fully encrypted way, but encryption and signing are two slightly different things which are both enabled by the same public key cryptography architecture.
Now this point has everything I wanted to do.
I just wanted to give you a really high-level overview of how digital signatures can be used to verify both the integrity and the authenticity of data.
Now you're going to be using this knowledge as you learn about other important IT processes so it's really important that you understand this end-to-end but at this point that is everything I wanted to cover in this video so go ahead and complete the video and when you're ready I look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this lesson I want to talk about hashing, what it is and why we need it.
Now we do have a lot to cover so let's jump in and get started straight away.
Put simply, hashing is a process where an algorithm is used to turn this or any other piece of data into this, a fixed length representation of that data.
Hashing is at the core of many critical services that you use today, such as passwords, digital signatures, SSL certificates, data searches, and even some antivirus or anti-malware solutions rely on hashes so they can store definitions of malicious files without having to store every single huge file.
We even have some forms of digital money such as Bitcoin which use hashing.
Now to understand why hashing is so important we need to step through how it works, what benefits it provides and some of the terminology used.
So let's go ahead and do that.
Hashing starts with a hash function and think of this as a piece of code, an algorithm.
Now there are many different ones which are used or have been used over the years.
Examples include MD5 and SHA-2-256.
The core principle of hashing is that you take some large variable sized data and you put that data through a hashing function and receive a fixed size hash of that data.
So whether the data is a text file, an image or a huge database file, the hash you receive will be tiny and fixed length based on the hashing function type.
Now also critically if you take some other data and again put it through the same hashing function you will get a different hash, a unique hash value.
Even if the data differs by only one byte, one character or one pixel it will result in a different hash.
The aim with hash functions is that any change no matter how minor will result in a different hash value.
Another critical part of hashing is that while getting a hash from some data is trivial what you cannot do is take a hash value and get the original data back.
There's no way to do this.
Let's say that you had a hash of an image.
Well you couldn't take the hash and derive the image.
You could if you had infinite processing power and infinite time brute force every image in existence to try and link back to the hash.
But this would require hashing every single image until you found the correct one.
You should view it as impossible with any modern hashing algorithm to derive the original data from a hash.
Without some vulnerability in the hashing function or without infinite time and processing power the hashing is one way only.
Data through to hash.
Now lastly another fundamental part of hashing is that given the same data you're always going to get the same hash value if you use the same hashing function.
So for one piece of data you get one hash value, for a different piece of data you get a different hash value and hashing is one way only.
And you should never get the same hash value for different data.
There's more on this in a second.
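These properties, fixed-length output, determinism and the avalanche effect, are easy to see with Python's standard hashlib module. A minimal sketch, my own illustration rather than anything from the lesson:

```python
import hashlib

# Fixed-length output: whatever the input size, SHA-256 always
# produces a 256-bit hash (64 hex characters).
small = hashlib.sha256(b"hi").hexdigest()
large = hashlib.sha256(b"x" * 10_000_000).hexdigest()
print(len(small), len(large))    # 64 64

# Deterministic: the same data always produces the same hash.
print(hashlib.sha256(b"hi").hexdigest() == small)   # True

# Avalanche effect: changing a single character produces a
# completely different hash value.
print(hashlib.sha256(b"hello world").hexdigest())
print(hashlib.sha256(b"hello worle").hexdigest())
```

Run this and compare the last two lines yourself: the inputs differ by one character, but the two digests share no obvious structure.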
Let's look at an example of where hashing can be used which should be pretty familiar.
Imagine you're using an online service and they have a single server.
And when you create an account on that service you create a username and password and both of those are stored on that server.
When you log in you send your username and password to the server and an automated process checks the password that you send with the one that's stored and if they match you can log in.
Now even if you encrypt your password in transit and even if the password is encrypted when stored on the server your password still exists on that server.
And it means if the server is ever exploited then a bad actor will have access, at worst to your password, and at best to an encrypted version of your password.
Assuming a full data dump or a long-term exploit, it's pretty trivial to get the plain text version of your password in this way.
If you use that password on other services those services are also at risk.
But what about if we use hashes?
Well with this architecture instead of sending a password to the server when signing up or signing in we send a hash of the password.
This means the server instead of having our actual password it only stores the hash of our password.
All the server needs to do is check that the hash that you send matches the one in its database and it can confirm the password was entered correctly on the local client.
Because given the same data in this case password and the same hashing algorithm you'll end up with the same hash value.
So by comparing the hash value it stores to the hash value that you deliver, derived by hashing your password, it can check you're entering the correct password without ever storing a copy of your password.
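As a sketch of the idea, here is a toy password store in Python. Two deliberate differences from the description above, both labelled assumptions of mine: the hashing happens server side, which is the more common real-world variant, and it uses salted PBKDF2 rather than a bare hash, which is closer to production practice. Function and variable names are illustrative.

```python
import hashlib, hmac, os

# Toy store: keeps only salted, stretched hashes, never the password
# itself. Real systems use a dedicated scheme (bcrypt, scrypt, Argon2);
# PBKDF2 from the standard library is used here for illustration.
_store = {}

def register(username: str, password: str) -> None:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    _store[username] = (salt, digest)

def login(username: str, password: str) -> bool:
    salt, stored = _store[username]
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(stored, attempt)

register("nat", "correct horse battery staple")
print(login("nat", "correct horse battery staple"))  # True
print(login("nat", "password123"))                   # False
```

The salt means two users with the same password get different hashes, and the 100,000 iterations make the brute-force guessing attack described below far more expensive per guess.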
Now in this example I've used the MD5 hashing algorithm but as you're going to see in a second this isn't super secure anymore so you'd likely need to use another hashing algorithm.
I'm going to use MD5 as an example to demonstrate a weakness of some algorithms.
So to stress, you wouldn't generally use MD5 for production password systems, you'd use something a lot more secure.
Now if this server was ever exploited when using password hashes this would be much safer because the hashes are one way you can't derive the password from a hash.
Nothing stops the attacker, though, from guessing over and over again, trying every possible word and phrase combination with the hashing algorithm used until it gets a match.
And then it has confirmed the password that you used and it can try and exploit other services which you also make use of.
And this is why it's really important to use a modern secure hashing algorithm.
Now there are two things with hashing which are really bad.
One is if we were ever able to take a hash and derive the original data, but as I mentioned earlier that's basically impossible without a critical vulnerability in how hashing works.
Another major problem would be a collision.
An example of a collision is that if we take this image of a plane and if we hash this image and then if we take another image say this image of a shipwreck and we also hash this image we should have two different hash values.
If we do, awesome.
If not, if the hash of A on the left equals the hash of B on the right, then bad things happen because we can no longer trust the hash function, i.e. the hash algorithm.
And this is one of the reasons that the MD5 hashing algorithm is less trusted, because collisions can happen.
We can actually show how they can be created, how data can be manipulated to cause collisions.
Now I've attached some links to this video with a research project showing how we can create those collisions.
But as a quick example I want to switch to my terminal and demonstrate how this works with these two images.
So in this folder on my local machine I have two images plane.jpg and ship.jpg and those represent the two images that you've just seen on your screen moments ago.
I'm going to go ahead and generate a hash value of one of these files and I'm going to use the MD5 hashing algorithm.
So I'm going to put MD5 space and then plane.jpg.
So go ahead and focus on the hash value that's been generated; this hash value in theory should uniquely represent the image plane.jpg.
So plane.jpg should always generate this hash value and if I repeat this command I get the same hash value.
But what should happen is if I generate a hash of another piece of data I should get a different hash value.
So I'm going to run MD5 again this time on ship.jpg.
Watch what happens in this case it's the same hash value and this is an example of a collision where two different pieces of data generate the same hash value.
And this is a weakness of MD5.
We can create this collision.
We can adjust the data in order to generate this collision and this is a bad thing.
This shouldn't happen.
Now I'm going to follow the same process but using a more secure hashing algorithm.
So this time I'm going to use the SHA-2 256 algorithm on the same file plane.jpg.
Now watch what happens now.
The hash value is longer because this is a different hashing algorithm but we confirm that this hash value is for plane.jpg.
Now I'm going to run the same hashing algorithm on ship.jpg.
This time note how it's a different hash value.
This is a much more secure hashing algorithm.
SHA-2-256 is much better at protecting against these collisions.
It's a more modern and more well trusted hashing algorithm.
Now just like any other form of security such as encryption it's important that you always use the latest and most secure protocols and architectures.
So if you're using hashing in production you should generally use something like SHA-2-256, because if you want to guarantee that one-to-one link between a piece of data and a hash, so that any other piece of data generates a different hash, you need to make sure you're using a well respected hashing algorithm such as SHA-2-256.
Now the likelihood of this happening in normal usage is nearly impossible, because these two images have actually been artificially adjusted to cause this collision, but it does represent a theoretical vulnerability in the MD5 hashing algorithm.
I've included links attached to this video which detail the research project and some examples of how you can implement this as a personal project if you want.
But at this point I'm going to go ahead and return to the remainder of this video.
Now just to summarise with hashing you take some data plus a hashing function and you generate a hash value.
Now you can't get the original data from a hash it is a one-way process and the same data should always generate the same hash.
Different data should always generate a different hash.
Now I've just demonstrated how you can artificially cause a collision using older hashing algorithms, but in the real world even older algorithms should generate a different hash for different data, and any modern hashing algorithm is protected against even this artificial process.
Now hashing can be used to verify downloaded data.
If you're making some data available to download you can have the download in one location and the hash of that download stored on a different system.
It means that you can download the data, hash it, and generate your own hash value.
If the hash values match then the downloaded data hasn't been altered.
If they differ then it means that what you have is not the same data as what was made available by the original author.
And this is a process that's very often used to verify sensitive applications.
So if you're downloading any form of application which stores sensitive data or operates on sensitive networks then you'll generally find that a hash will be made available by the author of that application and it can be used to verify that that download has not been adjusted.
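This verification process can be sketched as follows. The file name and function names are illustrative assumptions of mine, and the hash is computed in chunks so a large download doesn't need to fit in memory:

```python
import hashlib, os, tempfile

def sha256_of_file(path: str) -> str:
    # Hash the file in 64 KiB chunks.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path: str, published_hash: str) -> bool:
    # Compare the hash we compute against the hash the author published.
    return sha256_of_file(path) == published_hash.lower()

# Simulate a download, then check it against a known-good hash.
path = os.path.join(tempfile.mkdtemp(), "app-installer.bin")
with open(path, "wb") as f:
    f.write(b"pretend application data")

published = hashlib.sha256(b"pretend application data").hexdigest()
print(verify_download(path, published))   # True
```

A mismatch means the bytes you hold differ from the bytes the author hashed, though as the lesson goes on to say, this only helps if the published hash itself is authentic.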
Now in these types of security sensitive situations, or if you're a security professional, you also need to be sure that the hash itself hasn't been altered, and also that the hash itself was generated by the person who claims to have generated it.
So if I make some software available to you and you download it you need to first check that the download hasn't been altered by hashing it yourself and comparing your hash to the hash that I publish.
But you also need to be sure that it was me publishing that hash and that the hash that you download hasn't been altered in some way.
And a way that this can be done is using digital signing or digital signatures and this is something that I cover in another video.
But at this point that's everything I wanted to cover in this video so go ahead and complete the video and when you're ready I'll look forward to you joining me in the next.
-
Now another process which uses asymmetric keys is signing.
Let's review an example.
The robot general wants to respond to the battle plan.
So let's say that the robot general has received the battle plans from the cap ruler, and he wants to confirm A that he's received them, and also that he agrees to them.
Remember, the battle plans require both sides to operate as one, and so the cap ruler needs to know that the robot general has received the plans, and that he agrees with them.
The general might want to respond with a simple OK message.
So we'll send the message saying OK to the cap ruler.
The issue is that anyone can encrypt messages to another party using asymmetric encryption.
Anyone could get hold of the cap ruler's public key and encrypt a message saying OK, and send it to the cap ruler.
And the cap ruler wouldn't necessarily be aware whether that was from the robot general or not.
Just because the cap ruler gets a message from what appears to be the robot general saying OK, it doesn't mean that it's actually from the robot general.
It could be from a human pretending to be the robot general.
Encryption does not prove identity.
But what we can use is a process called signing.
With signing, the robot general could write this OK message, and then he could take that message, and using his private key, he can sign that message.
Then that message can be sent across to the cap ruler.
And when the cap ruler receives that message, he can use the robot general's public key to prove whether that message was signed using the robot general's private key.
So this is the inverse.
On the previous example, I demonstrated how you can use the public key to encrypt data that can only be decrypted with the private key.
In this example, we can take the robot general's private key and sign a document.
And then the public key of the robot general can verify that that document was signed using its matching private key.
At no point is the private key revealed.
It's just because of this relationship between the public and private key.
The public key can verify whether its corresponding private key was used to sign something.
And so signing is generally used to verify identity.
As long as we can be sure that the public key belongs to the robot general, and generally this is done by the robot general uploading his public key to his Twitter account, or putting it on his cloud storage or his website.
As long as that verification exists, we can get the public key and verify that a document has indeed been signed by his private key because of that relationship between these two keys.
So key signing is generally used for ID verification and certain logon systems.
There's one more thing I want to talk about before I finish up this lesson.
And that's steganography.
Sometimes encryption isn't enough.
The problem with encryption is that if you use it, it's obvious that you've used it.
If you encrypt a file and deliver it to me, there isn't really any scope for denying that you have encrypted some form of data.
The government, who control the men and women with guns, can often insist that you decrypt the data.
And if you don't, well, they have plenty of sticks so they can put you in jail.
You can refuse to decrypt the data, but the deniability isn't there.
If you encrypt data, somebody will know that you've encrypted data.
Now, steganography is a process which addresses this.
If you've ever used invisible ink, the kind which only shows under a certain type of light or when heated, that's a physical form of steganography.
It's a method of hiding something in something else.
With steganography, the cap ruler could generate some ciphertext and hide it in a puppy image.
The image could be delivered to the robot general who knows to expect the image with some data inside and then extract the data.
To anyone else, it would just look like a puppy image.
And everybody knows there's no way that the cap ruler would send the robot general a puppy image, so there's plausible deniability.
The effect of steganography might be a slightly larger file, but it would look almost identical.
Effective steganography algorithms make it almost impossible to find the hidden data unless you know a certain key, a number, or a pattern.
Steganography is just another layer of protection.
The way it works at a foundational level is pretty simple.
Let's say that we wanted to hide a simple message, just "hi".
Well, h and i are the 8th and 9th letters of the alphabet, so their values are 8 and 9.
So we might take the puppy image and pick two random pixels and change the color by 8 and 9 respectively.
The steganography algorithm would take the original picture, select the required number of pixels, and adjust those pixels by a certain range of values.
And what it would generate as an output would be an almost identical puppy image.
But hidden there would be slight changes.
If you don't believe me, let's blow this up a little bit because this is an actual simple example of steganography.
If you look really closely at where those arrows are pointing, the color is slightly different than the background.
The first pixel has been adjusted by eight values and the second has been adjusted by nine values.
And so if you knew the location of these two pixels, you could take the second image and extract the text.
Now, this is a super simple example.
A real algorithm would be much more complex.
But this is at base level how the process works.
It allows you to embed data in another piece of data.
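The toy scheme above shifts whole pixel values; a closely related technique that real tools often use is least-significant-bit embedding. Here's a minimal sketch of my own over a bytearray standing in for raw pixel data, not the exact algorithm from the lesson:

```python
import os

def embed(pixels: bytearray, message: bytes) -> bytearray:
    # Hide one bit of the message in the least significant bit of each
    # pixel byte; each pixel's value changes by at most 1 out of 255.
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(pixels), "image too small for this message"
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    # Read the low bit of each pixel back and reassemble the bytes.
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(length)
    )

image = bytearray(os.urandom(1000))   # stand-in for raw pixel data
stego = embed(image, b"hi")
print(extract(stego, 2))              # b'hi'
```

Each pixel byte changes by at most one, which is invisible to the eye, and without knowing the scheme and the message length the hidden data is very hard to spot.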
To be really secure, the cap ruler would encrypt some data using the robot general's public key, take that ciphertext, use steganography to embed it in an image that wouldn't be tied back to the cap ruler, send this image to the robot general, and then the robot general could also use steganography to extract the piece of ciphertext and decrypt it using his private key.
And the same process could be followed in reverse to signal an OK.
But the robot general, in addition to encrypting that OK, would also sign it so the cap ruler would know that it came from the robot general.
With that being said, go ahead, complete this video.
When you're ready, you can join me in the next.
-
Welcome to this video where I want to talk at a high level about hardware security modules known as HSMs.
Now these are a really important type of device to understand both in general and especially if you currently work in or want to work in the security space because so many other things rely on them to function.
Now let's jump in and step through why we need them, what they are and how they work.
Now let's start by looking at the world without hardware security modules and we're going to do that with the example of a virtualized environment and so we have a VM host and this could be VMware, Zen or something within a cloud environment.
It doesn't matter for this example.
This means that we have some physical CPUs, memory as well as mechanical or solid state storage.
Then running on this is our hypervisor, a pair of operating systems and, to keep this simple, a pair of applications.
Now I want you to imagine that these applications, the operating systems and the hypervisor are all using encryption in some way.
That might be encryption at rest or in transit, it might be public key infrastructure or it might be simple SSL or TLS.
Whatever the requirement, it means you're going to have keys stored in many places, keys inside the applications, controlled by the operating system or held by the hypervisor.
And all this means that keys will be handled by the CPU, held in memory and stored on storage.
And over time, if you care about disaster recovery, you're going to have keys stored on various backups, some of which might go offsite for storage.
Using encryption means using keys and these keys will be stored or held in various places.
You might think this is controllable, but over time they will leave your premises and, because of this, your direct control, meaning it becomes easier for these keys to fall into the wrong hands and become exploited.
Now, that's where HSMs add value.
So let's take a look.
Now, this is a similar architecture, the same hypervisor, the same set of operating systems and applications and the same backup infrastructure.
Only now, we've chosen to utilize a HSM.
A HSM or hardware security module, as the name suggests, is a separate device or cluster of devices.
It's isolated from your main infrastructure and it's inside this device that your keys are stored.
They never leave this device.
They're managed by this device, often generated and deleted by this device.
Anytime you want to perform cryptographic operations, you send those to the HSM together with the data.
The HSM performs cryptographic operations and sends the result back.
It means you keep the same application and virtualization architecture, but instead of having to generate, to manage, store and secure keys and risk those leaking, with HSM, the keys are securely held on device.
So that's HSMs at the high level.
Let's finish by exploring the architecture in more detail.
So now we have our HSM in the middle.
Think of this as 100% separated from the rest of our infrastructure, accessible only in a highly defined way.
Keys are created on the HSM, stored on the HSM, operations happen on the HSM and keys generally never leave the device.
By utilizing HSMs, you create a secure island within your infrastructure, where all cryptographic operations are controlled from.
The authentication takes place inside the device.
This means you have an isolated security blast radius.
Even if your corporate identity store is exploited, the identities used within the HSM are internally defined, and so can withstand this type of exploit.
HSMs are tamper-proof and they're hardened against physical and logical attacks.
The devices use secure enclaves internally, which makes it almost impossible to gain access to the internal key material through direct physical means.
Many smartphones today come with a similar cut-down version of this.
It stores your biometric information to keep it isolated from any badly behaving software on your smartphone.
Access to cryptographic operations within the HSM is tightly controlled.
You need access permissions, and assuming you have those, this access is still via industry-standard APIs, such as PKCS#11, JCE and CryptoNG.
Nothing is directly accessible.
It's all controlled via APIs.
Now, there's even role separation for admins, so you can define people who can admin the HSM for things like software updates, key generation and other admin tasks, but those people might not be able to perform cryptographic operations.
Many HSMs are audited to some very stringent standards, such as those required for US government use, and it's this auditability, this access control, which makes them such a powerful type of device.
Examples of the types of situations where you might use a HSM, and this is a very small subset, there are many others: you might use it to off-load processing for SSL or TLS onto the HSM.
So if you have a fleet of web servers, you might have the HSM device perform heavy lifting on your behalf instead of the web servers.
HSMs often handle this in hardware using acceleration, so you gain the benefits of secure key management and the performance boost via off-loading.
You might also use HSMs for signing certificates for a private PKI infrastructure that you have within your business.
This just provides a way that you can securely manage the key material used to sign your certificates.
Now, I wanted to keep this video brief and just provide a very high-level introduction, because I'm going to be making many more videos in this series.
In order for those to make sense, you need to understand why HSMs are needed, what they do, and how they work at a high level.
And so that's what I covered in this video.
Now, I hope you've enjoyed it, but that's everything for now.
So go ahead and complete the video, and when you're ready, I look forward to you joining me in the next.
-
Welcome to this video where I want to talk about an encryption fundamental topic, envelope encryption.
Now this process is where you encrypt something with a key and then you encrypt that with another key.
Now to understand why we do this and what benefits it provides we need to look visually at the process.
So let's begin with that.
Let's look at a typical example of how envelope encryption might be implemented within AWS and one of its services, S3 or the simple storage service.
And we want to encrypt a huge number of private cat pictures.
Now we also want to follow best practices and ensure that each of the cat pictures is encrypted using a different key.
That way if one key is leaked it only impacts that one single image.
This is an ideal use case for KMS.
KMS manages KMS keys which in a general sense are called key encryption keys because they're only used to encrypt data less than 4kb in size which generally means other encryption keys.
Now KMS has its own permissions.
You can't do anything with KMS unless you also have specific permissions to interact with KMS, and any key encryption keys, so KMS keys managed by the KMS product, stay inside KMS.
You can't access them directly.
KMS performs actions on data that you provide to it using those keys as long as you have permission to do so.
Now there's another type of key called a data encryption key or a DEK.
DEKs are created by KMS but not managed by KMS.
That's the responsibility of the system or person generating them.
Typically when using KMS you generate a DEK using a KMS key and when that happens you get two copies of the DEK returned.
A plain text version of that DEK which can be used immediately to encrypt or decrypt data, and an encrypted version also known as a wrapped version.
This encrypted version is encrypted using the KMS key which was picked when creating it.
That's to say that the wrapped data encryption key can only be decrypted using the KMS key, known as a KEK, which was used when KMS created that data encryption key.
The next step in this process is that the service, S3 in this case, which requested the data encryption key, uses the plain text version to encrypt the private data, and then it discards the plain text DEK.
The encrypted, also known as wrapped, DEK is stored alongside the encrypted data.
Now this means we have a unique data encryption key per object that we encrypt and an encrypted version of that data encryption key is always stored alongside the object.
So we always have the DEK available because it's stored with the object, and if the DEK is ever leaked:
One, it's encrypted so it's useless without the ability to decrypt it, and two, if someone did manage to decrypt the data encryption key it would only be usable to decrypt that one single encrypted object.
Data encryption keys are almost always symmetric keys.
Why?
Because it's far faster to encrypt data with a symmetric key.
The key encryption keys in a generic sense can be asymmetric or symmetric.
With AWS, which uses KMS, the key encryption keys, also known as KMS keys, are symmetric.
The key thing to understand at this point is how using a wrapped key architecture allows us to manage keys at scale.
The KMS service, in this architecture, only has one key encryption key to manage, and that one KEK could be used to generate millions of data encryption keys.
Each of them used to encrypt a single object.
Each encrypted object would be encrypted using a unique data encryption key.
What it also means is that the admin of the service can manage the storage of the object and the keys but can't decrypt either without permissions on the key encryption key.
Remember this uses KMS which has its own isolated permissions and the keys managed by KMS never leave KMS.
So you have to go via the product and you're subject to its permissions.
So at this point let's have a look at the decryption flow.
We start this decryption process with KMS and the key encryption key that it manages.
We also have the ciphertext, so the encrypted cat image that the service manages, S3 in this example.
And we also have the wrapped or encrypted data encryption key, or DEK, which is stored along with the encrypted object.
Now first to decrypt the encrypted object we need a plain text version of the data encryption key.
So a request is made to KMS to unwrap this DEK, meaning to decrypt it.
And since KMS doesn't manage the data encryption keys, this key has to be passed to KMS as part of the request.
Now this is where the separate permissions of KMS are important, because whatever the permissions of the service or entity wanting to decrypt the data, the entity or service needs permissions to decrypt the DEK.
If they're allowed, KMS returns the decrypted DEK and this key is then used to decrypt the ciphertext, the encrypted object, giving us our cat image, and then the data encryption key is discarded.
Now a benefit of this decryption approach aside from the permission separation is that again using symmetric keys is fast.
And also because we're only using KMS to decrypt the data encryption keys we only send small amounts of data to KMS and get small amounts of data back.
The actual data, the large objects are never passed around.
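The whole envelope pattern, so generate a DEK, encrypt locally, discard the plaintext DEK, store the wrapped DEK, then unwrap it to decrypt, can be sketched end to end. To keep this self-contained the "cipher" is an insecure XOR keystream of my own devising and the functions only mimic the shape of KMS operations; real systems use AES via KMS or a cryptography library.

```python
import hashlib, os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Stand-in symmetric cipher for illustration only: XOR the data
    # with a SHA-256-derived keystream. Applying it twice with the
    # same key decrypts. Do NOT use this for real data.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# The KEK stays inside the key service (KMS); callers never see it.
kek = os.urandom(32)

def generate_data_key():
    # Mirrors the shape of KMS GenerateDataKey: returns a plaintext
    # DEK plus a wrapped (encrypted) copy of the same DEK.
    dek = os.urandom(32)
    return dek, keystream_xor(kek, dek)

def unwrap(wrapped_dek: bytes) -> bytes:
    # Mirrors the shape of KMS Decrypt: only the key service can do this.
    return keystream_xor(kek, wrapped_dek)

# Encrypt: use the plaintext DEK, discard it, keep the wrapped DEK
# alongside the ciphertext.
dek, wrapped = generate_data_key()
ciphertext = keystream_xor(dek, b"private cat picture")
del dek

# Decrypt: ask the key service to unwrap the DEK, then decrypt locally.
plaintext = keystream_xor(unwrap(wrapped), ciphertext)
print(plaintext)   # b'private cat picture'
```

Notice that only the tiny wrapped DEK ever goes to the key service; the large object is encrypted and decrypted locally, which is exactly the traffic-saving property described above.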
OK let's finish up with a few summary points.
First asymmetric keys are flexible.
The public part of an asymmetric key pair is public and can be widely distributed and this can be used to encrypt data that can only be decrypted using the private part of an asymmetric key.
But they're also slow.
It's fairly computationally heavy to encrypt using asymmetric keys.
To contrast this symmetric keys are fast but they're difficult to securely move because the same key is used to encrypt and decrypt.
These are not the type of thing that you can publish or otherwise exchange over non-encrypted mediums and then you end up in a catch-22 situation where you need to encrypt that key and then you have the same problem.
What are you going to use to exchange that key?
Envelope encryption can be the best of both if you use asymmetric key encryption keys.
Symmetric keys are generally used to encrypt or decrypt things when speed is a priority and generally these can be secured with asymmetric keys for flexibility.
When using KMS and AWS though, the key encryption keys are also symmetric but they're locked inside the product.
In either case by using KMS or another key management system together with envelope encryption you have less data to send to or receive from the key storage service.
And in addition to this, unique data encryption keys can be used per object, which limits the blast radius if any of these keys are ever leaked or otherwise exposed.
At this point that's everything I wanted to cover in this envelope encryption fundamentals lesson.
So go ahead and complete the video and when you're ready I'll look forward to you joining me in the next.
-
Welcome to this lesson where I want to provide a quick foundation into encryption.
Now I want to keep foundation lessons as short as possible so let's jump in and get started.
Before we get started though, I just want to cover the topics that we're going to go through in this lesson.
I'll be starting by talking about the different approaches to encryption, so encryption at rest and encryption in transit.
I'll follow up by talking about the different concepts, so the different components and how those fit together.
I'll cover symmetric encryption, asymmetric encryption, including the differences between those two, and I'll finish up the lesson talking about signing and then steganography.
Now I'll get started by talking about the different approaches to encryption, so we'll do that first.
There are two main approaches to encryption that you will see used within AWS and the real world in general.
Each of these is aimed at solving a very different problem.
First we've got encryption at rest and second encryption in transit.
Encryption at rest is designed to protect against physical theft and physical tampering, and a common example of this is a user with an encrypted laptop.
So Nat's using her laptop as she would with any other device, but her laptop is busy encrypting or scrambling any data that it writes to the internal storage, and then decrypting that data when it reads it from the same storage into memory.
Now there's a special piece of data that's used to encrypt and decrypt that data, and it's only known to Nat.
Now the proper word for this is secret.
Now with laptop encryption, this is either the password for the user logging into the laptop, or a piece of data that's derived from that, but in other types of encryption, it's more complex than that.
What this means though, is that if Nat's laptop is stolen or tampered with, the data is encrypted at rest without the information required to decrypt it.
It's useless to an attacker.
If somebody steals a laptop without the passcode that Nat uses, all they have is a laptop with encrypted or scrambled data, which is useless.
Encryption at rest is also used fairly commonly within cloud environments.
Your data is stored on shared hardware, and it's done so in an encrypted form.
Even if somebody else could find and access the base storage device that you were using, they couldn't access your data.
Encryption at rest is generally used where only one party is involved, in this case Nat, and that party is the only one who knows the encryption and decryption key.
The other approach to encryption is known as encryption in transit, and this is aimed at protecting data while it's being transferred between two places.
So when Nat is communicating with her bank, the data is encrypted before it exits Nat's laptop, and decrypted by the bank when it arrives, and the same process is followed in reverse.
So the bank encrypts any data that's destined for Nat's laptop, and Nat's laptop performs the decryption process.
What you're essentially doing with encryption in transit is to apply an encryption wrapper, a tunnel, around the raw data, and anyone looking from the outside would just see a stream of scrambled data.
Encryption in transit is generally used when multiple individuals or systems are involved.
So let's move on and talk about encryption concepts.
In this part of the lesson, I want to introduce some encryption terms.
Not all of these are immediately intuitive, and so if you haven't heard of these before, I want to confirm your understanding because I'll be using them throughout the course.
Now we'll start with plaintext, and this is a horrible term to use for this thing, because the name gives you the impression that it's text data, and it isn't always.
Plaintext is unencrypted data.
It can be text, but it doesn't have to be.
It can also be images or even applications.
Plaintext is data that you can load into an application and use, or you can load and immediately read that data.
The next term is an algorithm, and an algorithm is a piece of code, or more specifically a piece of maths which takes plaintext and an encryption key, which I'll talk about shortly, and it generates encrypted data.
Now common examples of algorithms are Blowfish, AES, RC4, DES, RC5, and RC6.
When an algorithm is being used, it needs the plaintext, and it needs a key.
And a key is the next term I want to talk about.
A key at its simplest is a password, but it can be much more complex.
When an algorithm takes plaintext and a key, the output that it generates is ciphertext.
Now just like plaintext, ciphertext isn't always text data.
Ciphertext is just encrypted data.
So the relationship between all these things is that encryption, it takes plaintext, it uses an algorithm and a key, and it uses those things to create ciphertext.
Decryption is just the reverse.
It takes ciphertext, it takes a key, and it generates plaintext.
Now this is not all that complex at a high level, but like most things in tech, there are some details which you need to understand.
First I want to focus on the encryption key for the next part of the lesson.
The type of key influences how encryption is used.
So let's look at the different types of keys and different types of encryption.
The first type of encryption key that I want to talk about is a symmetric key.
Symmetric keys are used as part of a symmetric encryption process.
Now it's far easier to show you an example visually rather than just explain it.
So here goes.
Now as everybody knows at this point I'm a fan of animals, specifically cats.
What you might not know is I'm also a fan of robots.
And everybody knows that cats want to achieve world domination, and robots are working towards the robot apocalypse.
In this example, they've allied.
They created a plan for world domination.
So on the left we've got the cat supreme ruler, and on the right we've got the robot general.
Both leaders want to exchange data, their battle plans, and they want to do that without humans being able to read them in a plaintext form.
They need to ensure that the battle plans are only ever exchanged using ciphertext, so the humans never see the plaintext battle plans.
So step one is they agree on an algorithm to use, in this case AES 256.
And they set to work preparing to send the plaintext battle plans.
Now the cat ruler, because he's the party sending the data, he needs to generate a symmetric encryption key, so he needs to create that and keep it safe.
A symmetric encryption algorithm is used, and this accepts the key and the plaintext battle plans.
And once it's accepted both of those, it performs encryption and it outputs ciphertext, the encrypted battle plans.
The encrypted battle plans are now secure, because they're ciphertext and nobody can decipher them without the key.
They can be sent over any transmission method, even an insecure way to the robot general.
The encryption removes the risk of transmitting this data in the open, so even if we handed the ciphertext over to an untrustable party and asked them to deliver it to the robot general, that would still be safe because the ciphertext is undecipherable without the key.
But this is where the problem starts for our rulers.
The robot general doesn't have the key which was used to encrypt the data.
With symmetric encryption, the same key is used for both the encryption and decryption processes.
So we need to find a way to get the robot general a copy of the key that was used to encrypt the data.
So how do we do that?
Well, we could transfer it electronically, but that's a bad idea because if the humans get the key, it's all over.
They can also decrypt the data.
We could arrange an in-person meetup, but for anything which is really sensitive, this is less than ideal because the people meeting to exchange the key could be intercepted on their way.
We could encrypt the encryption key and then transfer that key.
Now, that would be safe because the encryption key would be protected, but we'd still need to find a safe way of transferring the key that was used to encrypt the encryption key, and that gets really complex really quickly.
This is why symmetric encryption is great for things like local file encryption or disk encryption on laptops, but not so useful for situations where the data needs to be transferred between two remote parties, because arranging the transit of the key is the problem, and generally we need to do that in advance so there's no delay in decrypting the data.
If the data that we're transferring is time-sensitive, the transit of the encryption key needs to happen in advance, and that's the most complex part of this method of encryption.
Now, if we did have a way to transfer the key securely, then the same algorithm would decrypt the data using the key and the ciphertext, and then we'd return the original plaintext battle plans.
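The symmetric round trip can be sketched as follows. This uses a toy repeating-key XOR standing in for a real algorithm such as AES-256, purely to show that the same key both encrypts and decrypts:

```python
import secrets

def toy_cipher(data, key):
    # Stand-in for a real symmetric algorithm such as AES-256:
    # XOR with a repeating key. XOR is its own inverse, so the
    # SAME function and the SAME key both encrypt and decrypt.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(32)      # the cat ruler's symmetric key

ciphertext = toy_cipher(b"attack at dawn", key)   # safe to transmit openly
# ...the hard part: getting `key` to the robot general securely...
plaintext = toy_cipher(ciphertext, key)  # general decrypts with the same key
assert plaintext == b"attack at dawn"
```

The code makes the problem obvious: the ciphertext can travel over any channel, but the recipient cannot do anything without a copy of `key`, and that key exchange is exactly what symmetric encryption leaves unsolved.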
But there's another way of doing it, and that's to use asymmetric encryption, and this addresses some of the problems that our rulers are facing.
It makes it much easier to exchange keys because the keys used in asymmetric encryption are themselves asymmetric.
Now, let's look at exactly what this means.
To use asymmetric encryption, the first stage is for the cat ruler and the robot general to agree on an asymmetric algorithm to use, and then create encryption keys for the algorithm, which logically enough will be asymmetric encryption keys.
Asymmetric encryption keys are formed of two parts, a public key and a private key.
For both sides to be able to send and receive to each other, then both sides would need to make both public and private keys.
To keep the diagram simple, we're going to use the example where the cat ruler will be sending the battle plans to the robot general, so only the robot general in this scenario will need to generate any keys.
Now, a public key can be used to generate ciphertext, which can only be decrypted by the corresponding private key.
The public key cannot decrypt data that it was used to encrypt, only the private key can decrypt that data.
This means the private key needs to be guarded really carefully because it's what's used to decrypt data.
If it leaks, the battle plans could be compromised.
The public key, it's just used to encrypt, and so the robot general uploads his public key to his cloud storage so that anyone can access it.
The worst that anyone who obtains the robot general's public key could do is use it to encrypt plaintext into ciphertext that only the robot general can decrypt.
So there's no downside to anyone getting hold of the robot general's public key.
So with asymmetric encryption, there's no requirement to exchange keys in advance.
As long as the robot general uploaded his public key to somewhere that was accessible to the world, then the first step would be for the cat ruler to download the robot general's public key.
Remember, this isn't sensitive.
Anyone can use it to encrypt data for the robot general, and that's it.
That's the only thing that the public key can do in this scenario.
So using the general's public key and the plaintext battle plans, the asymmetric algorithm would generate some ciphertext.
The ciphertext can then be transmitted to the robot general, and once received, only the robot general could decrypt that data.
This time, though, there's no key exchange required because the rulers are using asymmetric encryption.
The general already has his private key, and so he provides that private key and the ciphertext to the algorithm, which decrypts the ciphertext back into plaintext, and then the robot general has a copy of plaintext battle plans.
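The public/private relationship can be illustrated with textbook RSA using deliberately tiny primes. This is an illustration only; real keys are 2048 bits or more and use padding schemes such as OAEP:

```python
# Textbook RSA with tiny primes - far too small (and unpadded) for
# real use, but it shows the asymmetric property.
p, q = 61, 53
n = p * q                   # 3233, shared by both keys
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent  -> public key (e, n)
d = pow(e, -1, phi)         # private exponent -> private key (d, n)

message = 65                # plaintext encoded as a number < n

# Anyone holding the PUBLIC key can encrypt...
ciphertext = pow(message, e, n)
# ...but only the PRIVATE key holder can decrypt.
assert pow(ciphertext, d, n) == message
# The public key cannot reverse its own encryption:
assert pow(ciphertext, e, n) != message
```

This is why the robot general can publish his public key freely: it can only lock data, never unlock it, so no prior key exchange is needed.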
Asymmetric encryption is generally used where two or more parties are involved, and generally when those parties have never physically met before.
It's used by PGP, a popular email and file encryption system.
It's used by SSL and TLS, which encrypt browser communications.
And it's used by SSH, a popular method to securely access servers using key-based authentication.
Now, asymmetric encryption is computationally much more difficult to do than symmetric, and so many processes use asymmetric encryption to initially agree on and communicate a symmetric key, and then that symmetric key is used for communication between the two parties from that point onward.
Okay, so this is the end of part one of this lesson.
It was getting a little bit on the long side, and so I wanted to add a break.
It's an opportunity just to take a rest or grab a coffee.
Part two will be continuing immediately from the end of part one.
So go ahead and complete the video, and when you're ready, join me in part two.
-
Welcome back.
In this video, I want to talk in general about application layer firewalls, also known as layer 7 firewalls, named after the layer of the OSI model that they operate at.
Now I want to keep this video pretty generic and talk about how AWS implement this within their product set in a separate video.
So let's jump in and get started.
Now before I talk about the high level architecture and features of layer 7 firewalls, let's quickly refresh our knowledge of layer 3, 4 and 5.
So we start with a layer 3 and 4 firewall, which is helping to secure the Categorum application.
Now this is accessed by millions of people globally because it's that amazing.
Now because this is layer 3 and 4, the firewall sees packets and segments, IP addresses and ports.
It sees two flows of communications, requests from the laptop to the server, and then responses from the server back to the laptop.
Because this firewall is limited to layer 3 and 4 only, these are viewed as separate and unrelated.
You need to think of these as different streams of data, request and response, even though they're part of the same communication from a human perspective.
Now if we enhance the firewall, this time adding session capability, then the same communication between the laptop and server can be viewed as one.
The firewall understands that the request and the response are part of the same session, and this small difference both reduces the admin overhead, so one rule instead of two, but this also lets you implement more contextual security, where you can think of response traffic in the context that it's response to an original request, and treat that differently than traffic in the same direction, which is not a response.
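The session behaviour described above can be sketched as a tiny rule check, where outbound requests create state and inbound traffic is only allowed if it matches that state. The addresses are invented for illustration:

```python
# Minimal sketch of stateful (session-aware) filtering: outbound
# requests record a session entry, and inbound traffic is only
# allowed when it matches an existing session. Unsolicited inbound
# traffic is dropped.
sessions = set()

def outbound(src_ip, src_port, dst_ip, dst_port):
    sessions.add((src_ip, src_port, dst_ip, dst_port))
    return "ALLOW"

def inbound(src_ip, src_port, dst_ip, dst_port):
    # A response arrives with source/destination reversed
    # relative to the original request.
    if (dst_ip, dst_port, src_ip, src_port) in sessions:
        return "ALLOW"   # response to something we initiated
    return "DROP"        # unsolicited, no matching session

outbound("10.0.0.5", 54321, "203.0.113.10", 443)
assert inbound("203.0.113.10", 443, "10.0.0.5", 54321) == "ALLOW"
assert inbound("198.51.100.9", 443, "10.0.0.5", 54321) == "DROP"
```

A stateless layer 3/4 firewall would need two separate rules for the two directions; the session table is what collapses them into one and lets response traffic be treated in context.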
Now this next point is really important.
In both cases, these firewalls don't understand anything above the layer at which they operate.
The top firewall operates at layers 3 and 4, so it understands layers 1, 2, 3 and 4.
The bottom firewall does this, plus layer 5.
Now what this means is that both of them can see IP addresses, ports, flags, and the bottom one can do all this, and additionally, it can understand sessions.
Neither of them though can understand the data which flows over the top of this.
They have no visibility into layer 7, for example, HTTP.
So they can't see headers or any of the other data that's been transferred over HTTP.
To them, the layer 7 stuff is opaque.
A cat image is the same as a dog image, is the same as some malware, and this is a significant limitation, and it exposes the things that we're protecting to a wide range of attacks.
Now layer 7 firewalls fix many of these limitations, so let's take a look at how.
Let's consider the same architecture where we have a client on the left, and then a server or application on the right that we're trying to protect.
In the middle we have a layer 7 firewall, and so that you'll remember it's a layer 7 firewall, let's add a robot, a smarter robot.
With this firewall, we still have the same flow of packets and segments, and a layer 7 firewall can understand all of the lower layers, but it adds additional capabilities.
Let's consider this example where the Categorum application is connected using a HTTPS connection.
So encrypted HTTP, where HTTP is the layer 7 protocol.
The first important thing to realize is that layer 7 firewalls understand various layer 7 protocols, and the example we're stepping through is HTTP.
So they understand how that protocol transfers data, its architecture, headers, data, hosts, all the things which happen at layer 7 or below.
It also means that it can identify normal or abnormal elements of a layer 7 connection, which means it can protect against various protocol specific attacks or weaknesses.
In this example, an HTTPS connection to the Categorum server, the HTTPS connection would be terminated on the layer 7 firewall.
So while the client thinks that it's connecting to the server, the HTTPS tunnel is stripped away, leaving just HTTP, which the firewall can analyze as it transits through.
So a new HTTPS connection would be created between the layer 7 firewall and the back end server.
So from the server and client's perspective, this process is occurring transparently.
The crucial part of this is that between the original HTTPS connection and the new HTTPS connection, the layer 7 firewall sees an unencrypted HTTP connection.
So this is plain text, and because the firewall understands the layer 7 protocol, it can see and understand everything about this protocol stream.
Data at layer 7 can be inspected, blocked, replaced, or tagged, and this might be protecting against adult content, spam, off-topic content, or even malware.
So in this example, you might be looking to protect the integrity of the Categorum application.
You'll logically allow cat pictures, but might be less okay with doggoes.
You might draw a line and not allow other animals.
Sheep, for example, might be considered spam.
Maybe you're pretty open and inclusive and only block truly dangerous content such as malware and other exploits.
Because you can see and understand one or more application protocols, you can be very granular in how you allow or block content.
You can even replace content.
So if adult images flow through, these can be replaced with a nice kitten picture or other baby animals.
You can even block specific applications such as Facebook and even block the flow of business data leaving the organization onto services such as Dropbox.
The key thing to understand is that the layer 7 firewall keeps all of the layer 3, 4, and 5 features, but can react to the layer 7 elements.
This includes things like DNS names which are used, the rate of flow, for example counting connections per second, and you can even react to content or headers, whatever elements are contained in that specific layer 7 protocol which the firewall understands.
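As a sketch, a layer 7 rule engine operating on the decrypted HTTP stream might look like this. The hostnames, MIME types, and actions are all invented for illustration:

```python
# Toy layer 7 rule engine: once the firewall has the plaintext HTTP
# exchange, it can match on hosts, headers, and content, things a
# layer 3/4 firewall can never see.
BLOCKED_HOSTS = {"dropbox.example.com", "facebook.example.com"}
BLOCKED_TYPES = {"application/x-malware"}

def inspect(host, headers):
    if host in BLOCKED_HOSTS:
        return "BLOCK"                  # block entire applications
    if headers.get("Content-Type") in BLOCKED_TYPES:
        return "BLOCK"                  # block dangerous content
    if headers.get("Content-Type") == "image/dog":
        return "REPLACE:image/kitten"   # swap doggos for kittens
    return "ALLOW"

assert inspect("categorum.example.com", {"Content-Type": "image/cat"}) == "ALLOW"
assert inspect("dropbox.example.com", {}) == "BLOCK"
```

The allow/block/replace actions here correspond directly to the content filtering, application blocking, and content replacement capabilities described above.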
Now some layer 7 firewalls only understand HTTP, some understand SMTP which is the protocol used for email delivery.
The limit is only based on what the firewall software supports.
Now that's everything that I wanted to cover at a high level.
Coming up in future videos, I'm going to be covering how AWS implements layer 7 firewall capability into its product set.
For now though, this high level understanding is what I wanted to help with in this video.
So go ahead and complete the video.
Thanks for watching.
And when you're ready, I look forward to you joining me in the next.
-
Welcome back and in this lesson I want to talk about Jumbo Frames as well as how they're supported within AWS.
Now let's jump in straight away and get started.
So what is a Jumbo Frame?
Well the maximum Ethernet V2 frame size is 1500 bytes.
That's what you should think about when you think of a standard frame.
Anything bigger than this is classified as a Jumbo Frame.
But generally when most people refer to Jumbo Frames they mean a frame with a maximum size of 9000 bytes.
Now let's visually explore what Jumbo Frames are and how they help improve performance in certain networking situations.
Imagine the situation where you have four EC2 instances, A, B, C and D.
Now A and B are connected and C and D are connected.
A and B are using standard frames to communicate and C and D are using Jumbo Frames.
All frames have a static part which is known as the frame overhead.
Regardless of the size of the frame this is largely a standard size.
And then there's the frame payload and this varies in size based on the data that the frame is carrying up to a maximum size.
For normal frames this is up to 1500 bytes.
For Jumbo Frames this maximum increases to 9000 bytes in size.
So this is what communications between instances A and B might look like, and remember, they're using standard frames.
Well data is split between frames and the frames are transmitted on the shared medium and there are two things that you should consider.
First there's always this frame overhead regardless of the payload size.
And second there's always space between frames.
Time when nothing is being transmitted.
This is used as one method of demarcation between frames.
So with normal frames you have a high ratio of overhead to payload size and this is pretty inefficient.
You also have more frames because each frame can carry up to 1500 bytes.
This means more frames per amount of data and this means more space between frames.
And logically this means more wasted time on the medium.
Now compare this to how the same transfer might look with Jumbo Frames.
See the difference?
You have more payload data per frame.
This means that the frame overhead represents a much smaller component of the overall frame and this is more efficient.
Also larger frames mean more data per frame.
This means fewer frames for a given amount of data.
Fewer frames means less wasted time on the medium, because there's less space between frames, and this is more efficient.
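The efficiency argument can be made concrete with a rough calculation, assuming around 38 bytes of per-frame cost on the wire (header and FCS plus preamble and inter-frame gap; exact figures vary by configuration):

```python
# Rough comparison of standard vs jumbo frames for the same transfer.
OVERHEAD = 18 + 20   # assumed per-frame bytes: header/FCS + preamble/gap

def frames_and_efficiency(total_bytes, mtu):
    frames = -(-total_bytes // mtu)           # ceiling division
    on_wire = total_bytes + frames * OVERHEAD # payload plus frame costs
    return frames, total_bytes / on_wire      # frame count, efficiency

std_frames, std_eff = frames_and_efficiency(9_000_000, 1500)   # 6000 frames
jumbo_frames, jumbo_eff = frames_and_efficiency(9_000_000, 9000)  # 1000 frames

assert jumbo_frames < std_frames   # far fewer frames for the same data
assert jumbo_eff > std_eff         # higher payload-to-overhead ratio
```

Six times fewer frames means six times fewer per-frame overheads and inter-frame gaps, which is exactly why jumbo frames help demanding workloads.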
So that's why Jumbo Frames are good.
They're much more efficient when you're dealing with demanding network applications.
But as always there are some considerations that you need to be aware of.
First, Ethernet, which is layer 2, generally carries higher level data inside its frames.
So if you're communicating between two devices using IP, for example, then you need to make sure that every step along the path supports the same size of Jumbo Frames.
Because if you don't, you could have fragmentation.
Also not everything within AWS supports Jumbo Frames.
Which means that if you have a communication path where you do choose to use Jumbo Frames and part of it isn't compatible with Jumbo Frames then you will get fragmentation and that will cause you problems.
So these are all things to keep in mind if you do decide to use Jumbo Frames.
Now let's quickly step through some areas of AWS which do and don't support Jumbo Frames before we finish this lesson.
So first any traffic which is outside of a single VPC does not support Jumbo Frames.
Traffic over an inter-region VPC peering connection doesn't support Jumbo Frames.
Same region VPC peering is compatible so that's important to understand.
Traffic over a VPN does not support Jumbo Frames.
Traffic using an internet gateway does not support Jumbo Frames.
You are able to use Jumbo Frames over a Direct Connect, and the Transit Gateway can support frames which are larger than usual, but only up to 8,500 bytes.
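Those support rules can be summarised as a small lookup. The table below uses 9001 bytes as the jumbo MTU supported by EC2 instances within a VPC; values should always be verified against current AWS documentation:

```python
# MTU limits per AWS network path, as covered in this lesson (bytes).
AWS_PATH_MTU = {
    "same-region VPC peering":  9001,   # jumbo frames supported
    "inter-region VPC peering": 1500,   # no jumbo frame support
    "site-to-site VPN":         1500,   # no jumbo frame support
    "internet gateway":         1500,   # no jumbo frame support
    "Direct Connect":           9001,   # jumbo frames supported
    "transit gateway":          8500,   # larger frames, but capped
}

def max_safe_mtu(path):
    # End to end, the usable frame size is the smallest MTU of any hop,
    # otherwise fragmentation occurs.
    return min(AWS_PATH_MTU[hop] for hop in path)

assert max_safe_mtu(["Direct Connect", "transit gateway"]) == 8500
assert max_safe_mtu(["same-region VPC peering"]) == 9001
```

The `min` is the key point: one non-jumbo hop anywhere on the path drags the whole communication down to that hop's limit.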
So that's everything I wanted to cover in this lesson.
With that being said go ahead and complete this video and when you're ready I look forward to you joining me in the next.
-
Welcome back and in this lesson I want to talk in a little bit of detail about fiber optic cables.
If you're involved with networking in any way, then you need to be comfortable with how they work, their characteristics and the differences between the various types.
Now this matters for the real world and if you need to work with any physical networking services, including AWS Direct Connect.
Now let's just jump in and get started.
Fiber optic cables are an alternative way to transmit data versus copper cables.
Where copper cables use changes in electrical signals to transmit data over a copper medium, fiber optic cables use a thin glass or plastic core surrounded by various protective layers.
The core is about the diameter of a human hair.
The cable that you can see and touch is that core surrounded by a lot of protection.
If you just handle the core on its own, it would be pretty susceptible to damage.
Now fiber optic cables, as the name suggests, transmit light over the glass or plastic medium, so light over glass or plastic versus electrical signals over copper.
These unique elements mean that the cable can cover much larger distances and achieve much higher speeds versus copper.
At the time of creating this lesson, this can be in the regions of terabits per second.
Now fiber is also resistant to electromagnetic interference known as EMI and it's less prone to being impacted by water ingress into a space where the cables are being used.
In general, fiber offers a more consistent experience versus copper cable and so in modern networks, specifically those which require higher speeds and or larger distances, fiber is often preferred versus copper.
You're going to see over time fiber will gradually overtake copper in almost all wired networking situations as it becomes cheaper and easier to install.
It's already used for many global networking, metro networking and even many local area networking applications.
So that's important to understand.
It's going to be used more and more in the future.
Now in terms of physical makeup, this is what a typical fiber cable looks like externally.
There are two different things that you need to think about and this is common with any form of networking cable or any form of cable in general.
There's the cable choice which will influence the physical characteristics so how fast data can be transferred and over what distances.
Then you have the cable connectors and these generally affect what the cable can be connected to so linked to physical ports on the networking equipment but they can also influence some of the physical characteristics in terms of distance ability and speeds.
Now I'm not going to detail all the different fiber cable types in this lesson.
Instead I've included a link attached to this lesson which gives you a good overview of the common cable and connector types within the industry.
Now I want to spend a few minutes talking about the physical construction of fiber cables.
I've talked about how the connectors are different but it's important that you understand the different physical makeups of the cable itself.
Now when we're talking about fiber cable, you'll see it referred to using an X/Y notation.
For example, 9/125.
This defines two parts of the cable.
The first part is the diameter of the core in microns, and the second part is the diameter of the cladding, the first layer that surrounds the core.
Both of these are in microns, and there are a thousand microns in a millimeter.
Now let's talk about the different components of a fiber cable and I'm mainly going to be covering the fiber core, the fiber cladding and the buffer.
And don't worry, the functions of each of these will make sense in a second.
Now we're going to start with the core and this is the part of the cable where the light is carried which allows for the transfer of data.
This part on a 9/125 cable is tiny; it's only 9 microns across.
So if you look at the right of the screen, you have the core, then you have the transmit and receive optics at each side, and the light flows through the core along the length of the cable.
I'll talk more about this in a moment but the light doesn't flow in a straight line.
It bounces off the inside edges of the core, which is why the size of the core matters.
Now surrounding the core is the cladding and this is a material which has a lower refractive index versus the core.
And this means that it acts as a container to keep the light bouncing around inside the core.
Different types of fiber have different sizes of core and cladding and both of them radically impact the physical characteristics of the cable.
And this is where we move on to the less important parts of the cable.
The core and cladding are directly responsible for the physical transmission of data, but now we're moving on to the protective elements.
So next we have the buffer and the buffer is the thing which adds strength to the cable.
The core and cladding are generally really good at helping to carry data but really bad at withstanding any physical shocks.
The buffer is a combination of coating and strengthening materials such as fibers made out of other materials.
Now don't confuse this type of fiber with fiber optic.
This is just a length of material which is designed to absorb shocks and give physical protection.
And this buffer is surrounded by the cable jacket which is the physical thing that you see when looking at the fiber cable.
It generally has a defined color such as green, orange or blue and this generally gives some indication on the overall capabilities of the cable.
Now I've included a link attached to this lesson which details the common colors and what they mean in terms of the fiber optic cable capabilities.
Now one more thing that I want to cover before we finish this lesson and that's the difference between single mode and multi mode fiber.
The difference might seem pretty nuanced but it's really important to understand.
Let's start with single mode.
Single mode generally has a really small core and it's often 8 to 9 microns in size.
And it generally uses a yellow jacket but this isn't always the case.
Now because of this tiny core, light generally follows a single, fairly straight path down the core.
There's generally very little bounce and so very little distortion.
Single mode fiber generally uses lasers and so it's more expensive in terms of the optics versus multi mode.
Single mode because of this lack of distortion is great for long distances and it can achieve excellent speeds over these long distances.
Now it's not the fastest type of fiber cable but if you need a combination of high speeds and long distances then it's by far the best.
Single mode fiber can reach kilometers and can do really high speeds at those distances.
Generally in production usage this is 10 gig and above.
Single mode fiber cable itself is generally cheaper than multi mode fiber, but the transceivers, the things which send and receive light, are more expensive versus multi mode.
But this is changing over time and this will probably mean more and more single mode usage within most business applications.
Now multi mode cable generally has a much bigger core and often uses either an orange and aqua or other coloured jacket.
The bigger core means that it can be used with a wider range of wavelengths of light generally at the same time.
For simplicity think of this as different colours of light travelling down the same fiber cable so these different colours can be sent at the same time and don't interfere with each other.
More light means more data so multi mode tends to be faster.
But that comes with a trade off, because this leads to more distortion of the light over longer distances.
For that reason multi mode historically has been used for shorter cable runs where speed and cost effectiveness is required.
Now multi mode generally has cheaper LED based optics rather than the more expensive laser optics used within single mode fiber.
Multi mode fiber cable will generally use the prefix OM, so OM2, OM3, OM4 and so on, each improving on the previous one's capabilities.
And multi mode as I mentioned before has a larger core.
At a high level the type of cable you decide on is determined by the distances that you need to transmit data and the speed.
So single mode is just more stable; there's less distortion, and it can do better speeds over longer distances.
And as the optics prices come down I suspect more people will use single mode even for shorter distances.
Now one final thing I want to cover before we finish up with this lesson and that's fiber optic transceivers.
Now these are generally the things which you plug into networking equipment which allows the networking equipment to connect to fiber optic cables.
They're known as SFP transceiver modules, also known as SFPs or mini-GBICs, and SFP stands for small form-factor pluggable.
Now these are the things which generate and send or receives light to and from the fiber optic cable.
So these plug into networking equipment, they have optics inside which generate or detect light, and they're used to translate from data to light and from light back into data that networking equipment can use.
Now these transceivers are either multi mode or single mode and they're optimised for a specific cable type.
So you generally buy a transceiver that's designed to be used with a certain type of cable and the transceivers will need to be the same type on both sides or both ends of the fiber optic cable.
Now when you're talking about the connector type and the cable, you're generally going to see terms such as 1000BASE-LX, 10GBASE-LR or 100GBASE-LR4.
And these are often specified by vendors such as AWS to give you an idea on the type of cable and the connector that you need to use to plug into their equipment.
So in the case of AWS Direct Connect the supported types are 1000BASE-LX, 10GBASE-LR and 100GBASE-LR4.
Now at this point that's everything I wanted to cover at a high level about fiber optic cables and transceivers and once again I've included some links attached to this lesson which go into a little bit more detail if you're interested.
At this point that's everything I wanted to cover to go ahead and complete the video and when you're ready I'll look forward to you joining me in the next.
Welcome back.
In this lesson, I want to cover IPsec fundamentals.
So I want to talk about what IPsec is, why it matters, and how IPsec works at a fundamental level.
Now we have a lot of theory to cover, so let's jump in and get started.
At a foundational level, IPsec is a group of protocols which work together.
Their aim is to set up secure networking tunnels across insecure networks.
For example, connecting two secure networks or more specifically, their routers, called peers, across the public internet.
Now you might use this if you're a business with multiple sites, spread around geographically and want to connect them together, or if you have infrastructure in AWS or another cloud platform and want to connect to that infrastructure.
IPsec provides authentication so that only peers which are known to each other and can authenticate with each other can connect.
And any traffic which is carried by the IPsec protocols is encrypted, which means that to anyone looking at the data being carried, it's ciphertext.
It can't be viewed and it can't be altered without being detected.
Now architecturally, it looks like this.
We have the public internet which is an insecure network full of goblins looking to steal your data.
Over this insecure network, we create IPsec tunnels between peers.
Now these tunnels exist as they're required.
Within IPsec VPNs, there's the concept of interesting traffic.
Now interesting traffic is simply traffic which matches certain rules, and these could be based on network prefixes or match more complex traffic types.
Regardless of the rules, if data matches any of those rules, it's classified as interesting traffic, and the VPN tunnel is created to carry traffic through to its destination.
Now if there's no interesting traffic, then tunnels are eventually torn down only to be reestablished when the system next detects interesting traffic.
The key thing to understand is that even though those tunnels use the public internet for transit, any data within the tunnels is encrypted while transiting over that insecure network.
It's protected.
Now to understand the nuance of what IPsec does, we need to refresh a few key pieces of knowledge.
In my fundamental section, I talked about the different types of encryption.
I mentioned symmetric and asymmetric encryption.
Now symmetric encryption is fast.
It's generally really easy to perform on any modern CPU and it has pretty low overhead.
But exchanging keys is a challenge.
The same keys are used to encrypt and decrypt.
So how can you get the key from one entity to another securely?
Do you transmit it in advance over a different medium, or do you encrypt it?
If so, you run into a catch-22 situation.
How do you securely transmit the encrypted key?
That's why asymmetric encryption is really valuable.
Now it's slower, so we don't want to be using it all the time, but it makes exchanging keys really simple because different keys are used for encryption and decryption.
Now a public key is used to encrypt data, and only the corresponding private key can decrypt that data.
And this means that you can safely exchange the public key while keeping the private key private.
So the aim of most protocols which handle the encryption of data over the internet is to start with asymmetric encryption, use this to securely exchange symmetric keys, and then use those for ongoing encryption.
Now I mentioned that because it will help you understand exactly how IPsec VPN works.
So let's go through it.
IPsec has two main phases.
If you work with VPNs, you're going to hear a lot of talk about phase one or phase two.
It's going to make sense why these are needed by the end of this lesson, but understand that there are two phases in setting up a given VPN connection.
The first is known as IKE phase one.
IKE, or Internet Key Exchange, as the name suggests, is a protocol for how keys are exchanged, in this context within a VPN.
There are two versions, IKE version one and IKE version two.
Version one logically is older, version two is newer and comes with more features.
Now you don't need to know all the detail right now, just understand that the protocol is about exchanging keys.
IKE phase one is the slow and heavy part of the process.
It's where you initially authenticate using a pre-shared key, so a password of sorts or a certificate.
It's where asymmetric encryption is used to agree on, create and share symmetric keys which are used in phase two.
The end of this phase is what's known as an IKE phase one tunnel, or a security association, known as an SA.
There's lots of jargon being thrown around, and I'll be showing you how this all works visually in just a moment.
But at the end of phase one, you have a phase one tunnel, and the heavy work of moving towards symmetric keys which can be used for encryption has been completed.
The next step is IKE phase two, which is faster and much more agile, because much of the heavy lifting has been done in phase one.
Technically, the phase one keys are used as a starting point for phase two.
Phase two is built on top of phase one and is concerned with agreeing encryption methods and the keys used for the bulk transfer of data.
The end result is an IPsec security association, a phase two tunnel which runs over phase one.
Now the reason why these different phases are split up is that it's possible for phase one to be established, then a phase two tunnel created, used and then torn down when no more interesting traffic occurs, but the phase one tunnel stays.
It means that establishing a new phase two tunnel is much faster and less work.
It's an elegant and well designed architecture, so let's look at how this all works together visually.
So this is IKE phase one.
The architecture is a simple one.
Two business sites, site one on the left with the user Bob and site two on the right with the user Julie, and in the middle, the public internet.
The very first step of this process is that the routers, the two peers at either side of this architecture need to authenticate.
Essentially prove their identity which is done either using certificates or pre-shared keys.
Now it's important to understand that this isn't yet about encryption, it's about proving identity.
Proving that both sides agree that the other side should be part of this VPN.
No keys are exchanged, it's just about identity.
Once the identity has been confirmed, then we move on to the next stage of IKE phase one.
In this stage we use a process called Diffie-Hellman Key Exchange.
Now again, I'm sorry about the jargon, but try your best to remember Diffie-Hellman known as DH.
What happens is that each side creates a Diffie-Hellman private key.
This key is used to decrypt data and to sign things.
You should remember this from the encryption fundamentals lesson.
In addition, each side uses that private key and derives a corresponding public key.
Now the public key can be used to encrypt data that only that private key can decrypt.
So at this point, each side has a private key as well as a corresponding public key.
At this point, these public keys are exchanged.
So Bob has Julie's public key and Julie has Bob's public key.
Remember, these public keys are not sensitive and can only be used normally to encrypt data for decryption by the corresponding private key.
The next stage of the process is actually really complicated mathematics, but at a fundamental level, each side takes its own private key and the public key at the other side and uses this to derive what's known as the Diffie-Hellman key.
This key is the same at both sides, but it's been independently generated.
Now again, the maths is something that's well beyond this lesson, but it's at the core of how this phase of VPN works.
And at this point, it's used to exchange other key material and agreements.
This part you can think of as a negotiation.
The result is that each side again independently uses this DH key plus the exchanged key material to generate a final phase one symmetrical key.
This key is what's used to encrypt anything passing through a phase one tunnel, known as the IKE security association.
Now, if that process seems slow and heavy, that's because it is.
It's both complex and in some ways simplistically elegant at the same time, but it means that both sides have the same symmetric key without that ever having to be passed between them.
And the phase ends with this security association in place and this can be used at phase two.
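That key agreement can be sketched in a few lines of Python. This is a toy illustration only: the prime and generator here are tiny demo values, whereas real IKE uses standardized Diffie-Hellman groups with very large primes, plus the authentication described earlier.

```python
import secrets

# Toy Diffie-Hellman sketch (demo numbers only - real IKE uses
# standardized DH groups with very large primes).
p, g = 23, 5  # publicly agreed prime modulus and generator

# Each side creates a private key and derives a public key from it.
bob_private = secrets.randbelow(p - 2) + 1
julie_private = secrets.randbelow(p - 2) + 1
bob_public = pow(g, bob_private, p)      # safe to exchange openly
julie_public = pow(g, julie_private, p)  # safe to exchange openly

# Each side combines its own private key with the other side's
# public key - both arrive at the same shared DH key independently.
bob_shared = pow(julie_public, bob_private, p)
julie_shared = pow(bob_public, julie_private, p)
assert bob_shared == julie_shared
```

Notice the shared key is never transmitted, which is exactly the property phase one relies on.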
So let's talk about that next.
So in phase two, we have a few things.
First, a DH key on both sides and the same phase one symmetric key also on both sides.
And then finally, the established phase one tunnel.
During this phase, both of the peers are wanting to agree how the VPN itself will be constructed.
The previous phase was about exchanging keys and allowing the peers to communicate.
This phase, so IKE phase two, is about getting the VPN up and running, being in a position to encrypt data.
So agreeing how, when and what.
So the first part of this is that the symmetric key is used to encrypt and decrypt agreements and pass more key material between the peers.
The idea is that one peer is informing the other about the range of cipher suites that it supports, basically encryption methods which it can perform.
The other peer, in this example, the right one, will then pick the best shared one.
So the best method which it also supports and it will let the left peer know.
And this becomes the agreed method of communication.
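That negotiation step is essentially "pick the best option both sides share", which can be pictured in a tiny Python sketch. The suite names here are made up purely for illustration:

```python
# One peer offers the encryption methods it supports, best first;
# the other peer picks the strongest method it also supports.
left_offers = ["aes256", "aes128", "3des"]  # illustrative names, ordered best-first
right_supports = {"aes128", "3des"}

# The first offered method the other side also supports wins.
agreed = next(c for c in left_offers if c in right_supports)
print(agreed)  # aes128
```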
Next, the DH key and the key material exchanged above are used to create a new key, a symmetrical IPsec key.
This is a key which is designed for large-scale data transfer.
It's an efficient and secure algorithm.
And the specific one is based on the negotiation which happened in steps one and two of this phase.
So it's this key which is used for the encryption and decryption of interesting traffic across the VPN tunnel.
Across each phase one tunnel, you actually have a pair of security associations.
One from right to left and one from left to right.
And these are the security associations which are used to transfer the data between networks at either side of a VPN.
Now there are actually two different types of VPN which you need to understand.
Policy-based VPNs and route-based VPNs.
The difference is how they match interesting traffic.
Remember, this is the traffic which gets sent over a VPN.
So with policy-based VPNs, there are rules created which match traffic.
And based on this rule, traffic is sent over a pair of security associations.
One which is used for each direction of traffic.
It means that you can have different rules for different types of traffic.
Something which is great for more rigorous security environments.
Now the other type of VPN are route-based VPNs.
And these match target traffic based on network prefix.
For example, send traffic for 192.168.0.0/24 over this VPN.
With this type of VPN, you have a single pair of security associations for each network prefix.
This means all traffic types between those networks use the same pair of security associations.
Now this provides less functionality, but it's much simpler to set up.
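Route-based matching like this is easy to sketch with Python's standard ipaddress module; the prefix here is just the example from above:

```python
import ipaddress

# Traffic is "interesting" for a route-based VPN if its destination
# falls within the prefix associated with that VPN.
vpn_prefix = ipaddress.ip_network("192.168.0.0/24")

def is_interesting(dst_ip: str) -> bool:
    return ipaddress.ip_address(dst_ip) in vpn_prefix

print(is_interesting("192.168.0.55"))  # True - inside the /24
print(is_interesting("192.168.1.55"))  # False - outside the prefix
```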
To illustrate the differences between route-based and policy-based VPNs, it's probably worth looking visually at the phase one and phase two architectures.
Let's start with a simple route-based VPN.
The phase one tunnel is established using a phase one tunnel key.
Now assuming that we're using a route-based VPN, then a single pair of security associations is created.
One in each direction using a single IPsec key.
So this means that we have a pair of security associations, essentially a single phase two tunnel, running over the phase one tunnel.
That phase two or IPsec tunnel, which is how we talk about the pair of security associations, can be dropped when there is no more interesting traffic and recreated again on top of the same phase one tunnel when new traffic is detected.
But the key thing to understand is that there's one phase one tunnel running one phase two tunnel based on routes.
Running a policy-based VPN is different.
We still have the same phase one tunnel, but over the top of this, each policy match uses an SA pair with a unique IPsec key.
And this allows us to have for the same network different security settings for different types of traffic.
In this example, infrastructure at the top, CCTV in the middle and financial systems at the bottom.
So policy-based VPNs are more difficult to configure, but do provide much more flexibility when it comes to using different security settings for different types of traffic.
Now that, at a very high level, is how VPNs function, so the security architecture, how everything interacts with everything else.
But for now, that's everything that I wanted to cover.
So go ahead and complete this video, and then when you're ready, I look forward to you joining me in the next.
Welcome back. In this video I want to cover the differences between stateful and stateless firewalls.
And to do that, I need to refresh your knowledge of how TCP and IP function.
So let's just jump in and get started.
In the networking fundamentals videos, I talk about how TCP and IP worked together.
You might already know this if you have networking experience in the real world, but when you make a connection using TCP, what's actually happening is that each side is sending IP packets to each other.
These IP packets have a source and destination IP, and are carried across local networks and the public internet.
Now TCP is a layer 4 protocol which runs on top of IP.
It adds error correction together with the idea of ports.
So HTTP runs on TCP port 80 and HTTPS runs on TCP port 443 and so on.
So keep that in mind as we continue talking about the state of connections.
So let's say that we have a user here on the left, Bob, and he's connecting to the Catergram application running on a server on the right.
What most people imagine in this scenario is a single connection between Bob's laptop and the server.
So Bob's connecting to TCP port 443 on the server, and in doing so, he gets information back.
In this case, many different cat images.
Now you know that below the surface, at layer 3, this single connection is handled by exchanging packets between the source and the destination.
Conceptually though, you can imagine that each connection, in this case, is an outgoing connection from Bob's laptop to the server.
Each one of these is actually made up of two different parts.
First, we've got the request part, where the client requests some information from the server, in this case some cat images, and then we have the response part, where that data is returned to the client.
Now these are both parts of the same interaction between the client and server, but strictly speaking, you can think of these as two different components.
What actually happens as part of this connection setup is this.
First, the client picks a temporary port, and this is known as an ephemeral port.
Now typically this port has a value between 1024 and 65535, but this range is dependent on the operating system which Bob's laptop is using.
Then once this ephemeral port is chosen, the client initiates a connection to the server using a well-known port number.
Now a well-known port number is a port number which is typically associated with one specific popular application or protocol.
In this case, TCP 443 is HTTPS.
So this is the request part of the connection.
It's a stream of data to the server.
You're asking for something, some cat pictures or a web page.
Next, the server responds back with the actual data.
The server connects back to the source IP of the request part, in this case Bob's laptop, and it connects to the source port of the request part, which is the ephemeral port which Bob's laptop has chosen.
This part is known as the response.
So the request is from Bob's laptop using an ephemeral port to a server using a well-known port.
The response is from the server on that well-known port, but Bob's laptop on the ephemeral port.
Now it's these values which uniquely identify a single connection.
So that's a source port and source IP, and a destination IP, and a destination port.
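You can see this ephemeral port behaviour for yourself with Python's standard socket module. This sketch uses a local listening socket standing in for the server's well-known port, so it runs anywhere without touching the internet:

```python
import socket

# A local listening socket stands in for the server; we bind port 0
# so the OS picks a free port (a real web server would use 443).
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
server_port = server.getsockname()[1]

client = socket.create_connection(("127.0.0.1", server_port))
src_ip, src_port = client.getsockname()  # ephemeral port chosen by the OS
dst_ip, dst_port = client.getpeername()  # the server's "well-known" port

print(f"request:  {src_ip}:{src_port} -> {dst_ip}:{dst_port}")
print(f"response: {dst_ip}:{dst_port} -> {src_ip}:{src_port}")

# From the server's side, the response goes back to that ephemeral port.
conn, addr = server.accept()
assert addr[1] == src_port
conn.close()
client.close()
server.close()
```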
Now I hope that this makes sense so far.
If not, then you need to repeat this first part of the video again, because this is really important to understand.
If it does make sense, then let's carry on.
Now let's look at this example in a little bit more detail.
This is the same connection that we looked at on the previous screen.
We have Bob's laptop on the left and the Catergram server on the right.
Obviously the left is the client and the right is the server.
I also introduced the correct terms on the previous screen, so request and response.
So the first part is the client talking to the server, asking for something, and that's the request, and the second part is the server responding, and that's the response.
But what I want to get you used to is that the directionality depends on your perspective, and let me explain what I mean.
So in this case, the client initiates the request, and I've added the IP addresses on here for both the client and the server.
So what this means is the packets will be sent from the client to the server, and these will be flowing from left to right.
These packets are going to have a source IP address of 119.18.36.73, which is the IP address of the client, so Bob's laptop, and they will have a destination IP of 1.3.3.7, which is the IP address of the server.
Now the source port will be a temporary or ephemeral port chosen by the client, and the destination port will be a well-known port.
In this case, we're using HTTPS, so TCP port 443.
Now if I challenge you to take a quick guess, would you say that this request is outbound or inbound?
If you had to pick, if you had to define a firewall rule right now, would you pick inbound or outbound?
Well, this is actually a trick question, because it's both.
From the client perspective, this request is an outbound connection.
So if you're adding a firewall rule on the client, you would be looking to allow or deny an outbound connection.
From the server perspective, though, it's an inbound connection, so you have to think about perspective when you're working with firewalls.
But then we have the response part from the server through to the client.
This will also be a collection of packets moving from right to left.
This time, the source IP on those packets will be 1.3.3.7, which is the IP address of the server.
The destination IP will be 119.18.36.73, which is the IP address of the client, so Bob's Laptop.
The source port will be TCP port 443, which is the well-known port of HTTPS, and the destination port will be the ephemeral port chosen originally by the client.
Now again, I want you to think about the directionality of this component of the communication.
Is it outbound or inbound?
Well, again, it depends on perspective.
The server sees it as an outbound connection from the server to the client, and the client sees it as an inbound connection from the server to itself.
Now, this is really important because there are two things to think about when dealing with firewall rules.
The first is that each connection between a client and a server has two components, the request and the response.
So the request is from a client to a server, and the response is from a server to a client.
The response is always the inverse direction to the request.
But the direction of the request isn't always outbound and isn't always inbound.
It depends on what that data is together with your perspective.
And that's what I want to talk about a bit more on the next screen.
Let's look at this more complex example.
We still have Bob and his laptop and the Catergram server, but now we have a software update server on the bottom left.
Now, the Catergram server is inside a subnet which is protected by a firewall, and specifically, this is a stateless firewall.
A stateless firewall means that it doesn't understand the state of connections.
What this means is that it sees the request connection from Bob's laptop to Catergram, and the response from Catergram to Bob's laptop, as two individual parts.
You need to think about allowing or denying them as two parts.
You need two rules.
In this case, one inbound rule which is the request and one outbound rule for the response.
This is obviously more management overhead.
Two rules needed for each thing.
Each thing which you as a human see as one connection.
But it gets slightly more confusing than that.
For connections to the Catergram server, so for example, when Bob's laptop is making a request, then that request is inbound to the Catergram server.
The response, logically enough, is outbound, sending data back to Bob's laptop, but it's possible to have the inverse.
Consider the situation where the Catergram server is performing software updates.
Well, in this situation, the request will be from the Catergram server to the software update server, so outbound, and the response will be from the software update server to the Catergram server, so this is inbound.
So when you're thinking about this, start with the request.
Is the request coming to you or going to somewhere else?
The response will always be in the reverse direction.
So this situation also requires two firewall rules.
One outbound for the request and one inbound for the response.
Now, there are two really important points I want to make about stateless firewalls.
First, for any servers where they accept connections and where they initiate connections, and this is common with web servers which need to accept connections from clients, but where they also need to do software updates.
In this situation, you'll have to deal with two rules for each of these, and they will need to be the inverse of each other.
So get used to thinking that outbound rules can be both the request and the response, and inbound rules can also be the request and the response.
It's initially confusing, but just remember, start by determining the direction of the request, and then always keep in mind that with stateless firewalls, you're going to need an inverse rule for the response.
Now, the second important thing is that the request component is always going to be to a well-known port.
If you're managing the firewall for the Catergram application, you'll need to allow connections to TCP 443.
The response, though, is always from the server to a client, but this always uses a random ephemeral port. Because the firewall is stateless, it has no way of knowing which specific port is used for the response, so you'll often have to allow the full range of ephemeral ports to any destination.
This makes security engineers uneasy, which is why stateful firewalls, which I'll be talking about next, are much better.
Just focus on these two key elements that every connection has a request and a response, and together with those, keep in mind the fact that they can both be in either direction, so a request can be inbound or outbound, and a response will always be the inverse to the directionality of the request.
Also, you'll need to keep in mind that any rules that you create for the response will need to often allow the full range of ephemeral ports.
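To make the two-rule pattern concrete, here's a toy stateless rule check in Python. The rule shape is invented for illustration; real firewalls express rules differently, but the point is the same: one rule for the request, and a wide ephemeral-range rule for the response:

```python
# Toy stateless firewall rules for the Catergram server (illustrative).
EPHEMERAL = range(1024, 65536)

rules = [
    ("inbound", range(443, 444)),  # request: clients -> TCP 443
    ("outbound", EPHEMERAL),       # response: server -> client's ephemeral port
]

def allowed(direction: str, dst_port: int) -> bool:
    # Stateless: each packet is judged on its own against the rules.
    return any(direction == d and dst_port in ports for d, ports in rules)

print(allowed("inbound", 443))     # True - the request reaches the server
print(allowed("outbound", 50123))  # True - response to an ephemeral port
print(allowed("inbound", 22))      # False - anything unmatched is dropped
```

Note how the second rule has to allow the whole ephemeral range, which is exactly what makes security engineers uneasy.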
That's not a problem with stateful firewalls, which I want to cover next.
So we're going to use the same architecture.
We've got Bob's laptop on the top left, the Catergram server on the middle right, and the Software Update server on the bottom left.
A stateful firewall is intelligent enough to identify the response for a given request, since the ports and IPs are the same, it can link one to the other, and this means that for a specific request to Catergram from Bob's laptop to the server, the firewall automatically knows which data is the response, and the same is true for software updates.
For a given connection to a software update server, the request, the firewall is smart enough to be able to see the response or the return data from the software update server back to the Catergram server, and this means that with a stateful firewall, you'll generally only have to allow the request or not, and the response will be allowed or not automatically.
This significantly reduces the admin overhead and the chance for mistakes, because you just have to think in terms of the directionality and the IPs and ports of the request, and it handles everything else.
In addition, you don't need to allow the full ephemeral port range, because the firewall can identify which port is being used, and implicitly allow it, based on it being the response to a request that you allow.
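A stateful firewall's behaviour can be sketched as tracking allowed requests and implicitly matching the reversed IPs and ports for responses. Again, this is a toy model, not how any real firewall is implemented:

```python
# Toy stateful tracker: remember each allowed request's 4-tuple,
# and recognise a response as the same tuple reversed.
active = set()

def allow_request(src_ip, src_port, dst_ip, dst_port):
    active.add((src_ip, src_port, dst_ip, dst_port))

def is_allowed_response(src_ip, src_port, dst_ip, dst_port):
    # A response is just the request tuple with src and dst swapped.
    return (dst_ip, dst_port, src_ip, src_port) in active

# Bob's laptop (ephemeral port 50123) makes a request to Catergram.
allow_request("119.18.36.73", 50123, "1.3.3.7", 443)

# The matching response is allowed automatically - no ephemeral-range rule.
print(is_allowed_response("1.3.3.7", 443, "119.18.36.73", 50123))  # True
print(is_allowed_response("1.3.3.7", 443, "119.18.36.73", 50999))  # False
```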
Okay, so that's how stateless and stateful firewalls work, and I know it's been a little bit abstract, but this has been intentional, because I want you to understand how they work conceptually, before I go into more detail with regards to how AWS implements both of these different types of firewall.
Now, at this point, I've finished with the abstract descriptions, so go ahead and finish this video, and when you're ready, I'll look forward to you joining me in the next.
Welcome back. In this lesson I'm going to be talking about the Border Gateway Protocol, known as BGP.
Now BGP is a routing protocol, and that means it's a protocol which is used to control how data flows from point A, through intermediate points such as B and C, and arrives at its destination.
So let's jump in and get started.
BGP as a system is made up of lots of self-managing networks known as autonomous systems or AS.
Now an AS could be a large network, it could be a collection of routers, but in either case they're controlled by one single entity.
From a BGP perspective it's viewed as a black box, an abstraction away from the detail which BGP doesn't need.
Now you might have an enterprise network with lots of routers and complex internal routing, but all BGP needs to be aware of is your network as a whole.
So autonomous systems are black boxes which abstract away from the detail and only concern themselves with network routing in and out of your autonomous system.
Now each autonomous system is allocated a number by IANA, the Internet Assigned Numbers Authority.
Generally ASNs are 16 bits in length and range from 0 through to 65,535.
Now most of that range are public ASNs which are directly allocated by IANA.
However the range from 64,512 to 65,534 are private and can be utilized within private peering arrangements without being officially allocated.
Now you can get 32-bit ASNs, but this lesson will focus on 16-bit ones, since we're only covering the basic architecture of BGP.
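Those 16-bit ranges are easy to express as a quick check; the function name here is just for illustration:

```python
def is_private_asn(asn: int) -> bool:
    # 16-bit ASNs run from 0 to 65535; 64512-65534 are private,
    # and most of the rest are publicly allocated by IANA.
    return 64512 <= asn <= 65534

print(is_private_asn(64512))  # True - start of the private range
print(is_private_asn(200))    # False - a publicly allocated ASN
```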
Now ASNs, or autonomous system numbers, are the way that BGP identifies different entities within the network, so different peers.
It's the way that BGP can distinguish between your network or your ASN and my network.
BGP is designed to be reliable and distributed and it operates over TCP using port 179 and so it includes error correction and flow control to ensure that all parties can communicate reliably.
It isn't however automatic, you have to manually create a peering relationship, a BGP relationship between two different autonomous systems.
And once done those two autonomous systems can communicate what they know about network topology.
Now a given autonomous system will learn about networks from any of the peering relationships that it has and anything that it learns it will communicate out to any of its other peers.
And so because of the peering relationship structure you rapidly build up a larger BGP network where each individual autonomous system is exchanging network topology information.
And that's how the internet functions from a routing perspective.
All of the major core networks are busy exchanging routing and topology information between each other.
Now BGP is what's known as a path vector protocol and this means that it exchanges the best path to a destination between peers.
It doesn't exchange every path, only the best path that a given autonomous system is aware of, and that path is known as an AS path, an autonomous system path.
Now BGP doesn't take into account link speed or condition, it focuses on paths.
For example, can we get from A to D using A, B, C and D or is there a direct link between A and D?
It's BGP's responsibility to build up this network topology map and allow the exchange between different autonomous systems.
Now while working with AWS or integrating AWS networks with more complex hybrid architectures, you might see the terms iBGP or eBGP.
Now iBGP focuses on routing within an autonomous system, and eBGP focuses on routing between autonomous systems.
And this lesson will focus on BGP as it relates to routing between autonomous systems because that's the type that tends to be used most often with AWS.
Now I need to stress at this point that this lesson is not a deep dive into BGP.
All I need you to understand at this point is the high level architecture so that you can make sense of how it's used within AWS.
So let's look at this visually and hopefully it will make more sense.
So I want to step through an example of a fairly common BGP style topology.
So this is Australia, the land of crocodiles and kangaroos.
And in this example we have three major metro areas.
We have Brisbane on the east and this has an IP address range of 10.16.0.0/16 and the router is using the IP of 10.16.0.1 and this has an autonomous system number of 200.
We have Adelaide on the south coast using a network range of 10.17.0.0/16 and the router is using 10.17.0.1 and this has an autonomous system number of 201.
And then finally between the two in the middle of Australia we have Alice Springs using the network 10.18.0.0/16.
The router uses 10.18.0.1 and the autonomous system number is 202.
Now between Brisbane and Adelaide, and between Adelaide and Alice Springs, is a 1 gigabit fiber link, and then connecting Brisbane and Alice Springs is a 5 megabit satellite connection with an unlimited data cap.
BGP at its foundation is designed to exchange network topology and it does this by exchanging paths between autonomous systems.
So let's step through an example of how this might look using this network structure and we start at the top right with Brisbane.
And this is how the route table for Brisbane might look at this point.
The route table contains the destination, in this case we only have the one route and it's the local network for Brisbane.
The next column in the route table is the next hop, so what IP address is the first or next hop to reach that network, and 0.0.0.0 in this case means that it's locally connected, because it's the local network that exists in the Brisbane site.
And then finally we have the AS path which is the autonomous system path and this shows the path of the way to get from one autonomous system to another and the I in this case means that it's the origin so it's this network.
Now the two other locations will have a similar route table at this stage, so Adelaide will have one for 10.17.0.0/16 and Alice Springs will have one for 10.18.0.0/16, and both of those will have 0.0.0.0 as the next hop and I for the AS path, because they're all local networks.
So each of these autonomous systems so 200, 201 and 202 can have peering relationships configured so let's assume that we've linked all three so Brisbane and Alice Springs, Alice Springs and Adelaide and then finally Adelaide and Brisbane.
Each of those peers will exchange the best paths that they have to a destination with each other, so Adelaide will send Brisbane the networks that it knows about, and at this point it's only itself. When it exchanges or advertises this, it prepends its AS number onto the path.
So Brisbane now knows that to get to the 10.17.0.0 network it needs to send the data to 10.17.0.1 and because of the AS path it knows that it goes through autonomous system 201 which is Adelaide and then it reaches the origin or I and so it knows the data only has to go through one autonomous system to reach its final destination.
Now in addition to this, Brisbane will also receive an additional path advertised from Alice Springs, in this case over the satellite connection, and Alice Springs prepends its AS number 202 onto that path, so Brisbane knows how to get to the 10.18.0.0/16 network.
The next hop is 10.18.0.1 which is the Alice Springs router and it needs to go via the 202 autonomous system number which belongs to Alice Springs.
So at this point Brisbane knows about both of the other autonomous systems and it's able to reach both of them from a routing perspective.
Now in addition to that Adelaide will also learn about the Brisbane autonomous system because it has a peering relationship with the Brisbane autonomous system and in addition Adelaide will also in the same way learn about the network in Alice Springs because it also has a peering relationship with the Alice Springs ASN 202.
And then finally because Alice Springs also has BGP peering relationships between it and both of the other autonomous systems it will also learn about the Brisbane autonomous system and the Adelaide autonomous system.
At this point all three networks are able to route traffic to the other two, so if you look at the route table for Alice Springs, it knows how to get to the 10.16 and 10.17 networks via the ASNs of 200 and 201 respectively.
All three autonomous systems can talk to both of the others and this has all been configured automatically once those BGP peering relationships were set up between each of the autonomous systems but it doesn't stop there.
This is a ring network and so there are two ways to get to every other network clockwise and anticlockwise.
Adelaide is aware of how to get to Alice Springs, so ASN 202, because it's directly connected to it, and so it will advertise this to Brisbane, prepending its own ASN onto the AS path, and so Brisbane can now reach Alice Springs via Adelaide.
So using the 201 and then 202 AS path.
Notice how the next hop for the route given to Brisbane is the Adelaide router so 10.17.0.1 and so if we use this route table entry the traffic would go first to Adelaide and then forward it on to Alice Springs.
Likewise Adelaide is aware of Brisbane and so it will advertise that to Alice Springs, prepending its own AS number onto the AS path.
So notice how this new route on the Alice Springs route table, the one for 10.16.0.0/16, is going via Adelaide, so 10.17.0.1.
The AS path is 201 which is Adelaide, 200 which is Brisbane and then the origin.
Now lastly, Adelaide will also learn an additional route to Alice Springs, but this time via Brisbane, and Brisbane would prepend its own ASN onto the AS path. So in this case we've got the additional route at the bottom for 10.18.0.0/16, but the next hop is Brisbane, 10.16.0.1, and the AS path is 200, which is Brisbane, then 202, which is Alice Springs, and then we've got the origin.
Autonomous systems advertise the shortest route that they're aware of to any other autonomous systems that they have peering relationships with.
Now at this point we're in a situation where we actually have a fully highly available network with paths to every single network.
If any of these three sites failed then BGP would be aware of the route to the working sites.
Notice that the indirect routes that I've highlighted in blue at the bottom of each route table have a longer AS path.
These are not the preferred ones because they're not the shortest paths to the destination.
So Brisbane for example if it was sending traffic to Alice Springs it would use the shorter path, the direct satellite connection.
By default BGP always uses the shortest path as the preferred one.
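Now just as a quick illustration, and this is only a sketch rather than anything a real BGP daemon does, here's the shortest-path preference in code. The next hops and AS numbers are the ones from our Australia example, where Brisbane knows two paths to Alice Springs.

```python
# Illustrative sketch only - not a real BGP implementation.
# Brisbane (AS 200) knows two paths to the Alice Springs network,
# and BGP prefers the route with the shortest AS path.

routes_to_alice_springs = [
    # (next_hop, as_path) - the origin 'I' is implied at the end
    ("10.18.0.1", [202]),        # direct, over the satellite link
    ("10.17.0.1", [201, 202]),   # indirect, via Adelaide over fiber
]

def best_path(routes):
    """Pick the route with the fewest autonomous systems in its path."""
    return min(routes, key=lambda route: len(route[1]))

next_hop, as_path = best_path(routes_to_alice_springs)
print(next_hop, as_path)  # -> 10.18.0.1 [202]
```

So with no other influence, the single-AS satellite path wins over the two-AS fiber path.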
Now there are situations where you want to influence which path is used to reach a given network.
Imagine that you're the network administrator for the Alice Springs network.
Now that autonomous system has two networking connections.
The fiber connection coming from Adelaide and the satellite connection between it and Brisbane.
Now ideally you want to ensure that the satellite connection is only ever used as a backup when absolutely required.
And that's for two reasons.
Firstly it's a slower connection.
It only operates at 5 megabits.
And also because it's a satellite connection it will suffer from significantly higher latencies than the fiber connection between Alice Springs and Adelaide and then Adelaide and Brisbane.
Now because BGP doesn't take performance or link condition into account, the satellite connection, because it's the shortest path, will always be used for any communications between Alice Springs and Brisbane.
But you are able to use a technique called AS path prepending which means that you can configure BGP at Alice Springs to make the satellite link look worse than it actually is.
And you do this by adding additional autonomous system numbers to the path.
You make it appear to be longer than it physically is.
Remember BGP decides everything based on path length.
And so by artificially lengthening the path between Alice Springs and Brisbane it means that Brisbane will learn a new route.
The old one will be removed.
And so the new shortest path between Brisbane and Alice Springs will be the one highlighted in blue at the bottom of the Brisbane route table.
This one will be seen as shorter than the artificially extended one using AS path prepending.
And so now all of the data between Brisbane and Alice Springs will go via the fiber link from Brisbane through Adelaide and finally to Alice Springs.
BGP thinks that the path from Brisbane to Alice Springs directly over the satellite connection has three hops versus the two hops for the fiber connection via Adelaide.
And so this one will always be preferred.
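And again as a sketch only, not real router configuration, here's the effect of that prepending on the same path-selection logic. Alice Springs (AS 202) adds extra copies of its own ASN to what it advertises over the satellite link, so from Brisbane's point of view the fiber route via Adelaide is now the shorter one.

```python
# Illustrative sketch of AS path prepending. Alice Springs (AS 202)
# artificially lengthens the path it advertises over the satellite
# link, so the fiber route via Adelaide wins the shortest-path choice.

def best_path(routes):
    return min(routes, key=lambda route: len(route[1]))

routes_to_alice_springs = [
    ("10.18.0.1", [202, 202, 202]),  # satellite: 202 prepended twice -> 3 hops
    ("10.17.0.1", [201, 202]),       # fiber via Adelaide -> 2 hops
]

next_hop, as_path = best_path(routes_to_alice_springs)
print(next_hop)  # -> 10.17.0.1 (traffic now goes via Adelaide)
```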
So in summary, a BGP autonomous system advertises the shortest path to a destination that it's aware of to all of the other BGP routers that it's peered with.
It might be aware of more paths but it only advertises the shortest one.
And it means that all BGP networks work together to create a dynamic and ever-changing topology of all interconnected networks.
It's how many large enterprise networks function.
It's how the internet works.
And it's how routes are learned and communicated when using Direct Connect and dynamic VPNs with AWS.
And that's all of the theory that I wanted to cover in this lesson.
So go ahead, finish off this video.
And when you're ready, I look forward to you joining me in the next.
-
- Sep 2024
-
learn.cantrill.io
-
Welcome back in this video I want to talk about SSL and TLS.
At a very high level they do the same thing.
SSL stands for Secure Sockets Layer, whereas TLS is Transport Layer Security.
TLS is just a newer and more secure version of SSL.
Now we've got a lot to cover so let's jump in and get started.
TLS and historically SSL provide privacy and data integrity between client and server.
If you browse to this site, to Netflix, to your bank, and to almost any responsible internet business, TLS will be used for the communications between the client and the server.
TLS performs a few main functions and while these are separate, they're usually performed together and referred to as TLS or SSL.
First, TLS ensures privacy and it does this by ensuring communications made between a client and server are encrypted so that only the client and server have access to the unencrypted information.
When using TLS the process starts with an asymmetric encryption architecture.
If you've watched my encryption 101 video, you'll know that this means that a server can make its public key available to any clients so that clients can encrypt data that only that server can decrypt.
Asymmetric encryption allows for this trustless encryption where you don't need to arrange for the transfer of keys over a different secure medium.
As soon as possible though you should aim to move from asymmetric towards symmetric encryption and use symmetric encryption for any ongoing encryption requirements because computationally it's far easier to perform symmetric encryption.
So part of the negotiation process which TLS performs is moving from asymmetric to symmetric encryption.
Another function that TLS provides is identity verification.
This is generally used so that the server that you think you're connecting to, for example Netflix.com, is in fact Netflix.com.
TLS is actually capable of performing full two-way verification but generally for the vast majority of situations it's the client which is verifying the server and this is done using public key cryptography which I'll talk more about soon.
Finally TLS ensures a reliable connection.
Put very simply, TLS protects against the alteration of data in transit.
If data is altered then the protocol can detect this alteration.
Now in order to understand TLS a little better let's have a look at the architecture visually.
When a client initiates communications with a server and TLS is used there are three main phases to initiate secure communication.
First, cipher suites are agreed, then authentication happens, and then keys are exchanged.
These three phases start from the point that a TCP connection is active between the client and the server so this is layer four.
And at the end of the three phases there's an encrypted communication channel between a client and a server.
Each stage is responsible for one very specific set of functions.
The first stage focuses on cipher suites.
Now a cipher suite is a set of protocols used by TLS.
This includes a key exchange algorithm, a bulk encryption algorithm and a message authentication code algorithm or MAC.
Now there are different algorithms and versions of algorithms for each of these, and specific versions and types grouped together are known as a cipher suite.
So to communicate, the client and server have to agree on a cipher suite to use.
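If you want to see what a client's cipher suite list actually looks like, Python's standard ssl module (which uses OpenSSL underneath) can show you; the exact suites you see depend on your OpenSSL build, so treat this as a peek rather than a definitive list.

```python
import ssl

# A client hello advertises the cipher suites the client supports.
# This prints a few of the suites a default Python/OpenSSL client
# would offer - names and count vary with the local OpenSSL version.
ctx = ssl.create_default_context()
for suite in ctx.get_ciphers()[:5]:
    print(suite["name"], suite["protocol"])
```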
Now let's step through this visually.
We have a client and a server and at this starting point we already have a TCP connection so TCP segments between the client and the server.
The first step is that the client sends a client hello, and this contains the SSL or TLS version, a list of cipher suites that the client supports, and other things like a session ID and extensions.
Hopefully at this point the server supports one of the cipher suites that the client also supports.
If not then the connection fails.
If it does then it picks a specific one and it returns this as part of the server hello.
Now included in this server hello is also the server certificate which includes the server's public key.
Now this public key can be used to encrypt data which the client can send to the server which only the server can decrypt using its private key.
But keep in mind that this is asymmetric encryption and it's really computationally heavy and we want to move away from this as soon as possible.
Now at some point in the past the server has generated a private and public key pair and it's the public part of this which is sent back to the client.
But, and this is an important part of TLS, there's ID validation.
If the client just assumed that the server it's communicating with is valid, then you could exploit this.
I could create a server which pretends to be Netflix.com without being Netflix.com and this is suboptimal.
So it's important to understand and I'll talk more about this in a second that part of the functionality provided by TLS is to verify that the server that you're communicating with is the server that it claims to be.
The next step of the TLS process is authentication.
The client needs to be able to validate that the server certificate the server provides is valid, that its public key is valid and as such that the server itself is valid.
To illustrate how this works let's rewind a little from a time perspective.
So the server has a certificate.
Now the certificate you can think of as a document, a piece of data which contains its public key, its DNS name and other pieces of organizational information.
Now there's another entity involved here known as a public certificate authority or CA.
Now there are a few of these run by independent companies and your operating system and browser trust many of these authorities and which ones is controlled by the operating system and browser vendors.
Now at some point in the past our server and let's say this is for Categoram.io created a public and private key pair and in addition it generated a certificate signing request or CSR.
It provided the CSR to one of the public certificate authorities and in return this CA delivered back a signed certificate.
The CA signed the certificate, which means that anyone can verify that the CA signed it.
If your operating system or browser trusts the certificate authority then it means your operating system or browser can verify that the CA that it trusts signed that cert.
This means that your OS or browser trusts the certificate and now the Categoram.io server that we're using as an example has this certificate and that certificate has been provided to the client as part of the server hello in Stage 1 of the TLS negotiation.
In Stage 2 of authentication our client which has the server certificate validates that the public certificate authority signed that certificate.
It makes sure that it was signed by that specific CA, it makes sure that the certificate hasn't expired, it verifies that the certificate hasn't been revoked and it verifies that the DNS name that the browser is using in this case Categoram.io matches the name or the names on the certificate.
This proves that the server ID is valid and it does this using this third party CA.
Next the client attempts to encrypt some random data and send it to the server using the public key within the certificate and this makes sure that the server has the corresponding private key.
This is the final stage of authentication.
If we're at this point and everything is good, the client trusts the server, its ID has been validated and the client knows that the server can decrypt data which is being sent.
It's at this point that we move on to the final phase which is the key exchange phase.
This phase is where we move from asymmetric encryption to symmetric encryption.
This means it's much easier computationally to encrypt and decrypt data at high speeds.
We start this phase with a valid public key on the client and a matching private key on the server.
The client generates what's known as a pre-master key and it encrypts this using the server's public key and sends it through to the server.
The server decrypts this with its private key and so now both sides have the exact same pre-master key.
Now based on the cipher suite that's being used, both sides now follow the same process to convert this pre-master key into what's known as a master secret.
And because the same process is followed on the same pre-master key, both sides then have the same master secret.
The master secret is used over the lifetime of the connection to create many session keys and it's these keys which are used to encrypt and decrypt data.
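To make that idea concrete, here's a heavily simplified sketch of the principle only. This is NOT the real TLS PRF or HKDF, and the labels and inputs are made up for illustration; the point is that both sides run the same deterministic function over the same shared secret, so they derive identical keys without any key ever crossing the network.

```python
import hashlib
import hmac

# Simplified illustration of key derivation - NOT the actual TLS
# algorithm. Both client and server run this same function over the
# same inputs and therefore compute the same keys independently.

def derive(secret: bytes, label: bytes) -> bytes:
    """Deterministically derive 32 bytes from a secret and a label."""
    return hmac.new(secret, label, hashlib.sha256).digest()

# Hypothetical values standing in for the real handshake inputs.
pre_master = b"exchanged-under-the-servers-public-key"
randoms = b"client-random" + b"server-random"

master_secret = derive(pre_master, b"master secret" + randoms)
session_key = derive(master_secret, b"key expansion" + randoms)

# Run on both ends, these produce byte-for-byte identical keys.
```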
So at this point both sides confirm the process and from this point onwards the connection between the client and server is encrypted using different session keys over time.
So this is the process that's followed when using TLS.
Essentially we verified the identity of the server that we're communicating with, we've negotiated an encryption method to use, we've exchanged asymmetric for symmetric encryption keys and we've initiated this secure communications channel.
And this process happens each and every time that you communicate with the server using HTTPS.
Now that's everything I wanted to cover within this video so go ahead and complete the video and when you're ready I'll look forward to you joining me in the next.
-
-
-
Welcome back, and in this fundamentals lesson, I want to step you through how to convert decimal numbers into binary and back again, specifically relating to IP version 4 addresses.
Now, if this is something which is new to you, I really suggest taking this slowly and focusing on each step of the process.
If you don't understand something, just pause, think about it, and if needed, restart the video.
I promise, once you can do this process, it will really help you with networking.
The process is easy once you've done it a few times, so I want to explain it as simply as I can at first, and then suggest that you get some practice.
So let's jump in and get started.
Before we start with the actual process, I want to set the scene as to why you need to understand it.
When dealing with IP addresses as a human, you might see something which looks like this.
133.33.33.7.
This is an IP address represented in what's known as dotted decimal notation.
Four decimal numbers ranging from 0 to 255 separated by periods.
So that's what a human sees.
A computer sees this, the same IP address put in binary.
Now, this is actually a 32-bit binary number, specifically four sets of 8 bits.
Now, 8 bits is known as a byte, each byte is also known as an octet, but crucially, it's the same number as the decimal version.
To understand much of how the internet protocol works, so things like IP ranges, prefixes, subnet masks and routing, you have to be able to convert between the two, so decimal and binary.
All of these things only make sense when viewing IP addresses in binary, so when it comes to networking, being able to convert between decimal and binary or vice versa really is a superpower, and that's what I want to teach you in this lesson.
So let's move on.
I want to start with the process of converting a decimal IP address into binary.
This is actually the more complex direction.
Decimal to binary is confusing at first, whereas binary to decimal is much easier.
When you're just learning this process, I find that it's easier to tackle it in byte-sized pieces.
Let's say that you want to convert this IP, so 133.33.33.7, into binary.
Well, each decimal in between the dots, so 133, 33, 33 and 7, is a number between 0 and 255.
So I find it easier, at least initially, to tackle each one of these numbers individually, working left to right.
So that's what we're going to do in this lesson.
I'm going to step through the maths involved to convert 133 and 33 into binary, and then I'm going to ask you to follow the same process for the last two numbers.
Now, before I step through this process, let me introduce this table.
This helps us with binary maths because it tells us what the decimal value is for each position in binary.
Now, this table has eight positions, shown as 1 to 8 on the table, and so it works for 8-bit binary numbers.
Remember, each part of an IP address, each number between the periods, is actually an 8-bit binary number.
Now, each position in this table has a value, so 128, then 64, then 32, 16, 8, 4, 2 and finally 1.
It means a binary 1 in that position has the associated value in decimal.
So a 1 at position 1 is 128 in decimal, a 1 at position 4 is 16 in decimal, and a 1 at position 8 is 1 in decimal.
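As a small aside, those eight position values aren't arbitrary; they're just the descending powers of two, which you can generate in one line:

```python
# The eight binary position values are descending powers of two:
# 2^7 down to 2^0.
position_values = [2 ** i for i in range(7, -1, -1)]
print(position_values)  # -> [128, 64, 32, 16, 8, 4, 2, 1]
```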
We're going to use this table to help us convert decimal numbers into binary.
We're going to move in the table from left to right, starting at position 1, moving through to position 8.
At each position, we're going to follow a process.
It's going to be the same process with a number of rules, and if you learn how to follow these rules, it makes the conversion really easy.
So once again, for a given decimal part of the IP address, the first one we're going to look at is 133.
We're going to work through this table left to right, and as I mentioned earlier, I'm going to demonstrate the process for 133 and 33, and then you're going to follow the process for the remaining 33 and 7.
So we start on the table in position 1, we compare our decimal numbers, so 133, to the value in the current position of the table, which is 128.
Now, rule number one is that if our number is smaller than the corresponding binary position value, then you write 0 in that position of the table, in this case position 1, and then you move on to the next position and start the process again using rule number one.
However, if our number is greater or equal to the binary position value, then rule 2 applies.
Now, this is the case for us because 133 is more than 128.
So what we do is we minus the binary position value, so 128, from our number, so that means 133 minus 128, this leaves us with 5.
So 5 is the remaining decimal value that we have.
And then we write 1 in this column and move on to position 2.
A binary 1 in this position is equal to 128, so all that we've done is transfer 128 of the decimal value into the corresponding binary value.
So we've added a binary 1 here and we've removed 128 from our decimal number, leaving 5.
And now we can continue the process to convert this remaining 5 into binary.
So now we reset the process, we move to position 2 in the table and start evaluating again using rule number 1.
So we're comparing our remaining value, so we have 5 in decimal, against the binary position value in position 2, which is 64.
So now we compare our number 5 to the value in the table, which is 64.
So in our case, our decimal value of 5 is smaller than the binary value of position 2, and so we're using rule number 1.
We add 0 into this column in the table and then we go right back to the start of the process.
We move on to position 3 and we start evaluating against rule number 1.
We repeat the same process for positions 3, 4 and 5.
Our decimal value of 5 is less than all of those values, so 32, 16 and 8.
So we add 0 in each of those columns and we move on.
So we've evaluated all of those and they match rule number 1 and so we add 0 and we move on.
We're just following the same basic rules.
So now we're at position 6 in the table, we compare our remaining decimal number, so 5, against the value in position 6, which is 4.
Now it's not smaller, so we move past rule 1, it is larger or equal and so we use rule number 2.
We add a value of 1 in this binary position and minus this binary position value from our number, so 5 minus 4 equals 1.
So we have 1 remaining in decimal.
A binary value of 1 in position 6 is equal to 4 in decimal.
So we've just added this to the binary number and removed it from the decimal number.
What this process is doing, bit by bit, is removing value from the decimal value and adding it to the binary 1.
First it was 128 of the decimal number and just now it was 4 of the decimal number.
And next we move on to the next position in the table.
So now we're looking at position 7.
We're comparing our remaining decimal value of 1 with the binary position value.
First we're evaluating against rule 1 and because our remaining value of 1 is less than the binary position value, we use a 0 in this column and we move on to the next position.
So now we're on position 8.
Again we do a comparison.
Our decimal value is 1.
The binary position value is 1 and so we evaluate this against rule 1.
It's not less.
We evaluate it against rule 2.
It is equal or larger.
And so we add a 1 in this column at the table and then we remove the binary position value from our decimal value.
So 1 minus 1 is 0 and because we have 0, the process is finished.
And this means that 133 in decimal is expressed as 1, 0, 0, 0, 0, 1, 0, 1 in binary.
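If it helps to see it written as code, here's a short sketch of exactly the two-rule process we just walked through, for a single octet (a decimal number from 0 to 255):

```python
# The rule-based decimal-to-binary process for one octet.

def octet_to_binary(value: int) -> str:
    bits = ""
    for position_value in [128, 64, 32, 16, 8, 4, 2, 1]:
        if value < position_value:
            bits += "0"              # rule 1: write 0, move on
        else:
            bits += "1"              # rule 2: write 1, subtract
            value -= position_value
    return bits

print(octet_to_binary(133))  # -> 10000101
print(octet_to_binary(33))   # -> 00100001
```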
Now this is one example, so this is converting one part of an IP address.
Let's run this process through again with the next part of this IP address.
So this time we're converting the second decimal part of the IP, so 33, into binary.
We start again with our same process and our same table and we move through the table left to right.
We start with position 1.
We compare our decimal number, so 33, to position 1's value, which is 128.
Is our number less than 128?
Yes it is and so we use rule number 1.
We add a 0 into this position at the table and then move on to position 2.
We compare our decimal number to the value in position 2.
Is our number less than 64?
Yes it is and so we use rule number 1.
We add a 0 into the table and move on to position 3.
We compare our decimal number, so 33, to position 3's value, which is 32.
Is our decimal number less than 32?
No it's not, so we skip rule number 1 and move to rule number 2.
We minus 32 from our decimal number 33, leaving us with 1.
So we've transferred 32 of the decimal value into the binary value, and then we move on to position 4 in the table.
Now I hope at this point you're starting to feel more comfortable, and we can speed up.
At this point we're at position 4 and we can simply repeat the process.
We compare values if our remaining decimal value is less than the value in the table for that position and we add 0.
And this is the case for positions 4, 5, 6 and 7.
So we add 0's into all of those positions.
Then finally we have position number 8.
And we compare our remaining decimal number, which is 1, with the value in position 8, which is also 1.
Is our value less?
No it's not, so rule number 1 isn't used.
Is our value larger or equal?
Yes it is, and so we use rule number 2.
So we write down 1 in position number 8.
We minus the binary position value, which is 1, from this number, leaving us 0, and that means we've finished the process.
So the binary value for 33 is 0, 0, 1, 0, 0, 0, 0, 1.
Now I want you to try and do this process on your own, on some paper, for the third value of 33, without looking at this lesson.
This process is the same, but it will let you practice yourself.
And once you've reached the same value as me, you can follow the same process for the fourth decimal value of 7.
Once you've done them all, you'll have the full 32-digit binary number, which represents 133.33.33.7.
So go ahead and pause the video and do the final two calculations yourself, and you can resume it once you've finished.
Okay, so I hope that was pretty easy for you, and what you should have for the last decimal value of 7, is a binary value of 0, 0, 0, 0, 0, 1, 1, 1.
If you did, awesome.
That means you've completed it successfully.
If not, don't worry, just watch the first part of this video again, and repeat the process.
Now at this point I want to show you how this works in reverse, converting binary to decimal.
This time let's say that you want to convert this binary IP address into decimal.
So both of these are one and the same.
To start with, we break the IP address into four sections, and this is the same for the binary and the dotted decimal versions.
So each decimal part between the dots represents the corresponding 8-bit binary value.
So I've colored these in red, blue, green, and yellow, so left, middle left, middle right, and right.
So this is one and the same.
They're the same IP address expressed in binary on the left and decimal on the right.
Now just as before, the easiest way to tackle this is to work left to right, working on one octet, one by one.
The process of converting this in this direction is much easier.
We have the conversion table just as before.
Each binary bit left to right has a value.
128 on the left down to 1 on the right, and we read them one by one.
So we take the 8-bit binary components, each of the different colored squares, we go through them from left to right, and we look at whether there's a one or a zero in each position.
If there's a one, we take the corresponding number from the table at the bottom of the screen.
So in the first example, the one on the left represents 128.
So we write that down.
We write 128 and then a plus.
If there's a zero, and this is the case for the following four bits, so zero, zero, zero, zero, then we add a zero in our equation.
So we have 128, plus zero, plus zero, plus zero, plus zero.
Then we have another one, so we look at the table at the bottom for the corresponding binary position value.
In this case, the number four, so we add that.
Then we have a zero, so we put zero.
And then we have another one, so we again look for the corresponding value in the table, which is a one.
And we add all of those together to get the result.
In this case, 133.
So this represents the first part of the IP address, and we follow the same process for all of the remaining parts.
So zero, zero, one, zero, zero, zero, zero, one represents 33.
We just take each of the ones in each of the components of the IP address and look up the corresponding binary position value in the table.
So in the second component of the IP address, there's a one in position three, which is 32, and a one in position eight, which is one, and this represents 33.
The same is true in the third component, and in the fourth component, there's a one in position six, seven, and eight, which represents four, two, and one.
So we add four, two, and one together to get the resulting value of 7.
So if you follow that process bit by bit for each eight-bit component of the binary IP address, then you will end up with the dotted decimal version of the IP address, which is exactly what we've done here.
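That whole bit-by-bit process can also be sketched in a few lines of code, again just as an illustration of the method described above:

```python
# Sketch of the binary-to-decimal process: each 8-bit group is summed
# using the position-value table, then the four results are joined
# with dots to give the dotted decimal IP address.

def binary_octet_to_decimal(bits: str) -> int:
    position_values = [128, 64, 32, 16, 8, 4, 2, 1]
    return sum(v for bit, v in zip(bits, position_values) if bit == "1")

def binary_ip_to_decimal(binary_ip: str) -> str:
    octets = [binary_ip[i:i + 8] for i in range(0, 32, 8)]
    return ".".join(str(binary_octet_to_decimal(o)) for o in octets)

print(binary_ip_to_decimal("10000101001000010010000100000111"))
# -> 133.33.33.7
```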
So why don't you go ahead and pick a random IP address and follow the same process through and see if you get the correct result.
And then once you have the correct result, take that dotted decimal IP address and follow the previous process to convert it from decimal to binary, and you should end up with the same result that you started with.
If you do, that means you understand the end-to-end process of binary to decimal and decimal to binary, and I promise you this does represent a superpower, so it's really important that you understand.
At this point, that's everything I want to cover, so go ahead and complete the video, and when you're ready, I'll look forward to you joining me in the next.
-
-
-
Welcome back and in this lesson I want to cover a few really important topics which will be super useful as you progress your general IT career, but especially so for anyone who is working with traditional or hybrid networking.
Now I want to start by covering what a VLAN is and why you need them, then talk a little bit about trunk connections, and finally cover a more advanced version of VLANs called QinQ.
Now I've got a lot to cover so let's just jump in and get started straight away.
Let's start with what I've talked about in my technical fundamentals lesson so far.
This is a physical network segment.
It has a total of eight devices, all connected to a single, layer 2 capable device, a switch.
Each LAN, as I talked about before, is a shared broadcast domain.
Any frames which are addressed to all Fs will be broadcast on all ports of the switch and reach all devices.
Now this might be fine with eight devices but it doesn't scale very well.
Every additional device creates yet more broadcast traffic.
Because we're using a switch, each port is a different collision domain and so by using a switch rather than a layer 1 hub we do improve performance.
Now this local network also has three distinct groups of users.
We've got the game testers in orange, we've got sales in blue and finance in green.
Now ideally we want to separate the different groups of devices from one another.
In larger businesses you might have a requirement for different segments of the network from normal devices, for servers and for other infrastructure.
Different segments for security systems and CCTV and maybe different ones for IoT devices and IP telephony.
Now if we only had access to physical networks this would be a challenge.
Let's have a look at why.
Let's say that we took each of the three groups and split them into either different floors or even different buildings.
On the left finance, in the middle game testers and on the right sales.
Each of these buildings would then have its own switch and the switches in those buildings would be connected to devices also in those buildings.
Which for now is all the finance, all the game tester and all the sales teams and machines.
Now these switches aren't connected and because of that each one is its own broadcast domain.
This would be how things would look in the real world if we only had access to physical networking.
And this is fine if different groups don't need to communicate with each other, so we don't require cross domain communication.
The issue right now is that none of these switches are connected so the switches have no layer 2 communications between them.
If we wanted to do cross building or cross domain communications then we could connect the switches.
But this creates one larger broadcast domain which moves us back to the architecture on the previous screen.
What's perhaps more of a problem in this entirely physical networking world is what happens if a staff member changes role but not building.
In this case moving from sales to game tester.
To support this move, you'd need to physically run a new cable from the middle switch to the building on the right.
If this happens often it doesn't scale very well and that is why some form of virtual local area networking is required.
And that's why VLANs are invaluable.
Let's have a look at how we support VLANs using layer 2 of the OSI 7-layer model.
This is a normal Ethernet frame.
In the context of this lesson what's important is that it has a source and destination MAC address fields together with a payload.
Now the payload carries the data.
The source MAC address is the MAC address of the device which is creating and sending the frame.
The destination MAC address can contain a specific MAC address, which means that it's a unicast frame, so a frame that's destined for one other device.
Or it can contain all F's which is known as a broadcast.
And it means that all of the devices on the same layer 2 network will see that frame.
What a standard frame doesn't offer us is any way to isolate devices into different parts, different networks.
And that's where a new standard comes in handy which is known as 802.1Q, also known as .1Q. .1Q changes the frame format of the standard Ethernet frame by adding a new 32-bit field in the middle, shown here in cyan.
The maximum size of the frame as a result can be larger to accommodate this new data. 12 bits of this 32-bit field can be used to store values from 0 through to 4095.
This represents a total of 4096 values.
This is used for the VLAN ID or VID.
A 0 in this 12-bit value signifies no VLAN and 1 is generally used to signify the management VLAN.
The others can be used as desired by the local network admin.
What this means is that any .1Q frame can be a member of one of over 4,000 VLANs.
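The tag layout described above can be sketched in a few lines of Python. The TPID value 0x8100 and the split of the tag control information into a 3-bit priority, 1-bit DEI and 12-bit VLAN ID follow the 802.1Q layout; the function names themselves are just illustrative:

```python
# Pack and unpack an 802.1Q tag: a 16-bit TPID (0x8100) followed by a
# 16-bit TCI holding a 3-bit priority (PCP), a 1-bit DEI and a 12-bit VID.
import struct

TPID_DOT1Q = 0x8100

def build_dot1q_tag(vid: int, pcp: int = 0, dei: int = 0) -> bytes:
    assert 0 <= vid <= 4095, "VLAN ID is a 12-bit value"
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", TPID_DOT1Q, tci)

def parse_vid(tag: bytes) -> int:
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == TPID_DOT1Q, "not an 802.1Q tag"
    return tci & 0x0FFF   # the low 12 bits are the VLAN ID

tag = build_dot1q_tag(vid=1337)
print(parse_vid(tag))   # 1337
```

Because the VID is only 12 bits, values can range from 0 through 4095, which is where the "over 4,000 VLANs" figure comes from.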
And this means that you can create separate virtual LANs or VLANs in the same layer 2 physical network.
A broadcast frame, so anything that's addressed to all Fs, would only reach the devices which are in the same VLAN.
Essentially, it creates over 4,000 different broadcast domains in the same physical network.
You might have a VLAN for CCTV, a VLAN for servers, a VLAN for game testing, a VLAN for guests and many more.
Anything that you can think of and can architect can be supported from a networking perspective using VLANs.
But I want you to imagine even bigger.
Think about a scenario where you as a business have multiple sites and each site is in a different area of the country.
Now each site has the same set of VLANs.
You could connect them using a dedicated wide area network and carry all of those different company specific VLANs and that would be fine.
But what if you wanted to use a comms provider, a service provider who could provide you with this wide area network capability?
What if the comms provider also uses VLANs to distinguish between their different clients?
Well, you might face a situation where you use VLAN 1337 and another client of the comms provider also uses VLAN 1337.
Now to help with this scenario, another standard comes to the rescue, 802.1AD.
And this is known as Q in Q, also known as provider bridging or stacked VLANs.
This adds another space in the frame for another VLAN field.
So now instead of just the one field for 802.1Q VLANs, now you have two.
You keep the same customer VLAN field and this is known as the C tag or customer tag.
But you add another VLAN field called the service tag or the S tag.
This means that the service provider can use VLANs to isolate their customer traffic while allowing each customer to also use VLANs internally.
As the customer, you can tag frames with your VLANs and then when those frames move onto the service provider network, they can tag with the VLAN ID which represents you as a customer.
Once the frame reaches another of your sites over the service provider network, then the S tag is removed and the frame is passed back to you as a standard .1Q frame with your customer VLAN still tagged.
Q in Q tends to be used for larger, more complex networks and .1Q is used in smaller networks as well as cloud platforms such as AWS.
For the remainder of this lesson, I'm going to focus on .1Q though if you're taking an advanced networking course of mine, I will be returning to the Q in Q topic in much more detail.
For now though, let's move on and look visually at how .1Q works.
This is a cut down version of the previous physical network I talked about, only this time instead of the three groups of devices we have two.
So on the left we have the finance building and on the right we have game testers.
Inside these networks we have switches and connected to these switches are two groups of machines.
These switches have been configured to use 802.1Q and ports have been configured in a very specific way which I'm going to talk about now.
So what makes .1Q really cool is that I've shown these different device types as separate buildings but they don't have to be.
Different groupings of devices can operate on the same layer 2 switch and I'll show you how that works in a second.
With 802.1Q, ports on switches are defined as either access ports or trunk ports, and an access port generally has one specific VLAN ID or VID associated with it.
A trunk conceptually has all VLAN IDs associated with it.
So let's say that we allocate the finance team devices to VLAN 20 and the game tester devices to VLAN 10.
We could easily pick any other numbers, remember we have over 4,000 to choose from, but for this example let's keep it simple and use 10 and 20.
Now right now these buildings are separate broadcast domains because they have separate switches which are not connected and they have devices within them.
Two laptops connected to switch number one for the finance team and two laptops connected to switch number two for the game tester team.
Now I mentioned earlier that we have two types of switch ports in a VLAN cable network.
The first are access ports and the ports which the orange laptops on the right are connected to are examples of access ports.
Access ports communicate with devices using standard Ethernet which means no VLAN tags are applied to the frames.
So in this case the laptop at the top right sends a frame to the switch and let's say that this frame is a broadcast frame.
When the frame enters via an access port it's tagged with the VLAN that the access port is assigned to.
In this case VLAN 10 which is the orange VLAN.
Now because this is a broadcast frame the switch now has to decide what to do with the frame and the default behaviour for switches is to forward the broadcast out of all ports except the one that it was received on.
For switches using VLANs this is slightly different.
First it forwards to any other access ports on the same VLAN but the tagging will be removed.
This is important because devices connected to access ports won't always understand 802.1Q so they expect normal untagged frames.
In addition the switch will forward frames over any trunk ports.
A trunk port in this context is a port between two switches for example this one between switch two and switch one.
Now a trunk port is a connection between two dot 1Q capable devices.
It forwards all frames and it includes the VLAN tagging.
So in this case the frame will also be forwarded over to switch one tagged as VLAN 10 which is the game tester VLAN.
So tagged .1Q frames only get forwarded to other access ports on the same VLAN with the tag stripped, or they get forwarded across trunk ports with the VLAN tagging intact.
And this is how broadcast frames work.
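The broadcast forwarding rules just described can be sketched as a small Python function. The switch, port names and VLAN assignments are illustrative assumptions modelled on the example above:

```python
# Sketch of VLAN-aware broadcast forwarding: a broadcast arriving on an
# access port goes untagged to access ports in the same VLAN, and tagged
# out of any trunk ports. Port names and VLANs are illustrative.

def forward_broadcast(ports: dict, ingress: str) -> dict:
    """ports maps port name -> ('access', vid) or ('trunk', None)."""
    kind, vid = ports[ingress]
    out = {}
    for name, (pkind, pvid) in ports.items():
        if name == ingress:
            continue                          # never back out the ingress port
        if pkind == "access" and pvid == vid:
            out[name] = "untagged"            # same VLAN, tag stripped
        elif pkind == "trunk":
            out[name] = f"tagged VLAN {vid}"  # tag kept across the trunk
    return out

switch2 = {
    "p1": ("access", 10),   # game tester laptop (sender)
    "p2": ("access", 10),   # game tester laptop
    "p3": ("access", 20),   # finance device
    "p4": ("trunk", None),  # link to switch 1
}
print(forward_broadcast(switch2, "p1"))
# {'p2': 'untagged', 'p4': 'tagged VLAN 10'}
```

Note that the finance device on p3 never sees the game tester broadcast, which is exactly the isolation VLANs provide.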
For unicast frames, which go to a specific single MAC address, these will be forwarded to the access port in the same VLAN where that specific device is connected, or, if the switch isn't aware of the MAC address of that device in the same VLAN, it will broadcast the frame.
Now let's say that we have a device on the finance VLAN connected to switch two.
And let's say that the bottom left laptop sends a broadcast frame on the finance VLAN.
Can you see what happens to this frame now?
Well first it will go to any other devices in the same VLAN using access ports meaning the top left laptop and in that case the VLAN tag will be removed.
It will also be forwarded out of any trunk ports tagged with VLAN 20 so the green finance VLAN.
It will arrive at switch two with the VLAN tag still there and then it will be forwarded to any access ports on the same VLAN so VLAN 20 on that switch but the VLAN tagging will be removed.
Using VLAN tagging in this way allows you to create multiple virtual LANs, or VLANs, on the same physical network.
With this visual you have two different networks.
The finance network in green, so the two laptops on the left and the one at the bottom middle, and then you have the game testing network, so VLAN 10, meaning the orange one on the right.
Both of these are isolated.
Devices cannot communicate between VLANs, which are separate layer 2 networks, without a device operating between them such as a layer 3 router.
Both of these virtual networks operate over the top of the physical network, and it means that we can now reshape these virtual networks purely in software, via configuration on the switches.
Now VLANs are how certain things within AWS, such as public and private VIFs on Direct Connect, work, so keep this lesson in mind when I'm talking about Direct Connect.
A few summary points though that I do want to cover before I finish up with this lesson.
First VLANs allow you to create separate layer 2 network segments and these provide isolation so traffic is isolated within these VLANs.
If you don't configure and deploy a router between different VLANs, then frames cannot leave that VLAN boundary, so they're virtual networks. These are ideal if you want to configure different virtual networks for different customers, or if you want to access different networks, for example when you're using Direct Connect to access VPCs.
VLANs offer separate broadcast domains and this is important.
They create completely separate virtual network segments so any broadcast frames within a VLAN won't leave that VLAN boundary.
If you see any mention of 802.1Q then you know that means VLANs.
If you see any mention of VLAN stacking, or provider bridging, or 802.1AD, or Q in Q, this means nested VLANs.
So having a customer tag and a service tag allows you to have VLANs in VLANs. These are really useful if you want to use VLANs on your internal business network and then use a service provider, who also uses VLANs, to provide wide area network connectivity. And if you are doing any networking exams, then you will need to understand Q in Q as well as 802.1Q.
So with that being said that's everything I wanted to cover.
Go ahead and complete this video and when you're ready I'll look forward to you joining me in the next.
-
-
learn.cantrill.io
-
Welcome back and in this video I want to step through the architecture and challenges of distributed denial of service attacks known as DDoS attacks.
Now we've got a lot to cover, so let's get started.
Distributed denial of service attacks come in many forms, a few different ways of achieving the same end goal, which is to overload websites or other internet-based services.
The idea is to generate some kind of traffic which competes against legitimate connections and overloads the hardware or software providing the service.
Imagine trying to get into an Apple Store on the day when a new iPhone is released.
How hard is it to get into the store and get service?
What if you added 100,000 random people who just want to queue for no reason and waste the time of the Apple Store staff?
That's the physical equivalent of a DDoS attack.
The challenge when dealing with DDoS attacks comes from the distributed nature of those attacks.
It's hard to identify and block traffic because there can be millions of IP addresses involved with larger internet-scale attacks.
Dealing with DDoS attacks requires specific hardware or software protections.
We won't be covering those in this video, I'm limiting this to just discussing how DDoS attacks work, so the architecture of all the different types of DDoS attacks.
Now DDoS attacks themselves generally fit into one of three categories.
First, application layer attacks such as HTTP floods, and these take advantage of the imbalance of processing between client and server.
It's easy for you to request a web page, but it's often very complex for a server to deliver that same page.
If you multiply that load difference by a billion, then you can have a potentially devastating attack.
Next, we've got protocol-based attacks such as SYN floods, and SYN floods take advantage of the connection-based nature of requests.
Normally, a connection is initiated via a three-stage handshake, which I detailed in a separate video of this series.
SYN floods spoof a source IP address and initiate a connection attempt with a server; the server tries to perform step two of the handshake, but it can't contact the source address because it's spoofed.
So the server hangs here waiting for a specified duration, and this consumes network resources.
And again, if you multiply this effect by a billion, this can have significant impact on your ability to provide a service.
Lastly, we have volumetric attacks such as DNS amplification, and this relies on how certain protocols such as DNS only take small amounts of data to make the request such as a DNS resolution request, but in response to that, they can deliver a large amount of data.
So one example is an attack of this nature making a large number of independent requests to DNS servers, where the source address is spoofed to be the actual IP address of our website.
And these servers, potentially hundreds or thousands of them, respond to what they see as legitimate requests and overwhelm a service.
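The amplification effect is easy to see with some back-of-envelope arithmetic. The byte sizes and query rate below are illustrative assumptions, not figures from the lesson:

```python
# Back-of-envelope DNS amplification arithmetic: a small spoofed request
# produces a much larger response aimed at the victim.
request_bytes = 60            # small spoofed DNS query (assumed)
response_bytes = 3000         # large DNS response (assumed)
queries_per_second = 50_000   # modest botnet output (assumed)

amplification = response_bytes / request_bytes
victim_mbps = response_bytes * queries_per_second * 8 / 1_000_000

print(f"amplification factor: {amplification:.0f}x")     # 50x
print(f"traffic at the victim: {victim_mbps:.0f} Mbps")  # 1200 Mbps
```

Under these assumed numbers, the botnet spends a fraction of the bandwidth that the victim has to absorb, which is exactly why amplification attacks are attractive to attackers.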
DDoS attacks are often orchestrated by one or a small number of people who are in control of huge botnets, and botnets are constructed of machines such as your laptop or your desktop infected with malware.
Most attacks come from these botnets which are constructed from infected hosts, and the owners of these hosts don't even realize that they're part of the attack.
Now let's look at how these attacks work visually, but before we do that, it's worth reviewing what a valid application architecture should look like.
When a website is working as intended, it looks something like this.
First, we have a number of servers which provide the website functionality, in this case, Catergram.io.
These servers are normally provisioned either based on normal load plus a bit of extra as a buffer, or they're built to autoscale, which means adding more servers when load increases and removing servers as load decreases.
Now these servers run within a hosting environment which is connected to the public internet via a data connection, which, depending on the speed of this connection, has a limited amount of data that it can transfer and a limit on the number of connections it can handle.
Then our application has users who are using a mobile application to upload their latest captures using TCP port 443.
Now this is HTTPS, and these connections move across our data connection and arrive at the application servers.
Now in normal circumstances, the vast majority of these connections will be from legitimate users of the application.
So this is how it should work.
We have an application, the servers are sized appropriately, we have an appropriate data connection, and our users are accessing the application using this infrastructure.
Now let's step through what happens with the various different forms of DDoS attack.
The first type of DDoS attack is the application layer attack.
Architecturally, behind the scenes we have an attacker who is controlling a network of compromised machines known as a botnet.
This botnet, or more specifically, the machines which form the botnet, are distributed geographically.
In most circumstances, the real owners of these machines have no knowledge that they've been compromised.
An application layer DDoS attack, as I mentioned at the start of this video, uses the computational imbalance of client-server communications as an attack method.
It's easy, for instance, for the botnets to make simple requests to the application.
In this case, an HTTP GET of a page called reallycomplex.php.
The botnet floods tens of thousands, or even hundreds of thousands, of these requests, each of them directed towards the Catergram servers.
This would mean millions or more of these really simple requests, all requesting our reallycomplex.php page.
The issue is that while making these requests is simple, responding to these requests can be computationally expensive, and this can have disastrous effects on the servers.
It's like throwing hand grenades.
They're easy to throw, but they're much more difficult to deal with at the receiving end.
The effect is that our servers, or the data connection, won't have the capacity required to deal with the requests in total.
The fake attack-based requests will prevent the legitimate requests reaching the servers in a timely way, and this can cause performance issues or failures, essentially a general decrease in service levels.
Now, as I mentioned earlier in this video, you can't simply block traffic from individual machines, because there can be millions of them, and the data they're sending can in many ways look exactly the same as legitimate traffic, and this is why you have to handle DDoS attacks in a very specific way.
Now, at this point, let's move on and take a look at another type of DDoS attack.
This time, it's a protocol-based attack.
So, with a protocol-based attack, we follow the same basic architecture, where a single or a small group of attackers is controlling a large botnet, and this botnet is constructed of compromised hosts.
Now, with a protocol attack such as a SYN flood, essentially what happens is a botnet generates a huge number of spoofed SYNs, and SYNs are the initial part of this three-way connection handshake.
So, essentially, all of these individual machines attempt to initiate a connection with the Catergram.io infrastructure, but crucially, they're using a spoofed IP address.
In normal circumstances, if these were real connections, what should happen is our server infrastructure would respond back with SYN-ACKs, which are the second part of the three-way handshake.
Normally, these connection attempts would be from real IP addresses, so IP addresses which are expecting to receive this second part of the three-way handshake.
But because our botnet has initiated these connections with spoofed IP addresses, there won't be anything on the receiving end for our servers to communicate with, and so, in this case, these requests will simply be ignored.
Because they're being ignored, it means the connections will stay in this hung state.
The network resources which would otherwise be used for legitimate connections are waiting for this second part of the three-way handshake, and because the botnet is providing millions and millions of these fake connections, it can mean that network resources are completely consumed with these fake connections, and that means that our legitimate connections won't be able to connect into our infrastructure.
Essentially, by generating this protocol-based attack, this SYN flood, we're preventing the network resources being used for legitimate requests, and so we're essentially significantly impacting the network capability of this application infrastructure.
So, because this three-way handshake is designed to work with slower or less reliable connections, our Catergram.io infrastructure will wait.
It will attempt to connect to these fake source IP addresses.
And while these connections are waiting for the second part of the three-way handshake, these resources can't be used for legitimate connections.
And so, if the botnet is large enough, if it contains a sufficient number of compromised hosts, then it can, in theory, completely take down the service provided by the Catergram.io infrastructure.
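The resource-exhaustion effect described above can be sketched in a few lines of Python. The backlog size and request counts are illustrative assumptions, not real TCP parameters:

```python
# Sketch of why half-open connections matter: a fixed-size pending-connection
# backlog fills with spoofed SYNs that never complete their handshake, so
# later legitimate SYNs are turned away. All sizes are assumed.
BACKLOG = 1024                        # pending-connection slots (assumed)

half_open = set()
rejected_legitimate = 0

# The botnet sends far more spoofed SYNs than there are slots.
for i in range(5000):
    if len(half_open) < BACKLOG:
        half_open.add(f"spoofed-{i}")  # SYN-ACK sent, no ACK will ever arrive

# Now legitimate clients try to connect, but every slot is consumed.
for i in range(10):
    if len(half_open) < BACKLOG:
        half_open.add(f"real-{i}")
    else:
        rejected_legitimate += 1

print(len(half_open), rejected_legitimate)   # 1024 10
```

Real TCP stacks have timeouts and mitigations such as SYN cookies, so this is only a model of the failure mode, not of a real kernel.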
Now, let's move on to the third type of attack which I want to cover in this video, and that's known as a volumetric or amplification-based attack.
Now, this type of attack is still orchestrated by a single person or a small group of people, but with this type of attack, the size of the botnet can be much smaller, because an amplification attack exploits a protocol data imbalance.
So, a situation where only a small amount of data is required to initiate a request, but the response to that request is much larger.
In this case, our smaller botnet makes a large number of requests to a large number of DNS servers.
The requests can be made to a large number of DNS servers and be done frequently, because the amount of data that it takes to make the request is relatively small.
Now, the botnet will use a spoofed IP address, and it will use the IP address of our application infrastructure.
So, rather than the DNS servers responding to the botnet, the DNS servers will all respond to our application servers.
Now, the volume of data in each of those responses is much larger than the volume of data making the initial query to DNS.
Because of this, the application servers will be quickly overwhelmed.
This will generally affect the data connection to our application infrastructure, rather than the application server itself, and this will mean that legitimate application users experience degraded levels of performance, because they're competing to use the same total capacity of the application data connection with these fake responses coming in from all of these DNS servers.
So, this type of attack does impact our application's ability to provide service, because the amount of data that our connection provides is consumed, but it's done so in a way which uses amplification.
So, rather than the botnet being required to consume the same amount of bandwidth as our application needs to tolerate, this type of attack can use a tiny amount of bandwidth to initiate the attack, but consume a large amount of bandwidth on the application side, and this makes this type of attack ideally suited to take down larger websites or applications.
So, these are three different common types of DDoS attacks, which you might come across as a solutions architect, an engineer, or a developer.
The important thing to understand about all types of DDoS attack is they can't be combated with normal network protection.
So, because of the distributed nature of the attacks, it's not practical to implement single IP address blocks.
If you're going to block an entire botnet, then you need to block potentially thousands, tens of thousands, hundreds of thousands, or even millions of IP addresses.
What's more, with a volumetric or amplification style attack, the actual machines performing the attack might not even be malicious.
In this case, if you're taking advantage of DNS servers using a DNS amplification attack, then these servers, from their perspective, are doing nothing malicious.
They're just responding to requests.
And so, you have to be really careful that in order to mitigate a DDoS attack, you're not actually blocking legitimate traffic or impacting your application's ability to provide a service.
If you block all DNS servers, then potentially you can have other issues with your application.
Now, AWS and other cloud environments do provide products and services which are specifically designed to help you combat DDoS attacks.
And now that you're aware of the architecture and how these attacks can impact your application, it's going to be much easier for you to understand these different products and services.
Now, with that being said, that's everything that I wanted to cover in this video.
So, go ahead and complete the video.
And when you're ready, I'll look forward to you joining me in the next.
-
-
learn.cantrill.io
-
Welcome back.
This is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
Subnetting is the process of breaking networks up into smaller and smaller pieces.
I've just talked about the class A, B, C, D, and E ranges.
Now, historically, you couldn't break them down.
You were allocated one and that was it.
Classless Inter-Domain Routing, or CIDR, lets us take networks and break them down.
It defines a way of expressing the size of a network.
And this is called a prefix.
An example of this is this /16 network.
So 10.16.0.0/16.
In this case, 16 is the prefix.
Now, you might spot that this is actually inside of the class A network space.
So class A is between 0.anything and 127.anything.
And you might also spot that it's in the private class A address space.
So 10.anything is a private set of addresses.
But this is only actually a subset of this wider network.
10.0.0.0/8 would be the full 10.anything range.
So /8 is the same as a class A network.
The first octet is the network and the rest is available for hosts or subnetting.
This is a /16, which means that the first two octets are the network, so 10.16, and the rest is available for hosts or subnetting.
So this /16 is a smaller network within the 10.0.0.0/8, bigger class A network.
The larger the prefix value, the smaller the network.
And that's a useful one to remember.
Subnetting is a pretty complex process to do well, but you can learn the basics easily enough.
Take this network as an example.
10.16.0.0/16.
If you watched my network fundamental series of videos, you'll know that 10.16.0.0/16 is a range which starts at 10.16.0.0 and ends at 10.16.255.255. /16 tells us that the network part is the first two octets, so 10 and 16.
The network range, therefore, is 0.0 to 255.255 in the hosts part of the IP address.
Now, let's say that we were allocated this range within our organization, but we needed to break it down into multiple networks.
So rather than one large network, let's say, for example, we needed four smaller networks.
What we can do is subnet this network.
We can break it down.
All we do is break this network into two.
The single /16 becomes two /17 networks.
The first network starts at the starting point of the original network and ends at the halfway point, the point at which the second network starts.
So the first subnetwork is 10.16.0.0 through to 10.16.127.255, so the halfway point.
And the second network starts at 10.16.128.0 and goes through to 10.16.255.255.
So one /16 network is the same as two /17 networks.
But now we can use these two networks for different things within our organization.
Now, we could follow the same process again if we needed more subnets.
For now, we could keep the first /17 network at the top in red, but break the bottom one in green into two networks, so two /18 networks.
The method would be the same, so the first subnetwork, so bottom left, would start at the starting point of the original bottom network, so 10.16.128.0.
The second smaller network, so bottom right, would start at the midway point of the original network.
So these two networks are both /18, which are half the size of the original /17.
And this gives us three subnets, a /17 at the top and two smaller /18s at the bottom.
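The split just described can be checked with Python's ipaddress module: break 10.16.0.0/16 into two /17 halves, then break the second /17 into two /18s, giving three subnets in total:

```python
# Reproduce the subnetting walkthrough: one /16 -> a /17 plus two /18s.
import ipaddress

net = ipaddress.ip_network("10.16.0.0/16")
top, bottom = net.subnets(prefixlen_diff=1)                   # two /17 halves
bottom_left, bottom_right = bottom.subnets(prefixlen_diff=1)  # two /18 halves

for subnet in (top, bottom_left, bottom_right):
    print(subnet, "->", subnet[0], "-", subnet[-1])
# 10.16.0.0/17 -> 10.16.0.0 - 10.16.127.255
# 10.16.128.0/18 -> 10.16.128.0 - 10.16.191.255
# 10.16.192.0/18 -> 10.16.192.0 - 10.16.255.255
```

Each call to `subnets(prefixlen_diff=1)` is exactly the "split the range in half" step from the lesson: the first half keeps the original starting address, and the second half starts at the midway point.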
We could follow the same process again.
Remember, our target here is four subnets, so we can break down the top /17 network into two /18 networks.
The upper left /18 network starts at the starting point of the previous /17 network, and it ends at the halfway point.
The upper right /18 network starts at the midpoint and goes through to the end.
So this is how subnetting and CIDR work.
The entire internet is a /0 network.
That's why 0.0.0.0/0, which you'll see as a default route, matches the entire internet.
All the way through to a /8, which is a class A network, /16, which is a class B network, and /24, which is a class C network.
And then all the way through to /32, which represents a single IP address.
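The "larger prefix means smaller network" rule can be seen directly by counting addresses for each of the prefixes just mentioned, using Python's ipaddress module:

```python
# Address counts for the prefixes mentioned above, from /0 (the whole
# internet) down to /32 (a single IP address).
import ipaddress

sizes = {prefix: ipaddress.ip_network(f"0.0.0.0/{prefix}").num_addresses
         for prefix in (0, 8, 16, 24, 32)}

for prefix, count in sizes.items():
    print(f"/{prefix:<2} -> {count:>10} addresses")
# /0  -> 4294967296 addresses
# /8  ->   16777216 addresses
# /16 ->      65536 addresses
# /24 ->        256 addresses
# /32 ->          1 addresses
```

Each extra prefix bit halves the range, so /8, /16 and /24 line up with the old class A, B and C sizes.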
Now this process will become much clearer once you start using this in lab or production environments.
Generally, when you perform subnetting, you'll be breaking down a larger network into two, four, eight, or more smaller networks, always breaking into two and then into two again.
But while it is unusual, it is possible to have odd numbers.
You can break a network into two and then break only one half of that into two more, and this gives you three subnets.
And this is the example at the top right of the screen.
Now this is unusual, but it doesn't break any rules.
Subnetting is the process of taking a larger network and breaking it down into more smaller networks, each of which has a higher prefix, which means a smaller network.
So now that you know the high-level process, I've gone through it graphically.
Let's take a look at this in a little bit more detail before we finish.
We'll use the same example as before, only now with more detail.
So we start with a /16 network, 10.16.0.0.
Assuming we need four smaller networks, the starting point is to calculate the start and end of this network range.
In this case, 10.16.0.0/16 starts at 10.16.0.0, and finishes at 10.16.255.255.
So we know that any /17 networks will be half of this size.
So step two is to split the original range into two.
The first /17 network starts at the starting point of the original network, so 10.16.0.0, and ends halfway through the original range, so 10.16.127.255.
So 10.16.0.0/17 means 10.16.0.0 through to 10.16.127.255.
The second smaller network starts at the midpoint, so 10.16.128.0/17, so this starts at 10.16.128.0, and ends at 10.16.255.255.
You've split the original /16 into two.
You've created two smaller /17 networks, each of which occupies half of the original address space.
Now, further splits follow the same process.
Each smaller network has a higher prefix value, and is half the size of the parent's network range.
The first smaller network starts at the same starting address and finishes halfway, and the second one starts at the halfway point and finishes at the end.
In this case, we have 10.16.128.0/18, and 10.16.192.0/18.
Both of these are within the larger /17 range of 10.16.128.0/17.
If you just think about this process as splitting the network range in half, you're going to create two smaller networks, one which uses the first half and one which uses the second half.
And we can do the same process with the upper subnet, so 10.16.0.0/17, so the network in red.
We can split that range in half, creating two smaller networks.
We've got 10.16.0.0/18, and 10.16.64.0/18.
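If you want to check this kind of split without doing the arithmetic by hand, Python's standard `ipaddress` module can do it. Here's a short sketch using the example network above:

```python
import ipaddress

# Start with the parent /16 network from the example.
parent = ipaddress.ip_network("10.16.0.0/16")

# Splitting into two /17s: each child is half the parent's range.
halves = list(parent.subnets(prefixlen_diff=1))
print([str(n) for n in halves])    # ['10.16.0.0/17', '10.16.128.0/17']

# Split the second /17 again to get the two /18s from the example.
quarters = list(halves[1].subnets(prefixlen_diff=1))
print([str(n) for n in quarters])  # ['10.16.128.0/18', '10.16.192.0/18']

# Each network's first and last addresses match the ranges calculated by hand.
print(halves[0][0], halves[0][-1])  # 10.16.0.0 10.16.127.255
```

The `prefixlen_diff=1` argument is what "split in half" means numerically: the prefix goes up by one, so the network size halves.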
Now, becoming proficient with this process just takes time.
You need to understand how to calculate IP addresses, how subnet masks and prefixes work, and then you can just follow this process step by step to break down large networks into more and more smaller subnets.
Eventually, you won't even need to calculate it at all.
It will just become instinctive.
I know at this point it might seem like a fair distance off, but I promise it will happen.
Now, at this point, that's everything I wanted to cover in this lesson.
I know it's been a lot of theory.
Go ahead and finish the video, and when you're ready, I look forward to you joining me in the next video.
IP Address Space & Subnetting - PART1
Welcome back and welcome to another video of this Network Fundamental series where I'll be discussing IP addressing and IP subnetting.
Now we've got a lot to cover, so let's jump in and get started.
IP version 4 addressing has been around since the early days of the internet.
In fact, it was standardized in 1981 via the RFC 791 document, which is attached to this lesson if you want to take a look.
Now it's still the most popular network layer protocol in use on the internet.
IP version 4 addresses occupy a range from 0.0.0.0 to 255.255.255.255.
And this is just under 4.3 billion IP addresses.
Now that sounds like a lot, but with a current world population around the 8 billion mark, that's less than one IP version 4 address per person.
Now the address space was originally fully managed by an organization called IANA, which is the internet assigned numbers authority.
More recently though, parts of the address space have been delegated to regional authorities such as RIPE, ARIN and APNIC.
Now the key thing to understand is that with a few exceptions, IP version 4 addressing is allocated, and that means that you have to be allocated public IP version 4 addresses in order to use them.
You can't simply pick a random address and expect it to work on the public internet without significant issues.
Now there is part of the address space which is private, and that's the addresses which are generally used within home networks, business networks and cloud platforms such as AWS or Azure.
The private address space can be used and reused freely.
So now you know there are 4.294 billion IP version 4 addresses.
You know they start at 0.0.0.0 and end at 255.255.255.255.
Now historically, this range was divided into multiple smaller ranges which are allocated for specific functions.
First, the class A address space which starts at 0.0.0.0 and ends at 127.255.255.255.
Now this range contains 128 networks, each of which has 16.7 million addresses.
So these networks are 0.anything (which is reserved), 1.anything, 2.anything, all the way up to 127.anything.
The first octet denotes the network with the remaining octets available for hosts or for subnetting as we'll cover later in this video.
So this class of IP addresses, so class A, these were generally used for huge networks and historically these were allocated to huge businesses or organisations which had an internet presence in the early days of the internet.
So businesses like Apple, the Ford Motor Company, the US Postal Service or various parts of the US military.
Many of those organisations have since given up those ranges and these are now allocated to the regional managers of the IP address space for allocation to users in that region.
Now next we have the class B address space and this starts at 128.0.0.0 and it ends at 191.255.255.255.
Now this part of the IP address space offers a total of 16,384 networks, each of them containing 65,536 IP addresses.
So this space was typically used for larger businesses which didn't need a class A allocation.
Like with addresses in the class A space, these are now generally allocated to the regional authorities and they manage them and allocate them out to any organisation who requests and can justify addresses in this range.
Now these networks take the format of 128.0.anything, 128.1.anything, 128.2.anything and then all the way through to 191.253.anything, 191.254.anything and then finally 191.255.anything.
Now with this range of IP addresses so class B, the first two octets are for the network and the last two are for the organisation to assign to devices or to subnet into smaller networks and we'll be talking about that later in this video.
Next we have the class C range which starts at 192.0.0.0 and ends at 223.255.255.255.
Now this range provides over 2 million networks, each containing 256 IP addresses.
So examples of this range include 192.0.1.anything and 192.0.2.anything.
With class C networks, the first three octets denote the network and the remaining is available for hosts or for subnetting.
Class C networks are historically used for smaller businesses who required an IP version 4 presence but weren't large enough for class B or class A addressing and these two are generally now allocated and controlled by regional authorities.
Now there are two more classes, class D and class E, but these are beyond the scope of what this video covers.
Class D is used for multicast and class E is reserved, so I'll cover those at another time.
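The network counts quoted above fall directly out of the class prefix lengths. As a quick arithmetic sanity check (a sketch, not anything AWS-specific):

```python
# Class A: leading bit 0, 8-bit network prefix -> 7 bits of network, 24 of host.
class_a_networks = 2 ** 7    # 128 networks
class_a_hosts    = 2 ** 24   # 16,777,216 addresses each

# Class B: leading bits 10, 16-bit network prefix -> 14 bits of network.
class_b_networks = 2 ** 14   # 16,384 networks
class_b_hosts    = 2 ** 16   # 65,536 addresses each

# Class C: leading bits 110, 24-bit network prefix -> 21 bits of network.
class_c_networks = 2 ** 21   # 2,097,152 networks
class_c_hosts    = 2 ** 8    # 256 addresses each

print(class_a_networks, class_b_networks, class_c_networks)
```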
Now within this public IP version 4 space, certain networks are reserved for private use, and this means you can use them however you want, but they aren't routable across the public IP version 4 internet, so these can only be used for private networks or cloud platforms such as AWS, who often use them for private networking.
So let's take a look at those.
Private IP addresses are defined within a standards document called RFC1918 and this document defines three ranges of IP version 4 addresses which you're free to use internally.
Now these can't be routed across the internet, but you can use them as you choose internally and as often as required, and this is one reason why network address translation is needed: to translate these private addresses into publicly routable addresses so they can communicate with the internet. I cover network address translation in a separate video.
The first private range is a single class A network which starts at 10.0.0.0 and ends at 10.255.255.255 and it provides a total of 16.7 million IP version 4 addresses.
Now this private range is often used within cloud environments and it's generally chopped up into smaller subnetworks which I'll be covering later on in this video.
The next private range is from 172.16.0.0 through to 172.31.255.255.
Now this is a collection of class B networks, 16 of them to be precise so you have 172.16.anything, 172.17.anything, 172.18.anything and so on all the way through to 172.31.anything.
Now each of these networks contains 65,536 addresses and in AWS one of these private ranges 172.31 is used for the default VPC and again these networks are generally broken into smaller subnetworks when used.
Lastly we have 192.168.0.0 to 192.168.255.255 and this range is 256 class C networks so that means 192.168.0.anything, 192.168.1.anything, 192.168.2.anything and so on all the way through to 192.168.255.anything so it provides 256 networks each containing 256 addresses and this range is generally used within home and small office networks so my home network for example uses one of these ranges for all of my devices and my router provides NAT or network address translation services in order to allow them to access the public internet.
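As a sketch of how you might test whether an address falls inside one of these three ranges, here's a short Python example using the standard `ipaddress` module (the `is_rfc1918` helper is just an illustrative name):

```python
import ipaddress

# The three RFC1918 private ranges described above.
rfc1918 = [
    ipaddress.ip_network("10.0.0.0/8"),      # one class A network
    ipaddress.ip_network("172.16.0.0/12"),   # sixteen class B networks
    ipaddress.ip_network("192.168.0.0/16"),  # 256 class C networks
]

def is_rfc1918(address: str) -> bool:
    """Return True if the address is in one of the RFC1918 private ranges."""
    ip = ipaddress.ip_address(address)
    return any(ip in net for net in rfc1918)

print(is_rfc1918("172.31.0.1"))  # True  - inside the AWS default VPC range
print(is_rfc1918("8.8.8.8"))     # False - publicly routable
```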
Now with any of these ranges you can use them however you want, you can reuse them you can break them up into smaller networks but in all cases you should try and avoid using the same one multiple times.
If you ever need to connect private networks together and they use the same network addressing even if it's private you will have trouble configuring that communication.
Where possible you should always aim to allocate non-overlapping ranges to all of your networks.
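Checking for exactly this kind of overlap is easy to automate. A minimal Python sketch, assuming hypothetical `office` and `cloud` allocations:

```python
import ipaddress

# Two networks that reuse the same private range - these overlap,
# which would cause trouble if the networks were ever connected.
office = ipaddress.ip_network("10.0.0.0/16")
cloud  = ipaddress.ip_network("10.0.1.0/24")
print(office.overlaps(cloud))   # True - a conflict

# A non-overlapping allocation avoids the issue.
cloud2 = ipaddress.ip_network("10.1.0.0/16")
print(office.overlaps(cloud2))  # False - safe to connect
```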
So now let's move on to talk about IP version 6 and the differences between it and IP version 4.
So to fully understand the need for IP version 6 and the differences it's useful to start with a representation of the IP version 4 address space.
So we know now that this historically has been broken up into three common classes of IP addresses I've just talked about those.
All of these IP addresses except for a few exceptions are publicly routable meaning if you have one of them configured on a device then you can communicate with another device which is also using a public IP version 4 address.
Now I've also just talked about how part of this IP address space is dedicated for use for private networking but this in its entirety is the IP version 4 address space and in total there are 4,294,967,296 IP addresses so this is the total number of IP version 4 addresses available for use.
Now this might sound like a lot, but it's less than one address per person alive on the planet today, and how many of us have both a mobile phone and a computer? We have multiple devices already.
What about providers like AWS who have huge public IP addressing requirements?
Well IP version 6 was designed to solve this problem.
The problem that we have far too few IP version 4 addresses and at this point we've essentially exhausted the supply.
With IP version 6 we have more IP addresses to use and to fully appreciate this I want to change the perspective.
This doesn't even do the scale justice but any smaller and you won't be able to see the blue square which now represents the total IP version 4 address space.
Imagine the blue square is actually several thousand times smaller than it is now and with that in mind this is how the IP version 6 address space looks in comparison.
The entire IP version 4 address space available on the public IP version 4 internet is just over 4 billion IP version 4 addresses.
With IP version 6 the entire address space is 340 trillion trillion trillion addresses.
Now humans are bad with large numbers but to put this into perspective it means that there are 670 quadrillion IP version 6 IP addresses per square millimeter of the Earth's surface or to put it another way 50 octillion IP addresses per human alive today or 79 octillion IP version 4 internet's worth of addressing within the IP version 6 address space.
Now think about that for a moment it's enough to give you a headache there are 79 octillion sets of 4.3 billion IP addresses in the IP version 6 address space.
That is an incredibly large number.
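The scale comparison above is easy to verify with a couple of lines of Python:

```python
# Comparing the two address spaces.
ipv4_total = 2 ** 32    # 4,294,967,296 addresses
ipv6_total = 2 ** 128   # about 3.4 x 10**38 addresses

# How many complete IPv4 internets fit inside the IPv6 address space?
ipv4_internets = ipv6_total // ipv4_total
print(ipv4_internets)   # 2**96, roughly 7.9 x 10**28 - about 79 octillion
```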
Now I don't expect you to remember all of these numbers.
What I want to do is make you comfortable with just how many IP version 6 addresses are available for use.
With IP version 6 the concept of IP addresses as a valuable commodity just goes away.
There are so many of them that you essentially don't require detailed IP management anymore it's just not a scarce resource.
So that's IP version 6.
Next I want to talk about subnetting from an IP version 4 perspective because this is a really useful skill that you should have when you start using a cloud environment.
Okay so this is the end of part one of this lesson.
It was getting a little bit on the long side and so I wanted to add a break.
It's an opportunity just to take a rest or grab a coffee.
Part 2 will be continuing immediately from the end of part one.
So go ahead complete the video and when you're ready join me in part two.
Welcome back, this is part two of this lesson.
We're going to continue immediately from the end of part one, so let's get started.
The principle of Dynamic NAT is similar to static except that devices are not allocated a permanent public IP.
Instead, they're allocated one temporarily from a pool.
Let's say that we have two public IP addresses available for use, 52.95.36.66 and 67.
But we have four devices on the left and all of them at some time need to use public addressing.
So we can't use static NAT because we don't have enough public IP addresses.
With Dynamic NAT, the public to private mapping is allocation-based, so it's allocated as required.
Let's look at an example.
Let's assume that the server on the top left is trying to access the CAT API.
Well, it creates a packet; the source IP address is itself and the destination IP is the CAT API, which is 1.3.3.7.
So it sends this packet and again the router in the middle is the default gateway for anything which is not local.
As the packet passes through the router or the NAT device, it checks if the private IP has a current allocation of public addressing from the pool and if it doesn't and one is available, it allocates one dynamically and on a temporary basis.
In this case, 52.95.36.67 is allocated on a temporary basis.
So the packets source IP address is translated to this address and the packets are sent onto their final destination.
The CAT API is able to send the response traffic back using this public IP allocation.
So this process is the same so far as if we were using static NAT.
But because Dynamic NAT allocates addressing on a dynamic and temporary basis, multiple private devices can share a single public IP as long as there is no overlap, so as long as the devices use the allocations at different times.
In this case, the upper laptop is accessing the Catflix public service using 52.95.36.66 and then afterwards, the lower laptop is using the same public IP address to access the Dogflix application.
With Dynamic NAT, because the shared public pool of IP addresses is used, it is possible to run out of public IP addresses to allocate.
If the bottom server attempts to access the public internet, when there are no IPs available in the pool to allocate, then this access will fail.
Now this type of NAT is used when you have less public IPs than private ones, but when all of those private devices at some time need public access, which is bi-directional.
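The allocation behaviour just described can be sketched in a few lines of Python. `DynamicNAT` is a hypothetical illustration of the pool logic, not how a real NAT device is implemented:

```python
# A minimal sketch of dynamic NAT, assuming a dict-based NAT table and the
# two-address public pool from the example above.
class DynamicNAT:
    def __init__(self, public_pool):
        self.available = list(public_pool)  # unallocated public IPs
        self.table = {}                     # private IP -> allocated public IP

    def translate(self, private_ip):
        """Return the public IP for outgoing traffic, or None if the pool is empty."""
        if private_ip in self.table:
            return self.table[private_ip]
        if not self.available:
            return None                     # pool exhausted - access fails
        public_ip = self.available.pop(0)
        self.table[private_ip] = public_ip
        return public_ip

    def release(self, private_ip):
        """Return a public IP to the pool when its temporary allocation ends."""
        self.available.append(self.table.pop(private_ip))

nat = DynamicNAT(["52.95.36.66", "52.95.36.67"])
print(nat.translate("10.0.0.10"))  # 52.95.36.66
print(nat.translate("10.0.0.42"))  # 52.95.36.67
print(nat.translate("10.0.0.99"))  # None - no public IPs left in the pool
nat.release("10.0.0.10")           # allocation expires, IP returns to the pool
print(nat.translate("10.0.0.99"))  # 52.95.36.66
```

Notice how the last device only succeeds once an earlier allocation is released, which is exactly the sharing-without-overlap behaviour described above.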
Now the last type of NAT which I want to talk about is the one which you're probably familiar with.
This is port address translation.
This is the type of NAT you likely use on your home network.
Port address translation is what allows a large number of private devices to share one public address.
It's how the AWS NAT gateway functions within the AWS environment.
It has a many to one mapping architecture.
So many private IP version 4 addresses are mapped onto one single public IP version 4 address.
Let's step through an example because this will make it easier to understand.
The example we'll be using is three private devices on the left, all wanting to access Catflix on the right, which has a public IP of 1.3.3.7, and is accessed using TCP port 443, which in this case is HTTPS.
And to make things easier, I'll be colour coding the laptops, so red for the top, purple for the middle, and yellow at the bottom.
Now the way the port address translation or PAT works is to use both IP addresses and ports to allow for multiple devices to share the same public IP.
Every TCP connection, in addition to a source and destination IP address, has a source and destination port.
The destination port for outgoing connections is important because that's what the service runs on.
In this case, Catflix uses the destination port of 443.
The source port, this is randomly assigned by the client.
So as long as the source port is always unique, then many private clients can use the same public IP.
Let's assume that the public IP address at this NAT device is 52.95.36.67.
So at this point, let's say that the top laptop, so the red laptop, generates a packet, and the packet is going to Catflix.
So its destination IP address is 1.3.3.7, and its destination port is 443.
Now the source IP of this packet is itself, so the laptop's private IP address, and the source port is 32768, which is a randomly assigned ephemeral port.
So this packet is routed through the NAT device on its way to the internet, and in transit, the NAT device records the source IP and the original source private port, and it allocates a new public source port, which in this case is 1337.
It records this information inside a NAT table, and it adjusts the packet or translates the packet, so that its source IP address is this single public address which the NAT device uses, and the source port is this newly allocated source port, which is now recorded within the NAT device.
And this newly adjusted packet is forwarded on to Catflix.
If the middle purple laptop did the same thing, then the same process would be followed.
It would record all of this information, it would allocate a new public source port, and it would translate the packet, so adjust the packet's source IP address and the source port, to these newly defined values.
Now if the bottom laptops or the yellow laptop generated a packet, note how this time the source port, which is randomly assigned, is the same source port that the top or red laptop is using for the same connection.
But the same process would be followed.
The NAT device would pick a unique source port to allocate, and it would translate this packet.
It would change the source IP address from the private IP to the single public IP, and it would change the source port of 32768 to a unique new source port, in this case, 1339.
Now normally, the reason that only one device could use a single public IP is that these source ports are randomly assigned.
If multiple devices communicated with the same destination service using the same destination port, and they happened to pick the same source port, then it would look like the same connection.
What the NAT device is doing is creating this, a NAT table.
The table is updated with the original private IP and private source port, and the new source IP, which is the public IP address of the NAT device, and then the newly allocated public source port.
This means that when response data comes back, this table can be referenced to ensure that the packet reaches its destination.
So when return traffic occurs, it will be from TCP port 443, with a source IP address of 1.3.3.7.
The destination IP will be the NAT device's public IP, so 52.95.36.67.
And the destination port will be the public source port that NAT device initially translated to.
Let's say in this case, the public source port is 1337, which represents the session of the top left laptop.
So for return traffic, if an entry is present in the NAT table, the NAT device translates the destination IP and port from the public IP and public source port back to the original private IP, which is 10.0.0.42 for the top laptop, and 32768, which is the original source port number.
Now it's worth pausing and making sure that you really understand how this process works, because it's how your home router works, and it's how the NAT gateway within AWS works.
Once you understand it, you'll understand why, with port address translation, you can't initiate traffic to these private devices: without an entry in the NAT table, the NAT device won't know which device the traffic should be translated and forwarded to.
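The NAT table logic described above can be sketched in Python. This is a minimal illustration with a hypothetical `PAT` class; real NAT devices track far more state (protocol, timeouts, connection tracking) than this:

```python
import itertools

class PAT:
    """A toy port address translation table: many private IP:port pairs
    share one public IP, distinguished by unique public source ports."""
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = itertools.count(1337)  # allocator for public source ports
        self.table = {}                         # (private_ip, private_port) -> public_port
        self.reverse = {}                       # public_port -> (private_ip, private_port)

    def outbound(self, src_ip, src_port):
        """Translate an outgoing packet's source to the shared public IP:port."""
        key = (src_ip, src_port)
        if key not in self.table:
            pub_port = next(self.next_port)
            self.table[key] = pub_port
            self.reverse[pub_port] = key
        return self.public_ip, self.table[key]

    def inbound(self, dst_port):
        """Translate return traffic back to the original private IP:port."""
        return self.reverse.get(dst_port)       # None if no table entry exists

nat = PAT("52.95.36.67")
print(nat.outbound("10.0.0.42", 32768))  # ('52.95.36.67', 1337)
print(nat.outbound("10.0.0.44", 32768))  # ('52.95.36.67', 1338) - same private port, unique public port
print(nat.inbound(1337))                 # ('10.0.0.42', 32768)
print(nat.inbound(9999))                 # None - unsolicited traffic is dropped
```

The last line is the key point from above: traffic initiated from the internet has no NAT table entry, so there's nothing to translate it back to.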
Now, I hope all of this has made sense, and you understand all of the different types of NAT.
NAT is a great topic to understand, because it's one of those things which is used constantly within most architectures, cloud platforms, and even home and business networks.
Now, that's everything I wanted to cover, though, so go ahead and complete this video, and when you're ready, I'll look forward to you joining me in the next video.
Welcome back.
In this lesson, I want to talk about network address translation known as NAT.
NAT is used within home networks, business networks and cloud environments such as AWS.
If you have a thorough understanding of NAT and how it works, it will make using any type of networking including AWS so much easier.
Now I want to keep this video as brief and efficient as possible to let you jump in and take a look at exactly what NAT is and how it works.
So NAT is a process which is designed to address the growing shortage of IP version 4 addresses.
IP version 4 addresses are either publicly routable or they fall within the private address space of IP version 4.
Publicly routable addresses are assigned by a central agency and regional agencies which in turn assign them to ISPs and these ISPs allocate them to business or consumer end users.
IP version 4 publicly routable addresses have to be unique in order to function correctly.
Private addresses such as those in the 10.0.0.0 range can be used in multiple places but can't be routed over the internet.
And so to give internet access to private devices, we need to use network address translation.
In addition to this, NAT also provides some additional security benefits which I'll be covering soon.
Now there are actually multiple types of NAT which I'm going to cover, and all of them translate private IP addresses into public IP addresses so the packets can flow over the public internet, and then translate back in reverse so that internet-based hosts can communicate back with these private services.
So that's the high-level function of NAT, but each type of NAT handles this process differently.
First we've got static NAT which is where you have a network of private IP version 4 addresses and can allocate a public IP version 4 address to individual private IP addresses.
So the static NAT device translates from one specific private address to one specific public address in effect giving that private address access to the public internet in both directions.
And this is how the internet gateway within AWS works which I'll be covering in another video.
Static NAT is what you would use when certain specific private IP addresses need access to the internet using a public IP, and where these IPs need to be consistent.
Dynamic NAT is similar but there isn't this static allocation.
Instead you have a pool of public IP addresses to use, and these are allocated as needed, when private IP addresses attempt to access the internet.
This method of NAT is generally used when you have a large number of private IP addresses and want them all to have internet access via public IPs but when you have less public IP addresses than private IP addresses and you want to be efficient with how they're used.
Then lastly we have port address translation and this is where many private addresses are translated onto a single public address.
This is likely what your home internet router does; you might have many devices, so laptops, computers, tablets, phones, and all of those will use port address translation, also known as overloading, to use a single public IP address.
Now this method as the name suggests uses ports to help identify individual devices and I'll cover in detail how this method works later in this video.
This is actually the method that the NAT gateway or NAT instances use within AWS if you have any AWS experience then you'll recognise this process when I'm talking about the NAT gateway and NAT instances in a separate video.
Now NAT is a process that only makes sense for IP version 4.
Since IP version 6 adds so many more addresses we don't need any form of private addressing and as such we don't need translation.
So try and remember this one IP version 6 generally means you don't need any form of network address translation.
Okay so now I want to step through each of the different methods graphically so you can understand how they work and I'm going to be starting with static network address translation or static NAT.
To illustrate this we want to use a visual example so let's start with a router and NAT gateway in the middle and a private network on the left and then a public network on the right.
We have a situation where we have two devices in the private network, a server and a laptop and both of these need access to external services and let's use the example of Netflix and the CAT API.
So the devices on the left they are private and this means they have addresses in the IP version 4, private address space in this case 10.0.0.10 for the server toward the top and 10.0.0.42 for the laptop toward the bottom.
This means that the packets these two devices generate cannot be routed over the public internet, because they only have private addressing.
Now the CAT API and Netflix both have public IP addresses; in the case of the CAT API this is 1.3.3.7.
So the problem we have with this architecture is that the private addresses can't be routed over the public internet because they're private only.
The public addresses of the public internet-based services can't directly communicate with these private addresses because public and private addresses can't communicate over the public internet.
What we need is to translate the private addresses that these devices have on the left to public IP addresses which can communicate with the services on the right and vice versa.
Now with static NAT the router or NAT device maintains what's known as a NAT table and in the case of static network address translation the NAT table stores a one-to-one device mapping between private IP and public IP.
So any private device which is enabled will have a dedicated, allocated public IP version 4 address.
Now the private device won't have the public IP address configured on it, it's just an allocation.
So let's say that the laptop on the bottom left wants to communicate with Netflix.
Well to do so it generates a packet as normal.
The source IP of the packet is the laptop's private IP address and the destination IP of the packet is one of Netflix's IPs.
Let's say for this example we get this IP using DNS.
Now the router in the middle is the default gateway, so any IP packets which are destined for anything but the local network are sent to this router.
Let's assume that we've allocated a public IP address to this laptop of 52.95.36.67.
So there's an entry in the NAT table containing 10.0.0.42 which is the private address and 52.95.36.67 which is the public address and these are statically mapped to one another.
In this case as the packet passes through the NAT device the source address of the packet is translated from the private address to the applicable public address and this results in this new packet.
So this new packet still has Netflix as the destination but now it has a valid public IP address as the source.
So because we've allocated this bottom laptop a public IP address as the packet moves through the NAT device the NAT device translates the source IP address of this packet from the private laptop's IP address to the allocated public address.
So this is an example of static NAT and for anyone who's interested in AWS this is the process which is performed by the internet gateway so one to one static network address translation.
Now this process works in a similar way in both directions.
So let's say that the API client so the server on the top left wants to communicate with the CAT API.
Well, the same process is followed: it generates a packet with the destination IP address of the CAT API and sends it. As it's passing through the NAT device, the router replaces or translates the source address from the private IP address to the allocated public address.
In this case 52.95.36.68.
The CAT API once it receives the packet sees the source as this public IP so when it responds with data its packet has its IP address as the source and the previous public IP address as the destination the one which is allocated to the server on the top left.
So it sends this packet back to this public IP and remember this public IP is allocated by the NAT device in the middle to the private device at the top left of the API client.
So when this packet arrives at the NAT device the NAT table is checked it sees the allocation is for the server on the top left and so this time for incoming traffic the destination IP address is updated to the corresponding private IP address and then the packet is forwarded through to the private server.
This is how static NAT works: public IPs are statically allocated to private IPs.
For outgoing traffic the source IP address is translated from the private address to the corresponding public address and for incoming traffic the destination IP address is translated from the allocated public address through to the corresponding private IP address.
Now at no point are the private devices configured with a public IP.
They always have private IP addresses and just to reiterate this is how the AWS internet gateway works which you'll either already know about or will learn about in a different video.
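The two translation directions just described can be sketched as a pair of table lookups. This is an illustrative Python sketch of the one-to-one mapping, not AWS's actual internet gateway implementation:

```python
# Static NAT: a fixed one-to-one mapping between private and public IPs,
# using the example allocations from above.
NAT_TABLE = {
    "10.0.0.42": "52.95.36.67",  # laptop -> its allocated public IP
    "10.0.0.10": "52.95.36.68",  # server -> its allocated public IP
}
REVERSE = {public: private for private, public in NAT_TABLE.items()}

def translate_outgoing(packet):
    """Rewrite the source IP from the private to the allocated public address."""
    return {**packet, "src": NAT_TABLE[packet["src"]]}

def translate_incoming(packet):
    """Rewrite the destination IP from the public back to the private address."""
    return {**packet, "dst": REVERSE[packet["dst"]]}

out = translate_outgoing({"src": "10.0.0.10", "dst": "1.3.3.7"})
print(out)   # source becomes 52.95.36.68; the private device never sees this
back = translate_incoming({"src": "1.3.3.7", "dst": "52.95.36.68"})
print(back)  # destination becomes 10.0.0.10, so the packet reaches the server
```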
So this is static NAT now let's move on to dynamic NAT.
Okay so this is the end of part one of this lesson.
It was getting a little bit on the long side and so I wanted to add a break.
It's an opportunity just to take a rest or grab a coffee.
Part two will be continuing immediately from the end of part one.
So go ahead complete the video and when you're ready join me in part two.
Welcome back, this is part two of this lesson. We're going to continue immediately from the end of part one. So let's get started.
Now that you know the structure of a segment, let's take a look at how it's used within TCP.
Let's take a few minutes to look at the architecture of TCP.
TCP, like IP, is used to allow communications between two devices.
Let's assume a laptop and a game server.
TCP is connection-based, so it provides a connection architecture between two devices.
And let's refer to these as the client and the server.
Once established, the connection provides what's seen as a reliable communication channel between the client and the server, which is used to exchange data.
Now let's step through how this actually works, now that you understand TCP segments.
The actual communication between client and server, this will still use packets at layer three.
We know now that packets are isolated: they don't provide error checking or ordering, and there's no association between them.
There's no connection as such.
Because they can be received out of order, and because there are no ports, you can't use them in a situation where there will be multiple applications or multiple clients, because the server has no way of separating what relates to what.
But now we have layer four, so we can create segments.
Layer four takes data provided to it and chops that data up into segments, and these segments are encapsulated into IP packets.
These segments contain a sequence number, which means that the order of segments can be maintained.
If packets arrive out of order, that's okay, because the segments can be reordered.
If a packet is damaged or lost in transit, that's okay, because even though that segment will be lost, it can be retransmitted, and segments will just carry on.
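The reordering idea can be shown with a tiny sketch; this is just an illustration of why sequence numbers let layer 4 rebuild an ordered stream, not a real TCP implementation:

```python
# Segments arriving out of order, each carrying a sequence number.
segments = [
    {"seq": 3, "data": "ld!"},
    {"seq": 1, "data": "Hel"},
    {"seq": 2, "data": "lo wor"},
]

# Sorting by sequence number restores the original ordering of the data,
# regardless of the order the packets arrived in.
stream = "".join(s["data"] for s in sorted(segments, key=lambda s: s["seq"]))
print(stream)  # Hello world!
```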
TCP gives you this guaranteed reliable ordered set of segments, and this means that layer four can build on this platform of reliable ordered segments between two devices.
It means that you can create a connection between a client and the server.
In this example, let's assume segments are being exchanged between the client and the game server.
The game communicates to TCP port 443 on the server.
Now, this might look like this architecturally, so we have a connection from a random port on the client to a well-known port, so 443 on the game server.
So between these two ports, segments are exchanged.
When the client communicates to the server, the source port is 23060, and the destination port is 443.
This architecturally is now a communication channel.
TCP connections are bi-directional, and this means that the server will send data back to the client, and to do this, it just flips the ports which are in use.
So then the source port becomes TCP443 on the server, and the destination port on the client is 23060.
And again, conceptually, you can view this as a channel.
Now, these two channels you can think of as a single connection between the client and the server.
Now, these channels technically aren't real; they're created using segments. They build upon the reliable ordered delivery that segments provide, and give you the concept of a stream or a channel between these two devices over which data can be exchanged, but understand that this is really just a collection of segments.
Now, when you communicate with the game server in this example, you use a destination port of 443, and this is known as a well-known port.
It's the port that the server is running on.
Now, as part of creating the connection, you also create a port on your local machine, which is temporary, this is known as the ephemeral port.
This tends to use a higher port range, and it's temporary.
It's used as a source port for any segments that you send from the client to the server.
When the server responds, it uses the well-known port number as the source, and the ephemeral port as the destination.
It reverses the source and destination for any responses.
Now, this is important to understand, because from a layer 4 perspective, you'll have two sets of segments, one with a source port of 23060 and a destination of 443, and ones which are the reverse, so a source port of 443, and a destination of 23060.
From a layer 4 perspective, these are different, and it's why you need two sets of rules on a network ACL within AWS.
One set for the initiating part, so the laptop to the server, and another set for the response part, the server to the laptop.
When you hear the term ephemeral ports or high ports, this means the port range that the client picks as the source port.
Often, you'll need to add firewall rules, allowing all of this range back to the client.
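As a sketch of the ephemeral port behaviour in practice, the snippet below opens a loopback connection using Python's standard socket module. The OS picks the client's ephemeral source port, and the server sees the connection arriving from exactly that port; the loopback address and OS-assigned ports stand in for the game server example above, since we can't bind port 443 on a typical machine.

```python
# Demonstrate ephemeral source ports using a loopback TCP connection.
import socket

# A listener standing in for the server's well-known port (0 lets the
# OS pick a free port, since binding 443 usually needs privileges).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
well_known_port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", well_known_port))
conn, addr = server.accept()

# The client's source port was chosen by the OS from the ephemeral range.
ephemeral_port = client.getsockname()[1]
print("client ephemeral port:", ephemeral_port)
# The server sees the connection coming FROM that ephemeral port.
print("server sees peer:", addr)

conn.close()
client.close()
server.close()
```

Running this a few times shows a different high-numbered ephemeral port on each run, while the well-known side stays fixed for the lifetime of the listener.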
Now, earlier, when I was stepping through TCP segment structure, I mentioned the flags field.
Now, this field contains, as the name suggests, some actual flags, and these are things which can be set to influence the connection.
So, FIN will finish a connection, ACK is an acknowledgement, and SYN is used at the start of connections to synchronize sequence numbers.
With TCP, everything is based on connections.
You can't send data without first creating a connection.
Both sides need to agree on some starting parameters, and this is best illustrated visually.
So, that's what we're going to do.
So, the start of this process is that we have a client and a server.
And as I mentioned a moment ago, before any data can be transferred using TCP, a connection needs to be established, and this uses a three-way handshake.
So, step one is that a client needs to send a segment to the server.
So, this segment contains a random sequence number from the client to the server.
So, this is unique in this direction of travel for segments.
And this sequence number is initially set to a random value known as the ISN or initial sequence number.
So, you can think of this as the client saying to the server, "Hey, let's talk," and setting this initial sequence number.
So, the server receives the segment, and it needs to respond.
So, what it does is it also picks its own random sequence number.
We're going to refer to this as SS, and as with the client side, it picks this randomly.
Now, what it wants to do is acknowledge that it's received all of the communications from the client.
So, it takes the client sequence number, received in the previous segment, and it adds one.
And it sets the acknowledgement part of the segment that it's going to send to the CS plus one value.
What this is essentially doing is informing the client that it's received all of the previous transmission, so CS, and it wants it to send the next part of the data, so CS plus one.
So, it's sending this segment back to the client.
It's picking its own server sequence, so SS, and it's incrementing the client sequence by one, and it sends this back to the client.
So, in essence, this is responding with, "Sure, let's talk."
So, this type of segment is known as a SYN-ACK.
It's used to synchronize sequence numbers, but also to acknowledge the receipt of the client sequence number.
So, where the first segment was called a SYN, to synchronize sequence numbers, this next segment is called a SYN-ACK.
It serves two purposes.
It's used to synchronize sequence numbers, but also to acknowledge the segment from the client.
The client receives the segment from the server.
It knows the server sequence, and so, to acknowledge to the server that it's received all of that information, it takes the server sequence, so SS, and it adds one to it, and it puts this value as the acknowledgement.
Then it also increments its own client sequence value by one, and puts that as the sequence, and then sends an acknowledgement segment, containing all this information through to the server.
Essentially, it's saying, "Awesome, let's go."
At this point, both the client and server agree on the sequence values.
The client has acknowledged the initial sequence value decided by the server, and the server has acknowledged the initial value decided by the client.
So, both of them are synchronized, and at this point, data can flow over this connection between the client and the server.
Now, from this point on, any time either side sends data, they increment the sequence, and the other side acknowledges the sequence value plus one, and this allows for retransmission when data is lost.
So, this is a process that you need to be comfortable with, so just make sure that you understand every step of this process.
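The handshake above can be sketched as a small simulation that tracks only the sequence and acknowledgement numbers; the SYN/ACK flags and everything else in the segments are omitted for clarity, and the segments are just dictionaries rather than real protocol structures.

```python
# A simplified simulation of the TCP three-way handshake, tracking only
# sequence and acknowledgement numbers.
import random

cs = random.randint(0, 2**32 - 1)   # client's initial sequence number (ISN)
ss = random.randint(0, 2**32 - 1)   # server's initial sequence number (ISN)

# Step 1 - SYN: client -> server, "hey, let's talk", carrying the client ISN.
syn = {"seq": cs}

# Step 2 - SYN-ACK: server -> client, carrying the server's own ISN and
# acknowledging the client's sequence by asking for cs + 1 next.
syn_ack = {"seq": ss, "ack": syn["seq"] + 1}

# Step 3 - ACK: client -> server, acknowledging the server's sequence
# (ss + 1) and incrementing its own sequence to cs + 1.
ack = {"seq": cs + 1, "ack": syn_ack["seq"] + 1}

# At this point both sides agree on the sequence values and data can flow.
print(syn_ack["ack"] == cs + 1, ack["ack"] == ss + 1, ack["seq"] == cs + 1)
```

The same increment-and-acknowledge pattern continues after the handshake: each side bumps its sequence as it sends, and the other side's acknowledgement tells it which byte is expected next, which is what makes retransmission possible.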
Okay, so let's move on, and another concept which I want to cover is sessions, and the state of sessions.
Now, you've seen this architecture before, a client communicating with the game server.
The game server is running on a well-known port, so TCP 443, and the client is using an ephemeral port 23060 to connect with port 443 on the game server.
So, response traffic will come up from the game server, its source port will be 443, and it will be connecting to the client on destination port 23060.
Now, imagine that you want to add security to the laptop, let's say using a firewall.
The question is, what rules would you add?
What types of traffic would you allow from where and to where in order that this connection will function without any issues?
Now, I'm going to be covering firewalls in more detail in a separate video.
For now though, let's keep this high level.
Now, there are two types of capability levels that you'll encounter from a security perspective.
One of them is called a stateless firewall.
With a stateless firewall, it doesn't understand the state of a connection.
So, when you're looking at a layer 4 connection, you've got the initiating traffic, and you've got the response traffic.
So, we've got the initiating traffic at the bottom, and the response traffic in red at the top.
With a stateless firewall, you need two rules.
A rule allowing the outbound segments, and another rule which allows the response segments coming in the reverse direction.
So, this means that the outbound connection from the laptop's IP, using port 23060, connecting to the server IP, using port 443.
So, that's the outgoing part.
And then the inbound response, coming from the server IP on port 443, going to the laptop's IP on the ephemeral port 23060.
So, for a stateless firewall, this is two rules: one outbound rule and one inbound rule.
So, this is a situation where we're securing an outbound connection.
So, where the laptop is connecting to the server.
If we were looking to secure, say, a web server, where connections would be made into our server, then the initial traffic would be inbound, and the response would be outbound.
There's always initiating traffic, and then the response traffic.
And you have to understand the directionality to understand what rules you need with a stateless firewall.
So, that's a stateless firewall.
And if you have any AWS experience, that's what a network access control list is.
It's a stateless firewall which needs two rules for each TCP connection, one for each direction.
Now, a stateful firewall is different.
This understands the state of the TCP connection.
So, with this, it sees the initial traffic and the response traffic as one thing.
So, if you allow the initiating connection, then you automatically allow the response.
So, in this case, if we allowed the initial outbound connection from the client laptop to the server, then the response traffic, the inbound traffic, would be automatically allowed.
In AWS, this is how a security group works.
The difference is that a stateful firewall understands layer 4 and the state of the traffic.
It's an extension of what a stateless firewall can achieve.
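To make the stateless versus stateful distinction concrete, here's a minimal sketch; the rule format, function names, and IP addresses are all invented for illustration, and real firewalls (including AWS network ACLs and security groups) are far more involved.

```python
# Contrast stateless and stateful rule evaluation for one TCP connection.

# Stateless: every segment is checked against explicit rules, so both
# directions of one TCP connection need their own rule.
stateless_rules = [
    ("10.0.0.5", 23060, "52.95.0.1", 443),   # outbound: laptop -> server
    ("52.95.0.1", 443, "10.0.0.5", 23060),   # inbound: response traffic
]

def stateless_allows(src_ip, src_port, dst_ip, dst_port):
    return (src_ip, src_port, dst_ip, dst_port) in stateless_rules

# Stateful: only the initiating direction is configured; the firewall
# records the connection and automatically allows the reversed tuple.
stateful_rules = [("10.0.0.5", 23060, "52.95.0.1", 443)]
connection_table = set()

def stateful_allows(src_ip, src_port, dst_ip, dst_port):
    if (src_ip, src_port, dst_ip, dst_port) in stateful_rules:
        connection_table.add((src_ip, src_port, dst_ip, dst_port))
        return True
    # Allow responses to any connection we've already seen initiated.
    return (dst_ip, dst_port, src_ip, src_port) in connection_table

print(stateless_allows("52.95.0.1", 443, "10.0.0.5", 23060))  # needs rule 2
stateful_allows("10.0.0.5", 23060, "52.95.0.1", 443)          # initiation
print(stateful_allows("52.95.0.1", 443, "10.0.0.5", 23060))   # auto-allowed
```

Notice that deleting the second entry from `stateless_rules` would silently break the response traffic, while the stateful version only ever needed the initiating rule.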
Now, this is one of those topics where there is some debate about whether this is layer four or layer five.
Layer four uses TCP segments and concerns itself with IP addresses and port numbers.
Strictly speaking, the concept of a session or an ongoing communication between two devices, that is layer five.
It doesn't really matter; I tend to cover this level of detail across layer four and layer five together anyway, because it's just easier to explain.
But you need to remember the term stateless and the term stateful and how they change how you create security rules.
At this point, that's everything I wanted to cover.
So, go ahead and complete this video. And when you're ready, I'll look forward to you joining me in the next video of this series.
Welcome back. In this part of the series, I'm going to be primarily covering the transport layer, which is layer 4 of the OSI model.
I'm also going to be touching upon layer 5, which is the session layer of the OSI model, because there is some overlap for certain features, and so it's easier to cover them in one lesson.
The transport layer runs over the top of the network layer and provides most of the functionality which supports the networking we use day-to-day on the internet.
The session layer runs on top of the transport layer, and many features, which you might use, are often mixed between these two layers.
Now, as I've already mentioned, it's not generally worth arguing about whether things are covered at layer 4 or layer 5, so I'll explain both of these layers as one grouping of functionality.
The OSI model is conceptual, after all, and many things exist between or across two different layers.
Now, we've got a lot to cover, so let's jump in and get started.
Before we get started with layer 4, I want to summarize the situation and limitations with layer 3.
Now, we have a functional layer 3, which means that we can communicate between two devices, say a source and destination laptop, using a source and destination IP address.
If both of these use public IP addresses, it doesn't matter where on the internet these devices are, layer 3 and IP routing will ensure that any packets generated and sent from the source laptop will move across any layer 2 networks between the source and destination.
Let's say that using layer 3, the source laptop on the top generates 6 IP packets, and these are all destined for the destination laptop at the bottom right.
The important thing to understand about layer 3 in this context is that each packet is a separate and isolated thing, and it's routed independently over the internet.
It might be logical to assume that the packets arrive in the same state, so the same timing, the same order, and the same quality, but sadly, that's not true.
In ideal conditions, yes, but generally, if you're communicating using only IP, then you're going to have intermittent network conditions, and that can result in a few cases where the arrival condition of packets is different than the condition when they were generated and sent.
One of the first things which we might encounter is out-of-order arrival.
In this case, where packet 3 arrives before packet 2, layer 3, specifically IP, provides no method to ensure the ordering of packet arrival.
For applications which only used IP, this would mean complex logic would need to be built into the application to ensure packets could be sequenced in the same way, and this is not a trivial task.
Because each packet is routed as an independent thing, it's possible packet 2 could have taken a slow, less efficient route, which is why it arrives later.
This is a negative of layer 3, which can be fixed at layer 4.
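As a tiny illustration of what layer 4 adds here: if each piece of data carries a sequence number, out-of-order arrival becomes trivial to repair at the receiver. The sequence numbers and payloads below are invented for illustration.

```python
# Sketch of layer 4 reordering: each segment carries a sequence number,
# so even if the packets carrying them arrive out of order, the receiver
# can sort them back into the transmitted order.
arrived = [(1, "He"), (3, "wo"), (2, "llo "), (4, "rld")]  # (seq, data)

# Reassemble by sequence number - exactly what raw IP cannot do for us.
reassembled = "".join(data for seq, data in sorted(arrived))
print(reassembled)  # Hello world
```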
Another issue with layer 3 is that packets can just go missing.
This can be due to network outages or network conditions, which cause temporary routing loops.
Remember, when I talked about packet structure, I talked about the TTL field, which limited the number of hops a packet could go through.
Well, if the number of hops exceeds this, then it will be discarded.
With IP, there's no reliable method of ensuring packet delivery, and so it's a relatively regular occurrence that packets go missing.
Now, network conditions can also cause delay in delivery, and for any latency-sensitive applications, this can cause significant issues.
The key thing to keep in mind about layer 3, every packet is different.
It's single, it's isolated.
It's a different unit of data which is being routed across a layer 3 network using layer 2 networks as transit.
What happens to one packet might not happen or might happen in a different way to another packet.
Another limitation with layer 3, and this one is probably the one which has the most obvious effect, is that if you think back to the structure of IP packets, they have a source and destination field.
They don't have anything beyond that to distinguish channels of communication.
Packets from a source IP to a destination IP, they're all the same.
You couldn't have two applications running on the source IP, communicating with two applications running on the destination IP, because there's no method of distinguishing between the applications.
Any packet sent by one application would look to be the same as one sent by another.
Think about what you're doing on your device right now.
You might be watching a video.
Do you have a web browser open doing something else?
Do you have an SSH connection or email or any other application which uses the internet?
This means multiple applications, and IP on its own offers no way to separate the packets for individual applications.
This is something which is remedied at layer 4.
Lastly, IP has no flow control.
If a source device is transmitting packets faster than a destination device can receive them, then it can saturate the destination connection and cause loss of data, packets which will be dropped.
Now with only layer 3, we wouldn't have anywhere near the flexibility required to have the internet function in the way that it does.
For that, we need layer 4, and that's what I want to cover in this part of the lesson series.
So what is layer 4 and how does it function?
Let's take a look.
So far, this is what we have network model-wise.
We've discussed the physical layer which is layer 1 at the OSI model.
This relates to how raw bit stream data is transmitted to or received from physical shared media.
We've talked about layer 2 which adds identifiable devices, switches and media access control, but layer 2 ends with isolated layer 2 networks.
In the previous part of this lesson series, I introduced layer 3 which adds IP addressing and routing, so packets can be routed from source to destination across multiple interconnected networks.
Layer 4 builds on top of this.
It adds two new protocols, TCP which stands for transmission control protocol and UDP which stands for user datagram protocol.
Now both of these run on top of IP, and both of them add a collection of features depending on which one of them is used.
Now if you've heard the term TCP/IP, that means TCP running on top of IP.
At a high level, you would pick TCP when you want reliability, error correction and ordering of data.
It's used for most of the important application layer protocols such as HTTP, HTTPS, SSH and so on.
Now TCP is a connection-oriented protocol which means that you need to set up a connection between two devices and once set up, it creates a bidirectional channel of communications.
UDP on the other hand is faster because it doesn't have the overhead which TCP requires for the reliable delivery of data.
This means that it's less reliable.
Now there's a great joke about UDP.
I'd tell you it, but you might not get it.
Anyway, it's a good job my lessons are better than my jokes.
In this lesson, I'm going to spend most of my time talking about TCP because it's used by more of the important protocols that you'll use day-to-day on the internet.
But just know that both TCP and UDP, they both run on top of IP and they're used in the same way.
They use IP as transit.
TCP just offers a more reliable connection-oriented architecture whereas UDP is all about performance.
So there's a simple trade-off.
Now for this lesson series, as I've just talked about, I'm going to be focusing on TCP because that's what's used for most of the important upper layer protocols.
So let's take a look at exactly how TCP works.
TCP introduces something called segments.
Now a segment is just another container for data like packets and frames before them.
Segments are specific to TCP.
Before we get started talking about the segments themselves, it's important to understand that segments are contained in, which is known as encapsulated within, IP packets.
So let's say that we have a stream of packets.
You know by now that these are all isolated packets.
They're just pieces of data which are routed independently from source to destination.
They're all treated separately.
Well, TCP segments are placed inside packets and the packets carry the segments from their source to their destination.
Segments don't have source or destination IP addresses because they use the IP packets for the transit from source to destination.
This is all handled by layer 3.
In this case, the internet protocol.
TCP segments add additional capabilities to IP packets.
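A conceptual sketch of this encapsulation, using illustrative Python classes rather than real protocol layouts: the segment carries ports and a sequence number, while all addressing from source to destination lives in the packet that carries it.

```python
# Conceptual sketch of a TCP segment encapsulated in an IP packet.
from dataclasses import dataclass

@dataclass
class Segment:            # layer 4 - no IP addresses at all
    src_port: int
    dst_port: int
    seq: int
    data: bytes

@dataclass
class Packet:             # layer 3 - addressing lives here
    src_ip: str
    dst_ip: str
    payload: Segment      # the segment is encapsulated in the packet

seg = Segment(src_port=23060, dst_port=443, seq=1, data=b"game data")
pkt = Packet(src_ip="10.0.0.5", dst_ip="52.95.0.1", payload=seg)
print(pkt.payload.dst_port)  # 443 - ports come from the segment, not the packet
```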
Let's step through the structure of segments so that we can fully understand them.
And I'm going to skip past a few attributes of segments just as I did with layer 3 because there are some parts which are less important or less situational.
So I won't be covering either the options or padding fields within a segment.
The first fields which I want to cover are the source and destination ports.
In addition to the source and destination IP addresses that IP packets provide, TCP segments add source and destination ports.
And this gives the combined TCP/IP protocol the ability to have multiple streams of conversations at the same time between two devices.
When you open the AWS web interface, you're communicating from a port on your local machine to a port on the AWS servers, TCP port 443, which is HTTPS.
Now because of ports, you can have multiple streams of communication from your machine.
One to AWS, one to Netflix, and one to this website where you're watching this video.
At the other side, AWS can have multiple streams of communication to their servers.
Each conversation is a unique combination of the source and destination IP, the source port, and the destination port.
All four of these values together identify a single conversation, a single communications channel.
These two fields are what allow the internet to function in the flexible way that it does.
It's why SSH and HTTPS can exist on the same EC2 instance and why you can have multiple SSH connections open to the same EC2 instance if you wanted to.
And I'll cover more on how this works as we move through this lesson.
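As a sketch of that idea, a receiving host can demultiplex arriving data using the full 4-tuple, which is why two SSH sessions to the same server stay distinct. The IPs, ports, and payloads below are invented for illustration.

```python
# Demultiplex "segments" into conversations keyed by the full 4-tuple.
conversations = {}

def deliver(src_ip, src_port, dst_ip, dst_port, data):
    key = (src_ip, src_port, dst_ip, dst_port)
    conversations.setdefault(key, []).append(data)

# Two SSH connections from the same client IP to the same server port -
# only the ephemeral source ports differ, and that's enough to keep
# the conversations separate.
deliver("10.0.0.5", 50001, "52.95.0.1", 22, "session one data")
deliver("10.0.0.5", 50002, "52.95.0.1", 22, "session two data")

print(len(conversations))  # 2 - two distinct conversations
```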
Now next, within the segment, is sequence.
And the sequence number is incremented with each segment that's sent.
And it's unique.
It can be used for error correction if things need to be retransmitted.
It can also be used to ensure that when IP packets are received and the TCP segments are pulled out, they can be correctly ordered.
So the sequence number is a way of uniquely identifying a particular segment within a particular connection, so that both sides can make observations about it.
And the way that these observations are done is using acknowledgements.
The acknowledgement field means that one side can indicate that it's received up to and including a certain sequence number.
Every segment which is transmitted needs to be acknowledged.
Remember that TCP is a reliable protocol, and so if a device is transmitting segments one, two, three and four to another device, then the other device needs to acknowledge that it's received segments one, two, three and four.
And this is what the acknowledgement field is for.
So sequence number and acknowledgement are used hand in hand.
Next we have a field called flags and things.
Now within a segment, there is an actual flags component which is nine bits.
And this allows various controls over the TCP segments and the wider connection.
Flags are used to close the connection or to synchronize sequence numbers, but there are also additional things like a data offset and some reserved space.
So this "flags and things" field is essentially the flags plus a number of extra fields which I don't need to go into at this point in the lesson.
Now next we've got the TCP window.
And this is interesting.
This defines the number of bytes that you indicate that you're willing to receive between acknowledgements.
Once reached, the sender will pause until you acknowledge that amount of data.
And this is how flow control is implemented.
It lets the receiver control the rate at which the sender sends data.
If you use a smaller window, it provides additional levels of control over how quickly you're sent data.
Larger windows are more efficient because the header of a TCP segment takes up an amount of space and the smaller the window, the more headers are involved.
So this window setting is quite important if you're using a TCP for practical reasons, but we don't need to go into too much detail in this lesson.
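A deliberately simple model of that behaviour: the sender transmits at most one window's worth of bytes, then waits for an acknowledgement before continuing. Real TCP windows slide continuously rather than working in fixed bursts, but the sketch shows why smaller windows mean more round trips and more header overhead.

```python
# Toy model of TCP window flow control: at most 'window' bytes may be
# outstanding before the sender pauses for an acknowledgement.
def send_with_window(data: bytes, window: int):
    bursts = []             # what goes on the wire between acknowledgements
    sent = 0
    while sent < len(data):
        # Send at most 'window' bytes, then wait for the acknowledgement.
        burst = data[sent:sent + window]
        bursts.append(burst)
        sent += len(burst)  # receiver acknowledges; sender may continue
    return bursts

bursts = send_with_window(b"0123456789", window=4)
print(bursts)  # [b'0123', b'4567', b'89'] - smaller window, more bursts
```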
Next we have the checksum, which is used for error checking.
It means that the TCP layer is able to detect errors and can arrange for retransmission of the data as required.
And then lastly, we have the urgent pointer.
And this is a cool feature.
Imagine if you have a data transfer application where 99% of data is the data being transferred and 1% is control traffic.
So communication between the client and the server, coordinating the actual data transfer.
Well, setting this field in a segment means that both sides can process the control traffic separately.
So the control traffic always takes priority within the communication.
So any protocols which are latency-sensitive and transfer data, such as FTP and Telnet, can use this field.
Now all of these fields together are known as the TCP header.
And the remaining capacity of a TCP segment is, logically enough, used for data.
So that's how segments are placed inside packets, transmitted by layer 4 of one network stack, and received by layer 4 of another network stack.
In this case TCP.
Okay so this is the end of part 1 of this lesson.
It was getting a little bit on the long side and so I wanted to add a break.
It's an opportunity just to take a rest or grab a coffee.
Part 2 will be continuing immediately from the end of part 1.
So go ahead, complete video and when you're ready join me in part 2.
Welcome back, this is part three of this lesson. We're going to continue immediately from the end of part two. So let's get started.
The address resolution protocol is used generally when you have a layer three packet and you want to encapsulate it inside a frame and then send that frame to a MAC address.
You don't initially know the MAC address and you need a protocol which can find the MAC address for a given IP address.
For example, if you communicate with AWS, AWS will be the destination of the IP packets.
But you're going to be forwarding via your home router which is the default gateway.
And so you're going to need the MAC address of that default gateway to send the frame to containing the packet.
And this is where ARP comes in.
ARP will give you the MAC address for a given IP address.
So let's step through how it works.
For this example, we're going to keep things simple.
We've got a local network with two laptops, one on the left and one on the right.
And this is a layer three network which means it has a functional layer two and layer one.
What we want is the left laptop which is running a game and it wants to send the packets containing game data to the laptop on the right.
This laptop has an IP address of 133.33.3.10.
So the laptop on the left takes the game data and passes it to its layer three which creates a packet.
This packet has its IP address as the source and the right laptop as the destination.
So 133.33.3.10.
But now we need a way of being able to generate a frame to put that packet in for transmission.
We need the MAC address of the right laptop.
This is what ARP or the address resolution protocol does for us.
It's a process which runs between layer two and layer three.
It's important to point out at this point that now you know how devices can determine if two IP addresses are on the same local network.
In this case, the laptop on the left because it has its subnet mask and IP address as well as the IP address of the laptop on the right.
It knows that they're both on the same network.
And so this is a direct local connection.
Routers aren't required.
We don't need to use any routers for this type of communication.
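This same-network calculation can be sketched with Python's standard ipaddress module: a host combines its own address and subnet mask into a network, then tests whether the destination falls inside it. The left laptop's exact address isn't given in the example, so 133.33.3.7 and the /24 mask below are assumptions.

```python
# Determine whether a destination is local (no router needed) or remote.
import ipaddress

local_ip = "133.33.3.7"     # assumed address for the left laptop
network = ipaddress.ip_network(f"{local_ip}/255.255.255.0", strict=False)

destination = ipaddress.ip_address("133.33.3.10")   # the right laptop
print(destination in network)   # True - same local network, no router needed

remote = ipaddress.ip_address("52.95.0.1")          # e.g. a remote endpoint
print(remote in network)        # False - traffic goes via the default gateway
```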
Now ARP broadcasts on layer two.
It sends an ARP frame to all Fs as a MAC address.
And it's asking who has the IP address 133.33.3.10 which is the IP address of the laptop on the right.
Now the right laptop, because it has a full layer one, two and three network stack, is also running the address resolution protocol.
The ARP software sees this broadcast and it responds by saying I'm that IP address.
I'm 133.33.3.10.
Here's my MAC address, ending 5B:78.
So now the left laptop has the MAC address of the right one.
Now it can use this destination MAC address to build a frame, encapsulate the packet in this frame.
And then once the frame is ready, it can be given to layer one and sent across the physical network to layer one of the right laptop.
Layer one of the right laptop receives this raw bit stream and hands it off to the layer two software, also on the right laptop.
Now its layer two software reviews the destination MAC address and sees that the frame is destined for itself.
So it strips off the frame and it sends the packet to its layer three software.
Layer three reviews the packet, sees that it is the intended destination and it de-encapsulates the data.
So strips away the packet and hands the data back to the game.
Now it's critical to understand as you move through this lesson series, even if two devices are communicating using layer three, they're going to be using layer two for local communications.
If the machines are on the same local network, then it will be one layer two frame per packet.
But as you'll see in a moment, if the two devices are remote, then there can be many different layer two frames which are used along the way.
And ARP, or the address resolution protocol, is going to be essential to ensure that you can obtain the MAC address for a given IP address.
This is what facilitates the interaction between layer three and layer two.
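The ARP behaviour just described can be sketched as a local cache plus a broadcast query. The `hosts_on_lan` dictionary below stands in for "every host on the segment hears the broadcast and only the owner replies"; the MAC addresses are invented for illustration (apart from echoing the 5B:78 ending used above).

```python
# Sketch of ARP: consult a cache, otherwise broadcast a who-has query.
arp_cache = {}

# Stand-in for every host on the local segment hearing the broadcast.
hosts_on_lan = {
    "133.33.3.10": "3e:22:fb:b9:5b:78",   # right laptop
    "133.33.3.1":  "0c:61:12:aa:00:01",   # default gateway (illustrative)
}

def arp_resolve(ip: str) -> str:
    if ip in arp_cache:                    # answered by a previous query
        return arp_cache[ip]
    # Broadcast: "who has <ip>?" - only the owner replies with its MAC.
    mac = hosts_on_lan[ip]
    arp_cache[ip] = mac                    # cache the answer for next time
    return mac

print(arp_resolve("133.33.3.10"))  # the MAC address of the right laptop
```

With the MAC address in hand, the sender can build the frame, encapsulate the packet, and hand the whole thing to layer one for transmission.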
So now that you know about packets, now that you know about subnet masks, you know about routes and route tables, and you know about the address resolution protocol or ARP, let's bring this all together now and look at a routing example.
So we're going to go into a little bit more detail now.
In this example, we have three different networks.
We've got the orange network on the left, we've got the green network in the middle, and then finally the pink network on the right.
Now between these networks are some routers.
Between the orange and green networks is router one, known as R1, and between the green and pink networks is router two, known as R2.
Each of these routers has a network interface in both of the networks that it touches.
Routers are layer three devices, which means that they understand layer one, layer two, and layer three.
So the network interfaces in each of these networks work at layer one, two, and three.
In addition to this, we have three laptops.
We've got two in the orange network, so device one at the bottom and device two at the top, and then device three in the pink network on the right.
Okay, so what I'm going to do now is to step through two different routing scenarios, and all of this is bringing together all of the individual concepts which I've covered at various different parts of this part of the lesson series.
First, let's have a look at what happens when device one wants to communicate with device two using its IP address.
First, device one is able to use its own IP address and subnet mask together with device two's IP address, and calculate that they're on the same local network.
So in this case, router R1 is not required.
So a packet gets created called P1 with a D2 IP address as the destination.
The address resolution protocol is used to get D2's MAC address, and then that packet is encapsulated in a frame with that MAC address as the destination.
Then that frame is sent to the MAC address of D2.
Once the frame arrives at D2, it checks the frame, sees that it's the destination, and so it accepts it and then strips the frame away.
It passes the packet to layer three.
It sees that it's the destination IP address, so it strips the packet away and then passes the game data to the game.
Now all of this should make sense.
This is a simple local network communication.
Now let's step through a remote example.
Device two communicating with device three.
These are on two different networks.
Device two is on the orange network, and device three is on the pink network.
So first, the D2 laptop, it compares its own IP address to the D3 laptop IP address, and it uses its subnet mask to determine that they're on different networks.
Then it creates a packet P2, which has the D3 laptop as its destination IP address.
It wraps this up in a frame called F2, but because D3 is remote, it knows it needs to use the default gateway as a router.
So for the destination MAC address of F2, it uses the address resolution protocol to get the MAC address of the local router R1.
So the packet P2 is addressed to the laptop D3 in the pink network, so the packet's destination IP address is D3.
The frame F2, though, is addressed to router R1's MAC address, so this frame is sent to router R1.
R1 is going to see that the MAC address is addressed to itself, and so it will strip away the frame F2, leaving just the packet P2.
Now a normal network device such as your laptop or phone, if it received a packet which wasn't destined for it, it would just drop that packet.
A router though, it's different.
The router's job is to route packets, so it's just fine to handle a packet which is addressed somewhere else.
So it reviews the destination of the packet P2, it sees that it's destined for laptop D3, and it has a route for the pink network in its route table.
It knows that for anything destined for the pink network, then router R2 should be the next hop.
So it takes packet P2 and it encapsulates it in a new frame F3.
Now the destination MAC address of this frame is the MAC address of router R2, and it gets this by using the address resolution protocol or ARP.
So it knows that the next hop is the IP address of router R2, and it uses ARP to get the MAC address of router R2, and then it sends this frame off to router R2 as the next hop.
So now we're in a position where router R2 has this frame F3, containing the packet P2 destined for the machine inside the pink network. R2 sees that it's the destination of that frame.
The MAC address on the frame is its MAC address, so it accepts the frame and it removes it from around packet P2.
So now we've just got packet P2 again.
So now router R2 reviews the packet and it sees that it's not the destination, but that doesn't matter because R2 is a router.
It can see that the packet is addressed to something on the same local network, so it doesn't need to worry anymore about routing.
Instead, it uses ARP to get the MAC address of the device with the intended destination IP address, so laptop D3.
It then encapsulates the packet P2 in a new frame, F4, whose destination MAC address is that of laptop D3, and then it sends this frame through to laptop D3. Laptop D3 receives the frame and sees that it is the intended destination of the frame, because the MAC address matches its own MAC address.
It strips off the frame, it also sees that it's the intended destination of the IP packet, it strips off the packet, and then the data inside the packet is available for the game that's running on this laptop.
So it's a router's job to move packets between networks.
Routers do this by reviewing packets, checking route tables for the next hop or target addresses, and then adding frames to allow the packets to pass through intermediate layer 2 networks.
A packet during its life might move through any number of layer 2 networks and be re-encapsulated many times during its trip, but normally the packet itself remains unchanged all the way from source to destination.
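This re-encapsulation idea can be sketched in a few lines of Python. Note this is purely illustrative: the dictionaries, MAC strings, and function names are made up for the sake of the example, not any real networking API.

```python
# Sketch: a packet stays constant while the frame around it changes
# at every layer 2 hop. All structures here are illustrative only.

def make_packet(src_ip, dst_ip, data):
    return {"src_ip": src_ip, "dst_ip": dst_ip, "data": data}

def encapsulate(packet, src_mac, dst_mac):
    # A frame wraps the packet for one local layer 2 network only.
    return {"src_mac": src_mac, "dst_mac": dst_mac, "payload": packet}

packet = make_packet("1.3.3.7", "52.217.13.37", "game data")

# Hop 1: laptop -> local router R1
frame1 = encapsulate(packet, "aa:aa", "r1:mac")
# Hop 2: R1 strips frame1 and wraps the very same packet for the next network
frame2 = encapsulate(frame1["payload"], "r1:mac", "r2:mac")

# The packet object itself never changed between hops.
assert frame1["payload"] is frame2["payload"]
```

The key point the sketch shows: each `encapsulate` call produces a new frame, but both frames carry the identical packet object, just as a real packet is normally unchanged from source to destination.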
A router is just a device which understands physical networking, it understands data link networking, and it understands IP networking.
So that's layer 3, the network layer, and let's review what we've learned quickly before we move on to the next layer of the OSI model.
Now this is just an opportunity to summarize what we've learned, so at the start of this video, at layer 2 we had media access control, and we had device to device or device to all device communications, but only within the same layer 2 network.
So what does layer 3 add to this?
Well it adds IP addresses, either version 4 or version 6, and this is cross network addressing.
It also adds the Address Resolution Protocol, or ARP, which can find the MAC address for a given IP address.
Layer 3 adds routes, which define where to forward a packet to, and it adds route tables, which contain multiple routes.
It adds the concept of a device called a router, which moves packets from source to destination, encapsulating these packets in different layer 2 frames along the way.
This altogether allows for device to device communication over the internet, so you can access this video, which is stored on a server, which has several intermediate networks away from your location.
So you can access this server, which has an IP address, and packets can move from the server through to your local device, crossing many different layer 2 networks.
Now what doesn't IP provide?
It provides no method for individual channels of communication.
Layer 3 provides packets, and packets only have a source IP and destination IP, so for a given pair of devices you can only have one stream of communication, which means you can't have different applications on those devices communicating at the same time.
And this is a critical limitation, which is resolved by layers 4 and above.
Another element of layer 3 is that in theory packets could be delivered out of order.
Individual packets move across the internet through intermediate networks, and depending on network conditions, there's no guarantee that those packets will take the same route from source to destination, and because of different network conditions, it's possible they could arrive in a different order.
And so if you've got an application which relies on the same ordering at the point of receipt as at the point of transmission, then we need to add additional things on top of layer 3, and that's something that layer 4 protocols can assist with.
Now at this point we've covered everything that we need to for layer 3.
There are a number of related subjects which I'm going to cover in dedicated videos, such as network address translation, how the IP address space functions, and IP version 6. In this component of the lesson series, we've covered how the architecture of layer 3 of the OSI model works.
So at this point, go ahead and complete this video, and then when you're ready, I'll look forward to you joining me in the next part of this lesson series where we're going to look at layer 4.
-
-
learn.cantrill.io
-
Welcome back, this is part two of this lesson.
We're going to continue immediately from the end of part one, so let's get started.
Now we talked about the source and destination IP address of these packets, so now let's focus on IP addressing itself.
IP addressing is what identifies a device which uses layer 3 IP networking.
Now I'll talk more about how IP addressing is decided upon and assigned in another video, for now I want you to fully understand the structure of an IP address.
In this video I'll be focusing on IP version 4, because I have a separate video which will cover IP version 6 in depth.
This is an IP address, 133.33.3.7.
From a pure network connectivity point of view, if you have a valid IP version 4 address, you can send packets to 133.33.3.7 and they will at least start on the journey of getting to this destination.
Now there might be blocks in the way, such as firewalls or other security restrictions, or the IP could be offline, but packets will move from you over the internet on their way to this IP address.
Now this format is known as dotted decimal notation.
It's four decimal numbers from 0 to 255 separated by dots.
So 133.33.3.7.
Now all IP addresses are actually formed of two different parts.
There's the network part which states which IP network this IP address belongs to, and then the host part which represents hosts on that network.
So in this example the network is 133.33, and then the hosts on that network can use the remaining part of the IP.
In this case 3.7 is one device on that network, a laptop.
A really important part of understanding how your data gets from your location to a remote network is this: given two IP addresses, how do you tell if they're on the same IP network or different IP networks?
If the network part of the IP address matches between two different IP addresses, then they're on the same IP network.
If not, they're on different IP networks.
So you need to be able to calculate, when you have an IP address, which part of that address is the network and which part is the host.
And by the end of this lesson you will know how to do that.
Now IP addresses are not actually dotted decimal.
That's how they're represented for humans.
They're actually binary numbers.
Each decimal part of the IP address is an 8-bit binary number.
There are four of these per IP version 4 address, and this means that an entire IP address is 32 bits in size.
So four sets of 8 bits, and each of these 8 bits is known as an octet.
You might hear somebody refer to say the first and second octet of an IP address, and this is always read left to right.
The first octet in this example is 133, or in binary 10000101.
And the second octet is 33, which in binary is 00100001.
Now this binary conversion, this is not something which I'm going to cover in this lesson, but I will make sure there's a link attached to the lesson which shows you how to do it.
It's just decimal to binary maths, and once you know how it's done, it's really easy to do, even in your head.
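As a quick illustration of that decimal-to-binary maths, here's a short Python sketch that converts each octet of the example address into its 8-bit binary form and back again:

```python
# Convert each octet of an IPv4 address to its 8-bit binary form and back.
ip = "133.33.3.7"
octets = [int(part) for part in ip.split(".")]

# format(n, "08b") gives an 8-character, zero-padded base-2 string.
binary_octets = [format(o, "08b") for o in octets]
print(binary_octets)  # ['10000101', '00100001', '00000011', '00000111']

# Converting back: int(bits, 2) parses a base-2 string.
restored = ".".join(str(int(bits, 2)) for bits in binary_octets)
assert restored == ip
```

So 133 is 10000101 (128 + 4 + 1), and joining all four octets together gives the full 32-bit address.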
Now I'm going to talk about how you can determine which IPs are on the same network next, but I wanted to introduce the format of IP addresses first.
In this example, this IP address has what's known as a /16 prefix.
This means that the first 16 bits represent the network, and the rest are for hosts.
Now I'll talk about how this works in detail coming up next.
Because the first 16 bits are the network, it means that a second IP address, 133.33.33.37, is on the same IP network, because its network part, 133.33, matches.
I'm going to detail coming up next how this calculation is done.
For now, I want you to be comfortable knowing that if the network component of two IP addresses match, then devices are local.
If they don't match, then devices are remote.
That matters when we start covering IP routing.
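That local-versus-remote check can be demonstrated with Python's standard `ipaddress` module. This is a sketch of the comparison, not how a real network stack implements it:

```python
import ipaddress

# Two IPs are on the same network if their network parts match
# under a given prefix length (subnet mask).
def same_network(ip_a, ip_b, prefix_len):
    net_a = ipaddress.ip_interface(f"{ip_a}/{prefix_len}").network
    net_b = ipaddress.ip_interface(f"{ip_b}/{prefix_len}").network
    return net_a == net_b

# With a /16, the network part is the first two octets (133.33).
assert same_network("133.33.3.7", "133.33.33.37", 16)      # local devices
assert not same_network("133.33.3.7", "52.217.13.37", 16)  # remote devices
```

If the function returns `True`, a device would communicate directly on the local network; if `False`, it would forward packets to its default gateway.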
Now IP addresses on networks are either statically assigned by humans, which is known as a static IP, or they're assigned automatically by machines, so a server on your network running DHCP server software.
Now DHCP stands for Dynamic Host Configuration Protocol, and this is something I'll be covering in detail in a separate video.
On a network, IP addresses need to be unique, or bad things happen.
Globally, in most cases, IP addresses need to be unique, or also bad things happen.
So keep that in mind.
Generally, when you're dealing with IP addresses, you want them to be unique, especially on your local network.
Now let's talk about subnet masks, because these are what helps us determine if IP addresses are local to each other or remote.
Subnet masks are a critical part of IP networking.
They're configured on layer 3 interfaces, along with IP addresses.
What's also configured on most network interfaces is a default gateway.
This is an IP address on a local network, which packets are forwarded to, generally, if the intended destination is not a local IP address.
Subnet masks are what allow an IP device to know if an IP address which it's communicating with is on the same network or not, and that influences if the device attempts to communicate directly on the local network, or if it needs to use the default gateway.
On your home network, for example, your internet router is likely set as your default gateway, so when you browse to Netflix.com or interact with AWS because the IP addresses that you're talking to are not local, then packets from your machine are passed to your router, which is the default gateway.
So let's say that we have an IP address, 133.33.3.7.
Now this alone is just a single address.
We don't know which part of it is the network and which part of it is the host component.
I just finished talking about how IP addresses are really binary numbers.
This IP address in binary is 10000101, so that's the first octet, then 00100001, that's the second octet, then 00000011, that's the third octet, and finally 00000111, that's the fourth octet, and that represents 133.33.3.7.
So as a reminder, if we're dealing manually with subnet masks, and remember this is something that's generally performed in software by your networking stack, the first thing we need to do is convert the dotted decimal notation into a binary number.
Now along with this IP address, we would generally also configure either statically or using DHCP, a subnet mask.
In this example, the subnet mask that we have is 255.255.0.0 or /16, and these mean the same thing, and I'll show you why over the next few minutes.
A subnet mask represents which part of the IP is for the network.
It helps you, or more often a machine, know which part of an IP address is which.
To use a subnet mask, you first have to convert it into binary, so 255.255.0.0 is this in binary.
We convert it just like an IP address.
So the first octet is all 1s, the second octet is all 1s, the third and fourth octet are all 0s.
The /16, which is known as the prefix, this is just shorthand.
It's the number of 1s in the subnet mask starting from the left.
So /16 simply means 16 1s, which is the same as 255.255.0.0 when you convert that into binary.
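The equivalence between a /prefix and a dotted-decimal mask is easy to verify in code. Here's a small Python sketch of the conversion (a helper written for this illustration, not a standard library function):

```python
# A /N prefix is just N one-bits from the left of a 32-bit value.
def prefix_to_mask(prefix_len):
    # Shift a run of 32 ones left, keeping only the top prefix_len bits set.
    mask_int = (0xFFFFFFFF << (32 - prefix_len)) & 0xFFFFFFFF
    # Split the 32-bit value back into four dotted-decimal octets.
    return ".".join(str((mask_int >> shift) & 0xFF) for shift in (24, 16, 8, 0))

assert prefix_to_mask(16) == "255.255.0.0"    # 16 ones: the /16 in the lesson
assert prefix_to_mask(24) == "255.255.255.0"  # 24 ones
assert prefix_to_mask(0) == "0.0.0.0"         # no ones at all
```

So /16 and 255.255.0.0 really are the same thing, just written two different ways.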
Now when you have the subnet mask in binary, anything with a 1 represents the network, anything with a 0 represents the host component.
So if you overlay a subnet mask and an IP address, both of them in binary, it becomes really easy to tell which part is which.
Something else which is really cool is that for a given network, you can calculate the start and end IP addresses of that network.
Take for example, the IP address that's on screen now, so 133.33.3.7.
Well we've converted that into binary and we've also converted the subnet mask of 255.255.0.0 also into binary.
So that's in blue, right below the binary IP address.
To calculate the start of the network, we begin with the network part of the IP address and then for the host part, we have all 0s.
So let's look at what we've done.
The subnet mask, where there are 1s, this is the network part.
So we take the original IP address and where the subnet mask has 1s, that's the network part, so 133.33.
Then for the part which is hosts, which is where the subnet mask shows 0s, then we have all 0s.
This means that the network starting point is 133.33.0.0.
Now to find the end, we take the network component of the IP address again, so where the subnet mask is all 1s, that's what we start with.
And to work out the end of the network, we take the host component, so where the subnet mask is 0s, and we have all 1s in the IP address.
So the ending part of this network is 133.33.255.255.
So the starting address of a network is the network component of the IP address, identified with the subnet mask, and then all 0s for the host part of the IP address, and the ending address of the network is the network part of the IP address to start with, and then for the host component, we have all 1s.
So this is how subnet masks work.
They're used to identify which part of an IP address is the network part and which is the host part.
As long as the network part for two different IP addresses is the same, then we know that both of those IP addresses are on the same IP network, and this is essential so that the machine can identify when it can send data directly on the same local network, or when IP routing needs to be used to transfer packets across different intermediate networks.
So it's how your local device, your local laptop, knows to send packets to your internet router for Netflix or AWS, rather than trying to look for both of those systems locally on your local area network.
And that's how a router makes that decision too, when it's looking where to forward packets to.
So using subnet masks and IP addresses, it's how a lot of the intelligence of layer 3 is used.
Now next, I want to spend some time looking at route tables and routes.
Let's step through an example of data moving from you to AWS, and I want to keep focus for now on how a router makes a decision where to send data.
Packets that you create for AWS will move from your house into your internet provider across the internet, potentially even between countries, and then finally arrive at AWS.
Let's step through a simple example.
So we start with our house on the left.
Next, we have our internet provider known as an ISP or Internet Service Provider, and let's call this Meow ISP, and then we have three destination networks.
We have AWS, our ISP's upstream provider, and then Netflix.
Now we want to communicate with AWS, and so we create a packet on our local device, which has our IP address 1.3.3.7 as the source IP address, and it has a destination IP address of 52.217.13.37.
Now you're going to have an internet router within your home, and this is where your device will send all of its data through.
That router has what's known as a default route, which means all IP traffic is sent to it on its way to Meow ISP.
Now I'll explain what a default route is in a second.
For now, just assume that all data that you generate within your local network by default is sent through to your internet service provider.
So now the packet that you've generated is inside your internet service provider on a router, and this router has multiple network interface cards connecting to all of those remote networks.
Now let's assume in those remote networks is another router, and each of these routers uses the dot 1 IP address in each of those networks.
So how does the ISP router inside Meow ISP know where to forward your data to?
Well, it uses routes and route tables.
Every router will have at least one route table.
It could have more, which are attached to individual network interfaces, but for now let's keep things simple and assume that the router within our ISP has a single route table, and it will look something like this.
A route table is a collection of routes.
Each row in this table is an example route.
It will have a destination field, and it will have a next hop or a target field.
What happens is that every packet which arrives at this router, the router will check the packet's destination.
What IP address is this packet destined for?
And in this example, it's 52.217.13.37.
Now at this point, the router will look for any routes in the route table which match the destination IP address of this packet.
If multiple routes match, then it will prefer ones which are more specific.
The two routes in yellows at the top and the bottom, these are examples of fairly specific routes.
The one in blue in the middle is the inverse, this is not a specific route.
The larger the prefix, so the higher the number after the slash, the more specific the route.
So a slash 32 is the most specific, and a slash 0 is the least specific.
A slash 32 actually represents one single IP address, and a slash 0, well this represents all IP addresses.
A slash 24 means that the first 24 bits are for the network, and the last 8 bits are for the host.
So this matches a network of 256 IP addresses.
So for this packet that we have with the destination of 52.217.13.37, we've got two routes which match.
The top route, which is 52.217.13.0/24, that network contains the IP address which our packet is destined for.
So this matches.
But also the middle route, 0.0.0.0/0, this matches, because this matches all IP addresses.
The middle route is known as a default route.
I mentioned before the packets from our home network on the left arrive at our ISP because there's a default route.
Well this 0.0.0.0/0 is an example of a default route.
This will match if nothing else does.
Because we have two more specific routes in this route table, so the top and bottom, if either of those match, they will be selected rather than the default route in the middle.
In this case the bottom route doesn't match our particular packet, only the top one matches.
And so the top route will be selected because it's more specific than the default route.
Now for the route that's selected, so the top route, it has a next hop or target field.
This is the IP address which the packet is going to be forwarded to, to get one step closer through to its destination.
Or in this case to arrive at the actual destination.
And so the packet is forwarded through to this address.
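This route selection process, longest-prefix matching, can be sketched with the `ipaddress` module. The route table below mirrors the lesson's example; the next-hop addresses are made up for illustration (following the lesson's convention of routers using the .1 address, with 192.0.2.1 as a placeholder default hop from the documentation range):

```python
import ipaddress

# A route table: (destination network, next hop). Next hops are hypothetical.
routes = [
    ("52.217.13.0/24", "52.217.13.1"),  # specific route towards AWS
    ("0.0.0.0/0", "192.0.2.1"),         # default route, matches everything
    ("52.43.215.0/24", "52.43.215.1"),  # another specific route
]

def next_hop(dest_ip, routes):
    dest = ipaddress.ip_address(dest_ip)
    # Keep only the routes whose destination network contains this IP.
    matching = [(ipaddress.ip_network(net), hop) for net, hop in routes
                if dest in ipaddress.ip_network(net)]
    # The highest prefix length (most specific route) wins.
    return max(matching, key=lambda r: r[0].prefixlen)[1]

assert next_hop("52.217.13.37", routes) == "52.217.13.1"  # /24 beats /0
assert next_hop("8.8.8.8", routes) == "192.0.2.1"         # only default matches
```

For the packet destined to 52.217.13.37, both the /24 route and the default /0 route match, and the /24 is selected because it's more specific.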
Routing as a process is where packets are forwarded or routed hop by hop across the internet from source to destination.
Route tables are the thing which enables this.
Route tables can be statically populated, or there are protocols such as BGP or the border gateway protocol, which allow routers to communicate with each other to exchange which networks they know about.
And this is how the core of the internet functions.
One important thing that you need to understand though, is that when our ISP router is forwarding the packet through to the AWS router, it's forwarding it at layer 2.
It wraps the packet in a frame.
The packet doesn't change.
The frame though, it has the AWS router's MAC address as its destination.
That's how the packet gets to the AWS router.
But how do we determine the MAC address of the AWS router in this example?
For that we use something called the address resolution protocol, and that's what I'm going to be covering next.
This is the end of part 2 of this lesson.
It's a pretty complex lesson, and so I wanted to give you the opportunity to take a small break, maybe stretch your legs, or make another coffee.
Part 3 will continue immediately from this point, so go ahead, complete this video, and when you're ready, I look forward to you joining me in part 3.
-
-
learn.cantrill.io
-
Welcome back.
Now that we've covered the physical and data link layers, next we need to step through layer 3 of the OSI model, which is the network layer.
As I mentioned in previous videos, each layer of the OSI model builds on the layers below it, so layer 3 requires one or more operational layer 2 networks to function.
The job of layer 3 is to get data from one location to another.
When you're watching this video, data is being moved from the server hosting the video through to your local device.
When you access AWS or stream from Netflix, data is being moved across the internet, and it's layer 3 which handles this process of moving data from a source to a destination.
To appreciate layer 3 fully, you have to understand why it's needed.
So far in the series, I've used the example of a few friends playing a game on a local area network.
Now what if we extended this, so now we have 2 local area networks and they're located with some geographic separation.
Let's say that one is on the east coast of the US and another is on the west coast, so there's a lot of distance between these 2 separate layer 2 networks.
Now LAN1 and LAN2 are isolated layer 2 networks at this point.
Devices on each local network can communicate with each other, but not outside of that local layer 2 network.
Now you could pay for and provision a point-to-point link across the entire US to connect these 2 networks, but that would be expensive, and if every business who had multiple offices needed to use point-to-point links, it would be a huge mess and wouldn't be scalable.
Additionally, each layer 2 network uses a shared layer 2 protocol.
In the example so far, this has been Ethernet.
If networks use only layer 2, then to communicate with each other they need to use the same layer 2 protocol.
Now not everything uses the same layer 2 protocol, and this presents challenges, because you can't simply join two layer 2 networks together which use different layer 2 protocols and have them work out of the box.
With the example which is on screen now, imagine if we had additional locations spread across the continental US.
Now in between these locations, let's add some point-to-point links, so we've got links in pink which are cabled connections, and these go between these different locations.
Now we also might have point-to-point links which use a different layer 2 protocol.
In this example, let's say that we had a satellite connection between 2 of these locations.
This is in blue, and this is a different layer 2 technology.
Now Ethernet is one layer 2 technology which is generally used for local networks.
It's the most popular wired connection technology for local area networks.
But for point-to-point links and other long distance connections, you might also use things such as PPP, MPLS or ATM.
Not all of these use frames with the same format, so we need something in common between them.
Layer 2 is the layer of the OSI stack which moves frames, it moves frames from a local source to a local destination.
So to move data between different local networks, which is known as internetworking (this is where the name internet comes from), we need layer 3.
Layer 3 is this common protocol which can span multiple different layer 2 networks.
Now layer 3 or the network layer can be added onto one or more layer 2 networks, and it adds a few capabilities.
It adds the internet protocol or IP.
You get IP addresses which are cross-networking addresses, which you can assign to devices, and these can be used to communicate across networks using routing.
So the device that you're using right now, it has an IP address.
The server which stores this video, it too has an IP address.
And the internet protocol is being used to send requests from your local network across the internet to the server hosting this video, and then back again.
IP packets are moved from source to destination across the internet through many intermediate networks.
Devices called routers, which are layer 3 devices, move packets of data across different networks.
They encapsulate a packet inside of an ethernet frame for that part of the journey over that local network.
Now encapsulation just means that an IP packet is put inside an ethernet frame for that part of the journey.
Then when it needs to be moved into a new network, that particular frame is removed, and a new one is added around the same packet, and it's moved onto the next local network.
So as this video data is moving from my server to you, it's been wrapped up in frames.
Those frames are stripped away, new frames are added, all while the packets of IP data move from my video server to you.
So that's why IP is needed at a high level: to allow you to connect to remote networks, crossing intermediate networks on the way.
Now over the coming lesson, I want to explain the various important parts of how layer 3 works.
Specifically IP, which is the layer 3 protocol used on the internet.
Now I'm going to start with the structure of packets, which are the data units used within the internet protocol, which is a layer 3 protocol.
So let's take a look at that next.
Now packets in many ways are similar to frames.
It's the same basic concept.
They contain some data to be moved, and they have a source and destination address.
The difference is that with frames, both the source and destination are generally local.
With an IP packet, the destination and source addresses could be on opposite sides of the planet.
During their journey from source to destination, packets remain the same.
As they move across layer 2 networks, they're placed inside frames, which is known as encapsulation.
The frame is specific to the local network that the packet is moving through, and changes every time the packet moves between networks.
The packet though doesn't change.
Normally it's constant for its entire trip between source and destination.
Although there are some exceptions that I'll be detailing in a different lesson, when I talk about things like network address translation.
Now there are two versions of the internet protocol in use.
Version 4, which has been used for decades, and version 6, which adds more scalability.
And I'll be covering version 6 and its differences in a separate lesson.
An IP packet contains various different fields, much like frames that we discussed in an earlier video.
At this level there are a few important things within an IP packet which you need to understand, and some which are less important.
Now let's just skip past the less relevant ones.
I'm not saying any of these are unimportant, but you don't need to know exactly what they do at this introductory level.
Things which are important though, every packet has a source and destination IP address field.
The source IP address is generally the device IP which generates the packet, and the destination IP address is the intended destination IP for the packet.
In the previous example we have two networks, one east coast and one west coast.
The source might be a west coast PC, and the destination might be a laptop within the east coast network.
But crucially these are both IP addresses.
There's also the protocol field, and this is important because IP is layer 3.
It generally contains data provided by another layer, a layer 4 protocol, and it's this field which stores which protocol is used.
So examples of protocols which this might reference are things like ICMP, TCP or UDP.
If you're storing TCP data inside a packet this value will be 6, for pings, which use ICMP, this value will be 1, and if you're using UDP as a layer 4 protocol then this value will be 17.
This field means that the network stack at the destination, specifically the layer 3 component of that stack, will know which layer 4 protocol to pass the data into.
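These protocol numbers are standard IANA assignments, and Python's `socket` module exposes them as constants, which makes them easy to check:

```python
import socket

# The protocol field carries the IANA protocol number of the
# layer 4 payload. Python exposes these standard numbers directly.
assert socket.IPPROTO_ICMP == 1   # pings (ICMP)
assert socket.IPPROTO_TCP == 6    # TCP segments
assert socket.IPPROTO_UDP == 17   # UDP datagrams
```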
Now the bulk of the space within a packet is taken up with the data itself, something that's generally provided from a layer 4 protocol.
Now lastly there's a field called time to live or TTL.
Remember the packets will move through many different intermediate networks between the source and the destination, and this is a value which defines how many hops the packet can move through.
It's used to stop packets looping around forever.
If for some reason they can't reach their destination then this defines a maximum number of hops that the packet can take before being discarded.
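The TTL mechanism can be sketched in a few lines. This is purely illustrative (a toy `forward` function, not real router code):

```python
# Each router decrements TTL before forwarding; when it reaches zero
# the packet is discarded, which stops packets looping forever.
def forward(packet):
    packet["ttl"] -= 1
    if packet["ttl"] <= 0:
        return None  # the router discards the packet
    return packet

packet = {"dst": "52.217.13.37", "ttl": 3}
hops = 0
while packet is not None:
    packet = forward(packet)
    hops += 1

# The packet survived two forwards and was discarded on the third hop.
assert hops == 3
```

In real networks, the router that discards an expired packet typically sends an ICMP "time exceeded" message back to the source, which is the behaviour tools like traceroute rely on.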
So just in summary a packet contains some data which it carries generally for layer 4 protocols.
It has a source and destination IP address, the IP protocol implementation which is on routers moves packets between all the networks from source to destination, and it's these fields which are used to perform that process.
As packets move through each intermediate layer 2 network, it will be inserted or encapsulated in a layer 2 frame, specific for that network.
A single packet might exist inside tens of different frames throughout its route to its destination, one for every layer 2 network or layer 2 point to point link which it moves through.
Now IP version 6 from a packet structure is very similar, we also have some fields which matter less at this stage.
They are functional but to understand things at this level it's not essential to talk about these particular fields.
And just as with IP version 4, IP version 6 packets also have both source and destination IP address fields.
But these are bigger; IP version 6 addresses are larger, which means there are more possible IP version 6 addresses.
And I'm going to be covering IP version 6 in detail in another lesson.
It means though that space taken in a packet to store IP version 6 source and destination addresses is larger.
Now you still have data within an IP version 6 packet and this is also generally from a layer 4 protocol.
Now strictly speaking, if this were drawn to scale then it would run off the bottom of the screen, but let's just keep things simple.
We also have a similar field to the time to live value within IP version 4 packets, which in IP version 6 this is called the hop limit.
Functionally these are similar, it controls the maximum number of hops that the packet can go through before being discarded.
So these are IP packets, generally they store data from layer 4 and they themselves are stored in one or more layer 2 frames as they move around networks or links which fall on the internet.
Okay so this is the end of part 1 of this lesson.
It was getting a little bit on the long side and I wanted to give you the opportunity to take a small break, maybe stretch your legs or make a coffee.
Now part 2 will continue immediately from this point, so go ahead complete this video and when you're ready I look forward to you joining me in part 2.
-
-
learn.cantrill.io
-
Welcome back and in this part of the lesson series I'm going to be discussing layer one of the seven layer OSI model which is the physical layer.
Imagine a situation where you have two devices in your home let's say two laptops and you want to play a local area network or LAN game between those two laptops.
To do this you would either connect them both to the same Wi-Fi network, or you'd use a physical networking cable. To keep things simple in this lesson, I'm going to use the example of a physical connection between these two laptops, so both laptops have a network interface card and they're connected using a network cable.
Now for this part of the lesson series we're just going to focus on layer one which is the physical layer.
So what does connecting this network cable to both of these devices give us?
Well, we're going to assume it's a copper network cable, so it gives us a point-to-point electrical shared medium between these two devices: a piece of cable that can be used to transmit electrical signals between these two network interface cards.
Now the physical medium can be copper, in which case it uses electrical signals; it can be fiber, in which case it uses light; or it can be Wi-Fi, in which case it uses radio frequencies.
Whatever type of medium is used, it needs a way of carrying unstructured information, and so we define layer one or physical layer standards, which are also known as specifications. These define how to transmit and receive a raw bit stream, so ones and zeros, between a device and a shared physical medium, in this case the piece of copper networking cable between our two laptops. So the standard defines things like voltage levels, timings, data rates, distances which can be used, the method of modulation, and even the connector type on each end of the physical cable.
The specification means that both laptops have a shared understanding of the physical medium so the cable.
Both can use this physical medium to send and receive raw data.
For copper cable, electrical signals are used, so a certain voltage is defined as binary 1, say +1 volt, and a certain voltage as binary 0, say -1 volt.
If both network cards in both laptops agree, because they use the same standard, then it means that zeros and ones can be transmitted onto the medium by the left laptop and received from the medium by the right laptop. This is how two networking devices, or more specifically two network interface cards, can communicate at layer one.
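To make that idea concrete, here's a toy sketch in Python of what a shared layer one encoding standard means. This is not a real line code (real standards such as Ethernet use schemes like Manchester encoding); the +1V/-1V mapping is just the illustrative values from above:

```python
def encode(bits):
    """Transmitting NIC: map each bit to a voltage level per the agreed standard."""
    return [+1.0 if bit == 1 else -1.0 for bit in bits]

def decode(voltages):
    """Receiving NIC: apply the same standard in reverse to recover the bits."""
    return [1 if volts > 0 else 0 for volts in voltages]

data = [1, 0, 1, 1, 0]
signal = encode(data)            # raw bit stream placed onto the copper medium
assert decode(signal) == data    # the far laptop recovers the original zeros and ones
```

Because both network cards apply the same specification, the receiver can unambiguously recover the transmitted bit stream.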
If I refer to a device as layer X so for example layer one or layer three then it means that the device contains functionality for that layer and below so a layer one device just understands layer one and a layer three device has layers one, two and three capability.
Now try to remember that because it's going to make much of what's coming over the remaining videos of this series much easier to understand.
So just to reiterate what we know to this point: we've taken two laptops, we've got two layer one network interfaces, and we've connected them using a copper cable, a copper shared medium. Because we're using a layer one standard, both of these cards understand the specific way that binary zeros and ones are transmitted onto the shared medium.
Now on the previous screen I used the example of two devices, so two laptops with network interface cards communicating with each other.
Two devices can use a point-to-point layer one link, which is a fancy way of talking about a network cable. But what if we need to add more devices? A two-player game isn't satisfactory; we need to add two more players for a total of four.
Well, we can't really connect these four devices to a network cable with only two connectors, but what we can do is add a networking device called a hub, in this example a four-port hub. The laptops on the left and right, instead of being connected to each other directly, are now connected to two ports of that hub. Because it's a four-port hub, this also means that it has two ports free, and so it can accommodate the top and bottom laptops.
Now hubs have one job anything which the hub receives on any of its ports is retransmitted to all of the other ports including any errors or collisions.
Conceptually a hub creates a four connector network cable one single piece of physical medium which four devices can be connected to.
Now there are a few things that you really need to understand at this stage about layer one networking.
First, there are no individual device addresses at layer one. One laptop cannot address traffic directly at another; it's a broadcast medium. The network card on the device on the left transmits onto the physical medium and everything else receives it. It's like shouting into a room with three other people and not using any names.
Now this is a limitation, but it is fixed by layer two, which we'll cover soon in this lesson series.
The other consideration is that it is possible that two devices might try to transmit at once, and if that happens there will be a collision. This corrupts any transmissions on the shared medium. Only one thing can transmit at once on a shared medium and be legible to everything else; if multiple things transmit on the same layer one physical medium, then collisions occur and render all of the information useless.
Now related to this, layer one has no media access control, so no method of controlling which devices can transmit. If you decide to use a layer one architecture, so a hub and all of the devices which are shown on screen now, then collisions are almost guaranteed, and the likelihood increases the more layer one devices are present on the same layer one network.
Layer one is also not able to detect when collisions occur. Remember, these network cards are just transmitting via voltage changes on the shared medium; it's not digital. They can in theory all transmit at the same time, and physically that's okay. It means that nobody will be able to understand anything, but at layer one it can happen. So layer one is dumb: it doesn't have any intelligence beyond defining the standards that all of the devices use to transmit onto the shared medium and receive from the shared medium. Because of how layer one works, and because a hub simply retransmits everything, even collisions, the layer one network is said to have one broadcast domain and one collision domain. This means that layer one networks tend not to scale very well; the more devices are added to a layer one network, the higher the chance of collisions and data corruption.
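The hub behaviour and the collision problem just described can be sketched with a toy model. This is a simplified illustration, not real networking code:

```python
def hub_retransmit(sender_port, frame, num_ports=4):
    """A hub blindly repeats anything received on one port out of every
    other port - no addressing, no filtering, errors and all."""
    return {port: frame for port in range(num_ports) if port != sender_port}

def shared_medium(transmissions):
    """On a single shared medium only one transmission at a time is legible;
    simultaneous transmissions overlap and corrupt each other."""
    return transmissions[0] if len(transmissions) == 1 else "COLLISION"

# The left laptop (port 0) transmits; all three other laptops receive it.
assert hub_retransmit(0, "hello") == {1: "hello", 2: "hello", 3: "hello"}

# Two laptops transmit at once: everything on the medium is corrupted.
assert shared_medium(["hello"]) == "hello"
assert shared_medium(["hello", "world"]) == "COLLISION"
```

Notice there is nothing in the hub model that could prevent or even detect the collision; that intelligence only arrives with layer two.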
Now layer one is fundamental to networking because it's how devices actually communicate at a physical level but for layer one to be useful for it to be able to be used practically for anything else then we need to add layer two and layer two runs over the top of a working layer one connection and that's what we'll be looking at in the next part of this lesson series.
As a summary of the position that we're in right now, assuming that we have only layer one networking: we know that layer one focuses on the physical shared medium, and on the standards for transmitting onto that medium and receiving from it. So all devices which are part of the same layer one network need to be using the same layer one medium and device standards. Generally this means a certain type of network card and a certain type of cable, or for Wi-Fi a certain type of antenna and frequency ranges. What layer one doesn't provide is any form of access control over the shared medium, and it doesn't give us uniquely identifiable devices. This means we have no method for device-to-device communication; everything is broadcast using transmission onto the shared physical medium.
Now in the next video of this series I'm going to be stepping through layer two, which is the data link layer. This is the layer which adds a lot of intelligence on top of layer one and allows device-to-device communication, and it's layer two which is used by all of the upper layers of the OSI model to allow effective communication. But it's important that you understand how layer one works, because this is physically how data moves between all devices, and so you need a good fundamental understanding of layer one.
Now this seems like a great place to take a break, so I'm going to end this video here. Go ahead and complete this video, and then when you're ready, I look forward to you joining me in the next part of this lesson series, where we'll be looking at layer two, the data link layer.
Welcome back, this is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
Now the only thing that remains is just to test out this configuration.
And to do that we're going to launch an EC2 instance into the WebA subnet.
So click on services and just type EC2 to move across to the EC2 console.
Now once we're on the EC2 console, just click on launch instance.
Then you'll be taken to the launch instance console.
Into the name box, just go ahead and type a4l-bastion.
Scroll down and we're going to create a bastion instance using Amazon Linux.
So click on Amazon Linux.
In the dropdown below, go ahead and select the latest version of Amazon Linux.
Just make sure that it does say free tier eligible on the right of this dropdown.
Assuming that's all good, just below that make sure that in the architecture dropdown it's set to 64-bit x86.
Moving down further still, under instance type, just make sure that this is set to a free tier eligible instance.
It should default to T2.micro or T3.micro.
Depending on your region, either of these could be free tier eligible.
In my case it's T2.micro, but whatever yours shows, just make sure that it's a similar size and says free tier eligible.
Now directly below that, under key pair, just click in this box.
You should at this point in the course have a key pair created called a4l.
If you do, go ahead and select that key pair in the box.
If you don't, don't worry, you can just go ahead and click on create new key pair.
Enter a4l into the key pair name, select RSA, and then select PEM for the private key format and click on create key pair.
This will download the key pair to your local machine and then you can continue following along with this video.
So select that from the dropdown.
Directly below, under network settings click on edit.
This instance is going to go into the Animals for Life VPC.
So click on the VPC dropdown and select a4l-vpc1.
Directly below that, click in the subnet dropdown and we want to go ahead and look for sn-web-a.
So select the weba subnet.
This should change both of the dropdowns below.
So auto-assign public IP and auto-assign IPv6 IP should both change to enable.
So just make sure that both of those are set to enable.
Directly below this, make sure that create security group is checked.
We're going to create a new security group.
Under security group name, just go ahead and enter a4l-bastion-sg and then put that same text in the description box directly below.
Now all of these defaults should be good.
Just make sure it's set to SSH, source anywhere.
Make sure that 0.0.0.0/0 and ::/0 are both present directly below source.
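Those two values are the IPv4 and IPv6 "anywhere" CIDR blocks. A quick check with Python's standard ipaddress module shows why they match any source address (the specific addresses used here are arbitrary documentation examples):

```python
import ipaddress

anywhere_v4 = ipaddress.ip_network("0.0.0.0/0")   # matches every IPv4 address
anywhere_v6 = ipaddress.ip_network("::/0")        # matches every IPv6 address

assert ipaddress.ip_address("203.0.113.10") in anywhere_v4
assert ipaddress.ip_address("2001:db8::1") in anywhere_v6
```

That's why a security group rule with these sources allows SSH from any location; in a production environment you would normally restrict the source to a known admin IP range.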
Everything else looks good.
We can accept the rest of the defaults.
Just go ahead and click on launch instance.
Then click on instances at the top left of the screen.
At this point the instance is launching and we'll see that a4l-bastion is currently running.
We'll see the status check is showing initializing.
So we need to give this instance a few minutes to fully provision.
So go ahead and pause this video and we're going to resume it once this instance is ready to go.
And it has two out of two status checks.
So our instance is now showing two out of two status checks.
And that means everything's good and we're ready to connect.
Now if you select the instance, you'll see in the details pane below how it has a public IP version 4 address, a private IP version 4 address, public and private IP version 4 DNS.
And if we scroll down, you'll see lots of other information about this instance.
Now we're only concerned with the public IP version 4 address.
We're going to go ahead and connect to this instance this time using a local SSH client on our machine.
So right click and then select connect.
Now if we want to quickly connect into this instance, we can choose to use EC2 instance connect, which is a way to connect into the instance using a web console.
Now this does need an instance with a public IP version 4 address, but we have allocated a public address.
So if we wanted to, we can just make sure that the username is correct.
It should be EC2-user.
If we hit connect, it will open up a connection to this instance using a web console.
And this is often much easier to connect to EC2 instances if you don't have access to a local SSH client, or if you just want to quickly connect to perform some administration.
We can also connect with an SSH client.
If we select SSH client, it gives us the commands to run in order to connect to this EC2 instance.
So right at the bottom is an example connect command.
So ssh, then we pick the key to use, and then user@ followed by the public IP version 4 DNS.
So if we copy that into our clipboard and then move across to our terminal or command prompt, move into the folder where you downloaded the SSH key pair to, in my case Downloads, and paste in that command and press enter, that should connect us to the EC2 instance.
We'll have to verify the fingerprint, so we need to verify the authenticity of this host.
For this purpose, we can just go ahead and answer yes and press enter.
Now if it's the first time we're connecting using a particular key, and if you're running either macOS or Linux, you might be informed that the permissions on the key are too open.
In this case, the permissions are 0644, which is too open, and we get this error.
Now it's possible to correct that if we move back to the AWS console.
It also gives us the command to correct these permissions.
So chmod 400, followed by the name of the key.
So I'm going to copy that into my clipboard and move back to my terminal, paste that in and press enter, and that will correct those permissions.
Now if I get the connection command again, so copy that into my clipboard, and this time I'll paste it in and press enter and now I will be connected to this EC2 instance.
Now if you're doing this demonstration on Windows 10, you probably won't have to correct those permissions.
This is something specific to macOS or Linux.
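To see why 0644 counts as "too open" and why chmod 400 fixes it, the individual permission bits can be inspected with Python's standard stat module:

```python
import stat

too_open = 0o644   # owner read/write, group read, others read
assert too_open & stat.S_IRGRP       # the group can read the private key...
assert too_open & stat.S_IROTH       # ...and so can everyone else - SSH refuses this

fixed = 0o400      # what "chmod 400 key.pem" sets: owner read only
assert fixed & stat.S_IRUSR                          # you can still read the key
assert not fixed & (stat.S_IRGRP | stat.S_IROTH)     # nobody else can
```

SSH rejects keys readable by other users because a private key is only private if you alone can read it.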
So whenever you're connecting to EC2 instances which have a public IP version 4 address, you've always got the ability to use either EC2 instance connect or a local SSH client.
Now the third option which is session manager, this is a way that you can connect to instances even if they don't have public IP version 4 addressing.
And I'll be detailing this product fully later on in the course because there is some additional configuration that's required.
Now this bastion host, it's an EC2 instance and it does fall under the free tier.
So because it's a T2.micro or whatever type of instance you picked which falls under the free tier, you're not going to be billed for any usage of this instance in a given month.
Now as a general rule, as you're moving through the course, if you're ever intending to take a break, then you always have the option of deleting all of the infrastructure that you've created within a specific demo lesson.
So most of the more complex demo lessons that you'll have moving through the course, at the end of every demo lesson there will be a brief set of steps where I explain how to clean up the account and return it into the same state as it was at the start of the lesson.
But in certain situations I might tell you that one option is not to delete the infrastructure.
Whether you do delete it or not depends on whether you're intending to complete the next demo straight away or whether you're taking a break.
Now in this particular case I'm going to demonstrate exactly how you can clear up this infrastructure. In the next demo lesson you're going to continue using this infrastructure, but I'm going to demonstrate how you can automate the creation using a CloudFormation template.
To clear up this infrastructure though, go ahead, right click on this bastion host and select terminate instance.
You'll need to click terminate to confirm and that will terminate and delete the instance.
You won't be charged for any further usage of that instance.
We need to wait for that instance to fully terminate, so pause the video and wait for it to move into a terminated state and then we can continue.
So that instance is terminated and now that that's done, we can click on services and move across to the VPC console and we're going to delete the entire Animals for Life VPC.
And don't worry, in the next demo lesson I'll explain how we can automate the creation.
So from now on in the course we're going to be using much more automation: anything that you've done previously, we're going to automate the creation of, and focus your valuable time only on the things that you've just learned.
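As a preview of that automation, the internet gateway and routing configuration from this demo could be sketched as a CloudFormation template fragment like the one below. This is illustrative only: the logical resource names are made up, and it assumes `VPC` and `SubnetWebA` resources are defined elsewhere in the same template (Web B and Web C would get associations following the same pattern).

```yaml
# Illustrative sketch - not the course's actual template.
Resources:
  InternetGateway:
    Type: AWS::EC2::InternetGateway
  IGWAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway
  WebRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
  DefaultRouteV4:
    Type: AWS::EC2::Route
    DependsOn: IGWAttachment
    Properties:
      RouteTableId: !Ref WebRouteTable
      DestinationCidrBlock: "0.0.0.0/0"
      GatewayId: !Ref InternetGateway
  DefaultRouteV6:
    Type: AWS::EC2::Route
    DependsOn: IGWAttachment
    Properties:
      RouteTableId: !Ref WebRouteTable
      DestinationIpv6CidrBlock: "::/0"
      GatewayId: !Ref InternetGateway
  WebARouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref WebRouteTable
      SubnetId: !Ref SubnetWebA
```

The template mirrors exactly the console steps from this demo: create and attach the gateway, create the route table, add both default routes, and associate the table with the web subnets.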
So click on your VPCs.
It should list two VPCs, the default one and the Animals for Life VPC.
Select the Animals for Life VPC, click on Actions and then delete the VPC.
Now this is going to delete all of the resources that are associated with this VPC.
So the internet gateway, the route tables and all of the subnets that you've created as part of the demo lessons to this point in the course.
So go ahead and type delete and then click delete to confirm that process and that will fully tidy up the account and return it into the same state as it was at the start of the VPC section.
Now with that being said, this is the end of this lesson.
You've successfully converted three subnets, so Web A, Web B and Web C, to be public. You've done that by creating an internet gateway, associating that with the VPC, creating a route table, associating that with those subnets, adding two routes pointing at the internet gateway, and then configuring those subnets to allocate a public IP version 4 address to any resources launched into them.
So that's the same set of steps that you'll need to perform to make any subnets public, from an IP version 4 perspective, in future.
So this is going to be the same tasks that you would use in larger production projects.
Although in production you would probably automate it, and I'll show you how to do that as you move through the course.
Now at this point, you've finished everything that you need to do in this demo lesson, so great job.
You've actually created something that is production ready and production useful.
Over the remainder of this section of the course, we're going to refine the design that we've got and add additional capabilities.
So in the upcoming lessons, I'll be talking about network address translation and how that can be used to give private EC2 instances access to the internet for things like software updates.
We'll be talking about the security of subnets using network access control lists, known as NACLs, and much, much more.
But you're doing a fantastic job so far.
This is not a trivial thing that you've implemented to this point.
So really great job.
But at this point, just go ahead and complete the video.
And then when you're ready, I look forward to you joining me in the next lesson.
Welcome back.
This demo is going to bring together some really important theory and architecture that you've learned over the past few lessons.
What we're starting this demo lesson with is this architecture.
We have our VPC, the Animals for Life VPC in US East 1.
It uses the 10.16.0.0/16 CIDR range.
It has 12 subnets created inside it, over three AZs with four tiers, Reserve, DB, Application and Web.
Now currently all the subnets are private and can't be used for communication with the internet or the AWS public zone.
In this demo we want to reconfigure the VPC to allow that.
So the first step is to create an internet gateway and attach it.
To do that, I'm going to move across to my desktop.
Now to do this in your environment, you'll need the VPC and subnet configuration as you set it up in the previous demo lesson.
So that configuration needs to be in place already.
You need to be logged in as the IAM admin user of the management account of the organization and have the Northern Virginia region selected, so us-east-1.
So go ahead and move across to the VPC console.
Now this should already be in the recently visited services because you were using this in the previous demo lesson, but if it's not visible just click in the services drop down, type VPC and then click to move to the VPC console.
Now if you do still have the configuration as it was at the end of the previous demo lesson, you should be able to click on subnets on the menu on the left and see a list of lots of subnets.
You'll see the ones for the default VPC without a name.
And if you have the correct configuration, you should see a collection of 12 subnets, 3 application subnets, 3 database subnets, 3 reserved subnets and then 3 web subnets.
So all of these should be in place within the Animals for Life VPC in order to do the tasks within this demo lesson.
So I'm going to assume from this point onwards that you do have all of these subnets created and configured.
Now what we're going to be doing in this demo lesson is configuring the 3 web subnets, so web A, web B and web C, to be public subnets.
Being a public subnet means that you can launch resources into the subnet, have them allocated with a public IP version 4 address and have connectivity to and from the public IP version 4 internet.
And in order to enable that functionality, there are a number of steps that we need to perform and I want you to get the practical experience of implementing these within your own environment.
Now the first step to making subnets public is that we need an internet gateway attached to this VPC.
So internet gateways, as I talked about in the previous theory lesson, are highly available gateway objects which can be used to allow public routing to and from the internet.
So we need to create one.
So let's click on internet gateways on the menu on the left.
There'll already be an internet gateway in place for the default VPC.
Remember when you created the default VPC, all this networking infrastructure is created and configured on your behalf.
But because we created a custom VPC for animals for life, we need to do this manually.
So to do that, go ahead and click on create internet gateway.
We're going to call the internet gateway A4L, so animals for life, VPC 1, which is the VPC we're going to attach it to and then IGW for internet gateway.
So A4L-VPC1-IGW.
Now that's the only information that we need to enter, so scroll down and click on create internet gateway.
Internet gateways are initially not attached to a VPC, and we can tell that because the state is shown as detached.
We need to attach this to the animals for life VPC.
So click on actions and then attach to VPC inside the available VPCs box.
Just click and then select a4l-vpc1.
Once selected, go ahead and click on attach internet gateway and that will attach our brand new internet gateway to the animals for life VPC.
And that means that it's now available within that VPC as a gateway object, which gives the VPC the capability to communicate to and receive communications from the public internet and the AWS public zone.
Now the next step is that we want to make all the subnets in the web tier public, so the services deployed into these subnets can take advantage of this functionality.
So we want the web subnets to be able to communicate to and receive communications from the public internet and AWS public services.
Now there are a number of steps that we need to do to accomplish this.
We need to create a route table for the public subnets.
We need to associate this route table with the three public subnets, so web A, web B and web C and then we need to add two routes to this route table.
One route will be a default route for IP version 4 traffic and the other will be a default route for IP version 6 traffic.
And both of these routes for their target will be pointing at the internet gateway that you've just created and attached to this VPC.
Now this will configure the VPC router to forward any data intended for anything not within our VPC to the internet gateway.
Finally, on each of these web subnets will be configuring the subnet to auto assign a public IP version 4 address and that will complete the process of making them public.
So let's perform all of these sets of configuration.
So now that we're back at the AWS console, we need to create a route table.
So go ahead and click on route tables on the menu on the left and then we're going to create a new route table.
First we'll select the VPC that this route table will belong to, and it's going to be the Animals for Life VPC, so go ahead and select that VPC. Then we're going to give this route table a name.
And I like to keep the naming scheme consistent, so we're going to use A4L for Animals for Life, then a hyphen,
VPC1 because this is the VPC the route table will belong to, then hyphen RT for route table, and then hyphen web, because this route table is going to be used for the web subnets. So that's a4l-vpc1-rt-web.
So go ahead and create this route table and click route tables on the menu on the left.
If we select the route table that we've just created, so that's the one that's called a4l-vpc1-rt-web, and then just expand this overview area at the bottom.
We'll be able to see all the information about this route table.
Now there are a number of areas of this which are important to understand.
One is the routes area which lists all the routes on this route table and the other is subnet associations.
This determines which subnets this route table is associated with.
So let's go to subnet associations and currently we can see that it's not actually associated with any subnets within this VPC.
We need to adjust that so go ahead and edit those associations and we're going to associate it with the three web subnets.
So you need to select web A, web B and web C.
Now notice how all those are currently associated with the main route table of the VPC.
Remember a subnet can only be associated with one route table at a time.
If you don't explicitly associate a route table with a subnet then it's associated with the main route table.
We're going to change that.
We're going to explicitly associate this new route table with the web A, web B and web C subnets.
So go ahead and save that.
So now this route table has been associated with web A, web B and web C.
Those subnets are no longer associated with the main route table of the VPC.
So now that we've configured the associations, let's move to routes.
And we can see that this route table has two local routes.
We've got one for the IP version 4 CIDR of the VPC and one for the IP version 6 CIDR of the VPC.
So these two routes on this route table will mean that web A, web B and web C will know how to direct traffic towards any other IP version 4 or IP version 6 addresses within this VPC.
Now these local routes can never be adjusted or removed, but what we can do is add additional routes.
So we're going to add two routes, a default route for IP version 4 and a default route for IP version 6.
So we'll do that.
We'll start with IP version 4.
So we'll edit those routes and then we'll add a route.
The format for the IP version 4 default route is 0.0.0.0/0.
And this means any IP addresses.
Now I've talked elsewhere in the course about how there is a priority to routing.
Within a VPC, a more specific route always takes priority.
So this local route, the /16, is more specific than this default route.
So this default route will only affect IP version 4 traffic, which is not matched by this local route.
So essentially anything which is IP version 4, which is not destined for the VPC, will use this default route.
Now we need to pick the internet gateway as the target for this route.
So click in the target box on this row, select internet gateway.
There should only be one that's highlighted.
That's the Animals for Life internet gateway you created moments ago.
So select that, and that means that any IP version 4 traffic which is not destined for the VPC CIDR range will be sent to the Internet Gateway.
Now we're going to do the same for IPv6.
So go ahead and add another route.
And the format for the IPv6 default route is ::/0.
And this is the same architecture: it essentially means this matches all IPv6 addresses, but it's less specific than the IPv6 local route on this top row.
So this will only be used for any IPv6 addresses which are not in the IPv6 VPC CIDR range.
So go ahead and select Target, go to Internet Gateway, and select the Animals for Life Internet Gateway.
And once you've done both of those, go ahead and click on Save Changes.
Now this means that we now have two default routes, an IPv4 default route, and an IPv6 default route.
So this means that anything which is associated with these route tables will now send any unknown traffic towards the Internet Gateway.
But what we need to do before this works is we need to ensure that any resources launched into the Web A, Web B, or Web C subnets are allocated with public IPv4 addresses.
To do that, go ahead and click on Subnets.
In the list, we need to locate Web A, Web B, and Web C.
So we'll start with Web A, so select Web A, click on Actions, and then Edit Subnet Settings.
And this time, we're going to modify this subnet so that it automatically assigns a public IPv4 address.
So check this box and click on Save, and that means that any resources launched into the Web A subnet will be allocated with a public IPv4 address.
Now we need to follow the same process for the other web subnets, so select the Web B subnet, click on Actions, and then Edit Subnet Settings.
Enable auto-assign public IPv4 addresses, click on Save, and then do that same process for Web C.
So locate Web C, click on Actions, and then Edit Subnet Settings.
And then enable public IPv4 addresses and click on Save.
So that's all the network configuration done.
We've created an Internet Gateway.
We've associated the Internet Gateway with the Animals for Life VPC.
We've created a route table for the web subnets.
We've associated this route table with the web subnets.
We've added default routes onto this route table, pointing at the Internet Gateway, as default IPv4 and IPv6 routes.
And then we've enabled the allocation of public IPv4 addresses for Web A, Web B, and Web C.
Okay, so this is the end of Part 1 of this lesson.
It was getting a little bit on the long side, and so I wanted to add a break.
It's an opportunity just to take a rest or grab a coffee.
Part 2 will be continuing immediately from the end of Part 1.
So go ahead, complete the video, and when you're ready, join me in part two.
Welcome back.
And in this lesson, I want to talk about how routing works within a VPC and introduce the internet gateway, which is how we configure a VPC so that data can exit to and enter from the AWS public zone and public internet.
Now, this lesson will be theory, where I'm going to introduce routing and the internet gateway and the architecture behind both of those things, as well as jump boxes, also known as bastion hosts.
In the demo lesson which immediately follows this one, you'll get the opportunity to implement an internet gateway yourself in the Animals for Life VPC and fully configure the VPC with public subnets that allow you to connect to a jump box.
So let's get started.
We've got a lot to cover.
A VPC router is a highly available device which is present in every VPC, both default or custom, which moves traffic from somewhere to somewhere else.
It runs in all the availability zones that the VPC uses, and you never need to worry about its availability.
It simply works.
The router has a network interface in every subnet, using the network+1 address of that subnet.
By default in a custom VPC, without any other configuration, the VPC router will simply route traffic between subnets in that VPC.
If an EC2 instance in one subnet wants to communicate with something in another subnet, the VPC router is the thing that moves the traffic between subnets.
Now, the VPC router is controllable.
You create route tables which influence what to do with traffic when it leaves a subnet.
So just to be clear, the route table that's associated with a subnet defines what the VPC router will do when data leaves that subnet.
A VPC is created with what's known as a main route table.
If you don't explicitly associate a custom route table with a subnet, it uses the main route table of the VPC.
If you do associate a route table that you create with a subnet, then when you associate that, the main route table is disassociated.
A subnet can only have one route table associated with it at any one time, but a route table can be associated with many subnets.
A route table looks like this in the user interface.
In this case, this is the main route table for this specific VPC.
And a route table is just a list of routes.
This is one of those routes.
When traffic leaves the subnet that this route table is associated with, the VPC router reviews the IP packets.
And remember, I said that a packet had a source address and a destination address, as well as some data.
The VPC router looks at the destination address of all packets leaving the subnet.
And when it has that address, it looks at the route table and it identifies all the routes which match that destination address.
And it does that by checking the destination field of the route.
This destination field determines what destination the route matches.
Now, the destination field on a route could match exactly one specific IP address.
It could be an IP with a /32 prefix.
And remember, that means that it matches one single IP.
But the destination field on a route could also be a network match.
So matching an entire network of which that IP is part.
Or it could be a default route.
Remember, for IP version 4, I mentioned that 0.0.0.0/0 matches all IP version 4 addresses.
That's known as a default route, a catchall.
In the case where traffic leaving a subnet only matches one route, then that one route is selected.
If multiple routes match, so maybe there's a specific /32 IP match, maybe there's a /16 network match, and maybe there's a 0.0.0.0/0 default match, well then the prefix is used as a priority.
The higher the prefix value, the more specific the route is and the higher priority that that route has.
So the higher the prefix, all the way up to the highest priority of /32, that is used to select which route applies when traffic leaves a subnet.
Once we get to the point where a single rule in a route table is selected, either the sole route that applies or the one with the highest priority, then the VPC router forwards that traffic through to its destination, which is determined by the target field on the route.
And the target field will either point at an AWS gateway, or it will say, as with this example, local.
And local means that the destination is in the VPC itself, so the VPC router can forward the traffic directly.
All route tables have at least one route, the local route.
This matches the VPC CIDR range, and it lets the VPC router know that traffic destined for any IP address in the VPC CIDR range is local and can be delivered directly.
If the VPC is also IPv6-enabled, it will have another local route matching the IPv6 CIDR of the VPC.
As is the case with this example, that bottom route, beginning 2600, that is an IPv6 local route.
That's the IPv6 CIDR of this specific VPC.
Now, these local routes can never be updated.
They're always present, and the local routes always take priority.
They're the exception to that previous rule about the more specific the route is, the higher the priority.
Local routes always take priority.
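To make that route-selection logic concrete, here's a minimal Python sketch. The route table, CIDRs, and target IDs are all hypothetical, and real route evaluation happens inside the VPC router, not in your code — this just models the matching rules described above:

```python
import ipaddress

# Hypothetical route table for a VPC whose CIDR is 10.16.0.0/16.
# Each route is (destination CIDR, target); targets are made-up IDs.
ROUTES = [
    ("10.16.0.0/16", "local"),        # the local route for the VPC CIDR
    ("0.0.0.0/0", "igw-0aaa"),        # default route -> internet gateway
    ("192.168.10.0/24", "vgw-0bbb"),  # a more specific network match
]

def select_route(dest_ip, routes=ROUTES):
    """Return the target for dest_ip: the local route wins if it matches;
    otherwise the matching route with the longest prefix wins."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(cidr), target)
               for cidr, target in routes
               if ip in ipaddress.ip_network(cidr)]
    if not matches:
        return None  # no matching route: the traffic is dropped
    for net, target in matches:
        if target == "local":         # local routes always take priority
            return target
    # Higher prefix length = more specific = higher priority.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(select_route("8.8.8.8"))      # igw-0aaa (only the default matches)
print(select_route("10.16.16.20"))  # local
```

Note how 192.168.10.5 would select the /24 route over the /0 default, exactly because of the prefix-priority rule.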
For the exam, remember the route tables are attached to zero or more subnets.
A subnet has to have a route table.
It's either the main route table of the VPC or a custom one that you've created.
A route table controls what happens to data as it leaves the subnet or subnets that that route table is associated with.
Local routes are always there, uneditable, and match the VPC IPv4 or IPv6 CIDR ranges.
For anything else, higher prefix values are more specific, and more specific routes take priority.
The way the route works is it matches a destination IP, and for that route, it directs traffic towards a specific target.
Now, a default route, which I'll talk about shortly, is what happens if nothing else matches.
Now, an internet gateway is one of the most important add-on features available within a VPC.
It's a regionally resilient gateway which can be attached to a VPC.
I've highlighted the words "region" and "resilience" because it always comes up in the exam.
You do not need a gateway per availability zone.
The internet gateway's resilient by design.
One internet gateway will cover all of the availability zones in the region which the VPC is using.
Now, there's a one-to-one relationship between internet gateways and VPCs.
A VPC can have no internet gateways which makes it entirely private, or it can have one internet gateway.
Those are the two choices.
An internet gateway can be created and not attached to a VPC, so it can have zero attachments, but it can only ever be attached to one VPC at a time, at which point it's valid in all of the availability zones that the VPC uses.
Now, the internet gateway runs from the border of the VPC and the AWS public zone.
It's what allows services inside the VPC, which are allocated with public IP version 4 addresses or IP version 6 addresses, to be reached from the internet and to connect to the AWS public zone or the internet.
Of course, the AWS public zone is used if you're accessing S3, SQS, SNS, or any other AWS public services from the VPC.
Now, it's a managed gateway and so AWS handles the performance.
From your perspective as an architect, it simply works.
Now, using an internet gateway within a VPC is not all that complex.
Here's a simplified VPC diagram.
First, we create and attach an internet gateway to a VPC.
This means that it's available for use inside the VPC.
We can use it as a target within route tables.
So then we create a custom route table and we associate it with the web subnets.
Then we add IP version 4 and optionally IP version 6 default routes to the route table with the target being the internet gateway.
Then finally, we configure the subnet to allocate IP version 4 addresses and optionally IP version 6 by default.
And at that point, once we've done all of those actions together, the subnet is classified as being a public subnet and any services inside that subnet with public IP addresses can communicate to the internet and vice versa and they can communicate with the AWS public zone as long as there's no other security limitations that are in play.
Now, don't worry if this seems complex.
You'll get to experience it shortly in the upcoming demo lesson.
But before that, I want to talk about how IP version 4 addressing actually works inside the VPC because I've seen quite a few difficult questions on the exam based around IP version 4 addressing and I want to clarify exactly how it works.
So conceptually, this is how an EC2 instance might look if it's using IP version 4 to communicate with a software update server of some kind.
So we've got the instance on the left with an internet gateway in between and let's say it's a Linux EC2 instance trying to do some software updates to a Linux update server that's located somewhere in the public internet zone.
So the instance has a private IP address of let's say 10.16.16.20 and it also has an IP version 4 public address that's assigned to it of 43.250.192.20.
Only that's not how it really works.
This is another one of those little details which I try to include in my training courses because it becomes invaluable for the exam.
What actually happens with public IP version 4 addresses is that they never touch the actual services inside the VPC.
Instead, when you allocate a public IP version 4 address, for example, to this EC2 instance, a record is created which the internet gateway maintains.
It links the instance's private IP to its allocated public IP.
So the instance itself is not configured with that public IP.
That's why when you make an EC2 instance and allocate it a public IP version 4 address, inside the operating system, it only sees the private IP address.
Keep this in mind for the exam that there are questions which will try to trip you up on this one.
For IP version 4, it is not configured in the OS with the public IP address.
So let's look at the flow of data.
How does this work?
Well, when the Linux instance wants to communicate with the Linux software update server, it creates a packet of data.
Now obviously it probably creates a lot of packets.
Well, let's focus on one for now because it keeps the diagram nice and simple.
The packet has a source address of the EC2 instance and a destination address of the Linux software update server.
So at this point, the packet is not configured with any public addressing.
This packet would not be routable across the public internet.
It could not reach the Linux update server.
That's really important to realize.
Now the packet leaves the instance and because we've configured a default route, it arrives at the internet gateway.
The internet gateway sees that this packet is from the EC2 instance because it analyzes the source IP address of that packet.
And it knows that this instance has an associated public IP version 4 address.
And so it adjusts the packet.
It changes the packet's source IP address to the public IP address that's allocated to that instance.
And this IP address, because it's public, is routable across the internet.
So the internet gateway then forwards the updated packet onto its destination.
So as far as the Linux software update server is concerned, it's receiving a packet from a source IP address of 43.250.192.20.
It knows nothing of the private IP address of the EC2 instance.
Now on the way back, the inverse happens.
The Linux software update server wants to send a packet back to our EC2 instance.
But as far as it's concerned, it doesn't know about the private address.
It just knows about the public address.
So the software update server sends a packet back addressed to the instance's public IP address, with its own address as the source.
So it thinks that the real IP address of the instance is this 43.250.192.20 address.
Now this IP address actually belongs to the internet gateway.
And so it travels over the public internet and it arrives at the internet gateway, which then modifies this packet.
It changes it.
It changes the destination address of this packet from the 43 address to the original IP address of the EC2 instance.
And it does this because it's got a record of the relationship between the private IP and the allocated public IP.
So it just changes the destination to the private IP address of the instance.
And then it forwards this packet through the VPC network to the original EC2 instance.
So the reason I wanted to highlight this is because at no point is the operating system on the EC2 instance aware of its public IP.
It just has a private IP.
Don't fall for any exam questions which try to convince you to assign the public IP version 4 address of an EC2 instance directly to the operating system.
It has no knowledge of this public address.
Configuring an EC2 instance appropriately using IP version 4 means putting the private IP address only.
The public address never touches the instance.
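The translation the internet gateway performs can be sketched as simple static NAT. This is an illustrative model only — the addresses and the mapping table are hypothetical, and a "packet" here is just a dict with source and destination fields:

```python
# Minimal sketch of the static NAT an internet gateway performs for
# public IPv4. The IGW maintains a record linking each instance's
# private IP to its allocated public IP.
NAT_TABLE = {"10.16.16.20": "43.250.192.20"}           # private -> public
REVERSE = {pub: priv for priv, pub in NAT_TABLE.items()}

def outbound(packet):
    """Instance -> internet: rewrite the private source IP to the
    instance's allocated public IP so the packet is internet-routable."""
    pkt = dict(packet)
    pkt["src"] = NAT_TABLE.get(pkt["src"], pkt["src"])
    return pkt

def inbound(packet):
    """Internet -> instance: rewrite the public destination IP back to
    the instance's private IP before forwarding into the VPC."""
    pkt = dict(packet)
    pkt["dst"] = REVERSE.get(pkt["dst"], pkt["dst"])
    return pkt

# The instance's OS only ever sees its private address in these packets.
out = outbound({"src": "10.16.16.20", "dst": "203.0.113.10"})
print(out["src"])   # 43.250.192.20
```

The key point the sketch illustrates: the rewrite happens entirely at the gateway, which is why the operating system never sees the public address.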
For IP version 6, all addresses that AWS uses are natively publicly routable.
And so in the case of IP version 6, the operating system does have the IPv6 address configured upon it.
That's the publicly routable address.
And all the internet gateway does is pass traffic from an instance to an internet server.
And then back again, it doesn't do any translation.
Now, before we implement this in the demo lesson, I just want to briefly touch upon bastion hosts and jump boxes.
At a high level, bastion hosts and jump boxes are one and the same.
Essentially, it's just an instance in a public subnet inside a VPC.
And architecturally, they're used to allow incoming management connections.
So all incoming management connections arrive at the bastion host or jump box.
And then once connected, you can then go on to access internal only VPC resources.
So bastion hosts and jump boxes are generally used either as a management point or as an entry point for private only VPCs.
So if your VPC is a highly secure private VPC, you'll generally have a bastion host or a jump box being the only way to get access to that VPC.
So it's essentially just an inbound management point.
And you can configure these bastion hosts or jump boxes to only accept connections from certain IP addresses, to authenticate with SSH, or to integrate with your on-premises identity servers.
You can configure them exactly how you need, but at a real core architectural level, they are generally the only entry point to a highly secure VPC.
And historically, they were the only way to manage private VPC instances.
Now, there are alternative ways to do that now, but you will still find bastion hosts and jump boxes do feature on the exam.
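As an aside, one common way to reach private instances through a bastion is OpenSSH's ProxyJump option. This is just an illustrative sketch — the host names, IP addresses, and key path are all hypothetical:

```
# ~/.ssh/config (illustrative - hosts, IPs and key paths are hypothetical)
Host bastion
    HostName 43.250.192.20        # the bastion's public IP
    User ec2-user
    IdentityFile ~/.ssh/a4l.pem

Host private-app
    HostName 10.16.32.10          # a private-only instance inside the VPC
    User ec2-user
    ProxyJump bastion             # tunnel the connection via the bastion
```

With that in place, `ssh private-app` connects to the private instance by first hopping through the bastion.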
OK, so that's all the theory that I wanted to cover in this lesson.
It's now time for a demo.
In the next lesson, we're going to implement the Internet gateway in the Animals for Life VPC.
We'll create it, we'll attach it to the VPC, we'll create a custom route table for the web subnets, we'll create two routes in that route table, one for IP version 4 and one for IP version 6.
And both of these will point at the Internet gateway as a target.
We'll associate that route table with the web tier subnets, configure those subnets to allocate public IP version 4 addresses, and then launch a bastion host into one of those web subnets.
And if all goes well, we will be able to connect to that instance using our SSH application.
So I think this demo lesson is going to be really interesting and really exciting.
It's the first time that we're going to be stepping through something together that we could qualify as production like.
Something that you could implement and would implement in a production ready VPC.
So go ahead, complete this video, and when you're ready, join me in the demo lesson.
Welcome back and in this demo lesson you're going to create all of the subnets within the custom VPC for animals for life.
So we're going to create the subnets as shown on screen now.
We're going to be creating four subnets in each availability zone.
So that's the web subnet, the application subnet, the database subnet and the reserved subnet.
And we're going to create each of those four in availability zone A, B and C.
Now before we get started, attached to this lesson is a link to this document.
Now this is a list of all the details of the subnets you're going to create in this demo lesson.
So we've got a reserved subnet, a database subnet, an app subnet and a web subnet.
And we've got each of those in availability zone A, B and C.
Now in terms of what this document contains, we have a subnet name, then we have the IP range that subnet will use, the availability zone that subnet is within and then this last value and we'll talk about what this is very shortly.
This relates to IP version 6.
Now you'll notice that for each subnet this is a unique value.
You're going to need this to configure the IP version 6 configuration for each of the subnets.
Now let's get started.
So let's move to the AWS console and you need to be within the VPC console.
So if you're not already there, go ahead and type that in the search box at the top and then click to move to the VPC console and then click on subnets.
And once you've done that, go ahead and click on create subnet.
Now the newest version of the user interface allows you to create multiple subnets at a time and so we're going to create all four of the subnets in each availability zone.
So we'll do this three times, one for availability zone A, one for B and one for C.
So we'll get started with availability zone A and first we need to select the VPC to use.
So click in the VPC ID drop down and select the Animals for Life VPC.
Once you've done that, we're going to start with the first subnet.
So let's move to the subnet's document.
So it's these subnets that we're going to create and we'll start with the reserved subnet.
So copy the name of the subnet into your clipboard and paste it into subnet name.
Then change the availability zone to AZA and then make sure IPv4 is set to manual and move back to the subnet's document and copy the IP range that we're going to use and paste that into this box.
Then scroll down again and make sure manual input is selected for IPv6.
Then click the drop down and select the IPv6 range for the VPC.
Now the VPC uses a /56 IPv6 range.
Because we need our subnet to fit inside this, we're going to make the individual subnet ranges much smaller.
So what I'll need you to do and you'll need to do this each time is to click on the down arrow and you'll need to click on the down arrow twice.
The first time will change it to a /60 and the second time to a /64.
Now note in my case how I have 9, 6, 0, 0 and if I click on this right arrow, it increments this value by one each time.
Now this value corresponds to the value in the subnet's document, so in this case 0, 0.
By changing this value each time, it means that you're giving a unique IPv6 range to each subnet.
So in this case, start off by leaving this set to 0, 0.
And once you've done that, you can click on add new subnet.
And we're going to create the next subnet.
So in this case, it's sn-db-a, so enter that name, change the availability zone to A, manual input for IPv4, copy the IP range for DBA into your clipboard, paste that in, manual for IPv6, change the VPC range in the drop down, click the down arrow twice to set this to /64, and then change this value to 0, 1.
And again, this matches the IPv6 value in the subnet's document.
Then we'll do the same process for the third subnet, so we're going to add a new subnet.
This time the name is sn-app-a, enter that, availability zone A, manual for IPv4, and then paste in the IP range, manual for IPv6, and select the VPC range, and then change the subnet block to /64 by clicking the down arrow twice.
And then click the right arrow twice to set 0, 2 as the unique value for the subnet.
And again, this matches the value in the subnet's document.
Then lastly, we're going to do the same thing for the last subnet in availability zone A, so we're going to add a new subnet.
This time it's web A, so copy and paste that into the subnet name box, set availability zone A, manual for IPv4, copy and paste the range from the subnet's document, manual for IPv6, select the IPv6 range from the VPC, click the down arrow twice to set the appropriate size for the subnet IPv6 range, and then click on the right arrow to change this value to 0, 3.
Now that's all the subnets entered, all four of them in availability zone A, so we can scroll all the way down to the bottom and click on create subnet, and that's going to create all four of those subnets.
We can see those in this list.
Now we're going to follow that same process for availability zone B, so click on create subnet, change the VPC to the Animals for Life VPC, and now we're going to start moving through more quickly.
So first we're going to do the reserve B subnet, so copy the name, paste that in, set the availability zone this time to B, manual for IPv4, paste in the range, manual for IPv6, select the VPC range in the drop-down, click on the down arrow twice to set the /64, and then click on the right arrow and set to 0, 4 as the unique value.
Scroll down and click add new subnet. Next is db-B: enter that, availability zone B, manual for IPv4, and take note of the IPv6 unique value so we don't have to keep switching backwards and forwards to the document — in this case it's 05. Paste in the IPv4 range, manual for IPv6, select the VPC range in the drop-down, down arrow twice, and then the right arrow to set 05, the unique value for this subnet.
Click add new subnet. This time it's app-B: enter that name, availability zone B, manual for IPv4, enter the IP range for app-B, and again note that the unique IPv6 value for this subnet is 06. Manual for IPv6, select the VPC range in the drop-down, down arrow twice, right arrow until it says 06.
Then add new subnet again for the last one, web-B: enter that name, availability zone B in the drop-down, manual for IPv4, copy and paste the IP range, and note 07 as the IPv6 unique value. Manual for IPv6, select the VPC range in the drop-down, down arrow twice, right arrow until it says 07. Now we've got all four subnets in AZ B, so click on create subnet.
Then we'll do this one last time for availability zone C. Click on create subnet and select the Animals for Life VPC in the drop-down. Subnet one is sn-reserved-C: availability zone C, enter the IPv4 range, and note the IPv6 unique value, which is 08. Paste the range into the box, manual for IPv6, select the VPC range, down arrow twice to set /64, and right arrow until it says 08.
Scroll down and add a new subnet. Next is db-C: enter that, availability zone C, and do the same thing as before — enter the IP range, note the IPv6 unique value of 09, manual for IPv6, select the VPC range in the drop-down, down arrow twice, and click the right arrow until 09 is selected.
Add a new subnet, then the application subnet, app-C: paste that in, availability zone C, and get the IPv4 range and the unique IPv6 value. Note that these values are hexadecimal, so 0A directly follows 09 — pay attention to that. Go back, paste in the IPv4 range, manual for IPv6, select the VPC range, down arrow twice to select /64, and right arrow until you get 0A.
Then one last time, click on add new subnet and go back to the subnet document for web-C: availability zone C, get the IPv4 range and note the unique IPv6 value, paste that in, select the IPv6 range for the VPC, down arrow twice to select /64, and right arrow all the way through to 0B. At that point you can go ahead and click on create subnet, and that's created all four subnets in availability zone C.
So that's all of the subnets within the Animals for Life VPC — at least those in AZ A, AZ B, and AZ C. Once again, we're not going to create the ones in AZ D, which are reserved for future growth.
Now there's one final thing that we need to do to all of these subnets. Each of these subnets is allocated an IPv6 range, but none is yet set to auto-allocate IPv6 addresses to anything created within it. To do that, go ahead and select sn-app-A, click on actions, then edit subnet settings, and check the box to enable auto-assign IPv6 addresses. Once you've done that, scroll to the bottom and click on save. So that's one subnet done. Next, do the same for sn-app-B: actions, edit subnet settings, auto-assign IPv6, and save. Notice how we're not touching the IPv4 setting — we'll be changing that as appropriate later. Then select sn-app-C and again edit subnet settings, enable auto-assign IPv6, and click on save.
Then we're going to do the same for the database subnets: db-A, edit subnet settings, enable IPv6, and save; then db-B, check the box and save; then db-C, edit subnet settings, check the box, and save. Now we'll do the reserved subnets: reserved-A, then reserved-B, and then reserved-C. And finally the web subnets — start with web-A, again making sure you're only changing the IPv6 box, and save; do the same with web-B; and then scroll down and do the final subnet, web-C — same process, IPv6, and save.
At this point you've gone through the very manual process of creating 12 subnets across three availability zones, using the architecture that's shown on screen now. In production usage in the real world you would automate this process — you wouldn't do this manually each and every time. But I think it's important that you understand how to do this process manually, so you can understand exactly what to select when configuring automation to achieve the same end goal. Whenever I'm using automation, I always like to understand how it works manually, so that I can fully understand what it is that the automation is doing.
Now at this point, that is everything that I wanted to cover in this demo lesson. We're going to be continually evolving this design as we move through this section of the course, but at this point that is everything I wanted to do. So go ahead and complete this video, and when you're ready, I'll look forward to you joining me in the next.
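As a sketch of what that automation might generate, here's a pure-Python plan for the same 12 subnets: four tiers across three AZs, each with an IPv4 slice and one of the /64 IPv6 "unique value" blocks. The VPC ranges are hypothetical stand-ins, and the tier-to-block mapping is simplified compared with the course's exact layout — the point is the generation, not the precise assignment:

```python
import ipaddress

# Hypothetical stand-ins for the Animals for Life VPC ranges; the real
# /56 is whatever AWS allocated to your VPC.
VPC_V4 = ipaddress.ip_network("10.16.0.0/16")
VPC_V6 = ipaddress.ip_network("2600:1f18:23ba:9600::/56")

TIERS = ["reserved", "db", "app", "web"]   # four tiers...
AZS = ["A", "B", "C"]                      # ...in three AZs = 12 subnets

def subnet_plan():
    """Return (name, ipv4_cidr, ipv6_cidr) for all 12 subnets. Each gets
    a /20 slice of the IPv4 range and one of the 256 /64s from the /56."""
    v4 = list(VPC_V4.subnets(new_prefix=20))
    v6 = list(VPC_V6.subnets(new_prefix=64))
    pairs = [(az, tier) for az in AZS for tier in TIERS]
    return [(f"sn-{tier}-{az}", str(v4[i]), str(v6[i]))
            for i, (az, tier) in enumerate(pairs)]

for name, v4_cidr, v6_cidr in subnet_plan():
    print(name, v4_cidr, v6_cidr)
```

A real automation (CloudFormation, Terraform, or an SDK script) would then feed each row of this plan into a subnet-creation call.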
Welcome back.
And in this lesson, I want to continue the theme of VPC networking in AWS by covering VPC subnets.
Now subnets are where services run from inside VPCs, and they're how you add structure, functionality and resilience to VPCs.
So they're an important thing to get right, both for production deployment and to do well in the exam.
So let's not waste time.
Let's jump in and get started.
In this lesson, we'll be starting off with this architecture.
This is exactly how we left off at the end of the previous lesson, a framework VPC, a skeleton.
What we'll be doing is creating an internal structure using subnets.
We'll be turning this into this.
Now if you compare this diagram to the one that I've linked previously, you might notice that the web tier subnets on the right are blue on this diagram instead of green on the diagram that I previously linked.
Now with AWS diagrams, blue means private subnets and green means public subnets.
Subnets inside a VPC start off entirely private and they take some configuration to make them public.
So at this point, the subnets which will be created on the right, so the web tier, they'll be created as private subnets.
And in the following lessons, we'll change that together.
So for now, this diagram showing them as private subnets is correct.
So what exactly is a subnet?
It's an AZ-resilient feature of a VPC: a subnetwork of the VPC, a part of the VPC that's inside a specific availability zone.
It's created within one availability zone and it can never be changed because it runs inside of an availability zone.
If that availability zone fails, then the subnet itself fails and so do any services that are only hosted in that one subnet.
And as AWS Solutions Architects, when we design highly available architectures, we're trying to put different components of our system into different availability zones to make sure that if one fails, our entire system doesn't fail.
And the way that we do that is to put these components of our infrastructure into different subnets, each of which are located in a specific availability zone.
The relationship between subnets and availability zones is that one subnet is created in a specific availability zone in that region.
It can never be changed and a subnet can never be in multiple availability zones.
That's the important one to remember for the exam.
One subnet is in one availability zone.
A subnet can never be in more than one availability zone.
Logically, though, one availability zone can have zero or lots of subnets.
So one subnet is in one availability zone, but one availability zone can have many subnets.
Now, a subnet by default uses IP version 4 networking and it's allocated an IPv4 CIDR.
And this CIDR is a subset of the VPC CIDR block.
It has to be within the range that's allocated to the VPC.
What's more, the CIDR that the subnet uses cannot overlap with any other subnet in that VPC.
They have to be non-overlapping.
That's another topic which tends to come up all the time in the exam.
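Python's `ipaddress` module can illustrate both rules — the subnet must sit inside the VPC CIDR and must not overlap its siblings. The CIDRs here are hypothetical examples:

```python
import ipaddress

vpc   = ipaddress.ip_network("10.16.0.0/16")   # hypothetical VPC CIDR
sub_a = ipaddress.ip_network("10.16.16.0/20")  # a valid subnet
sub_b = ipaddress.ip_network("10.16.32.0/20")  # another valid subnet
bad   = ipaddress.ip_network("10.16.24.0/21")  # overlaps sub_a

# Each subnet CIDR must sit inside the VPC CIDR...
print(sub_a.subnet_of(vpc), sub_b.subnet_of(vpc))  # True True
# ...and subnets in the same VPC must not overlap one another.
print(sub_a.overlaps(sub_b))                       # False
print(bad.overlaps(sub_a))                         # True - would be rejected
```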
Now, a subnet can optionally be allocated an IP version 6 CIDR block as long as the VPC is also enabled for IP version 6.
The range that's allocated to individual subnets is a /64 range.
And that /64 range is a subset of the VPC's /56 range.
A /56 IPv6 range has enough space for 256 /64 ranges, one for each subnet that needs one.
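You can verify that arithmetic with Python's `ipaddress` module; the /56 here is a hypothetical allocation:

```python
import ipaddress

# A hypothetical /56 IPv6 allocation for a VPC.
vpc_v6 = ipaddress.ip_network("2600:1f18:23ba:9600::/56")

# Splitting /56 into /64 blocks yields 2^(64-56) = 256 subnet-sized ranges.
blocks = list(vpc_v6.subnets(new_prefix=64))
print(len(blocks))   # 256
print(blocks[0])     # 2600:1f18:23ba:9600::/64
```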
Now, subnets inside the VPC can, by default, communicate with other subnets in that same VPC.
The isolation of the VPC is at the perimeter of the VPC.
Internally, there is free communication between subnets by default.
Now, we spoke in previous lessons about sizing.
So sizes of networks are based on the prefix.
For example, a /24 network allows values from 0 to 255 in the fourth octet.
That's 256 possible IPs.
But inside a subnet, you don't get to use them all.
Some IPs inside every VPC subnet are reserved.
So let's look at those next.
There are five IP addresses within every VPC subnet that you can't use.
So whatever the size of the subnet, the usable IPs are five less than you would expect.
Let's assume, for example, that the subnet we're talking about is 10.16.16.0/20.
So this has a range of 10.16.16.0 to 10.16.31.255.
The first address which is unusable is the network address.
The first address of any subnet is the one that represents the network, the starting address of the network, and it can't be used.
This isn't specific to AWS.
It's the case for any other IP networks as well.
Nothing uses the first address on a network.
Next is what's known as the network plus one address.
The first IP after the network address.
And in AWS, this is used by the VPC router, the logical network device which moves data between subnets, and in and out of the VPC if it's configured to allow that.
The VPC router has a network interface in every subnet, and it uses this network plus one address.
Next is another AWS specific IP address which can't be used, called the network plus two address.
In a VPC, the second usable address of the VPC range is used for DNS.
But AWS reserves the network plus two address in every subnet.
So I've put DNS and an asterisk here because I refer to this reservation as the DNS reservation.
But strictly speaking, it's the second address in a VPC which is used for DNS.
So that's the VPC range plus two.
But AWS do reserve the network plus two address in every single subnet.
So you need to be aware of that.
And there's one more AWS specific address that you can't use, and you guessed it, it's a network plus three address.
This doesn't have a use yet.
It's reserved for future requirements.
And in this example, the network plus three address is 10.16.16.3.
And then lastly, the final IP address that can't be used in every VPC subnet is the network broadcast address.
Broadcasts are not supported inside a VPC, but the last IP address of every subnet is reserved regardless.
So you cannot use this last address.
So this makes a total of five IP addresses in every subnet that you can't use, three AWS specific ones, and then the network and broadcast addresses.
So if a subnet should have 16 IPs, it actually has 11 usable IPs.
So keep this in mind, especially when you're creating smaller VPCs and subnets, because this can quickly eat up IP addresses, especially if you use small VPCs with lots of subnets.
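Using the lesson's example subnet, a short Python sketch shows where the five reserved addresses fall and the resulting usable count:

```python
import ipaddress

# The example subnet from the lesson: 10.16.16.0/20.
subnet = ipaddress.ip_network("10.16.16.0/20")

network    = subnet.network_address    # 10.16.16.0   - network address
vpc_router = network + 1               # 10.16.16.1   - VPC router ("network+1")
dns        = network + 2               # 10.16.16.2   - the "DNS" reservation
future     = network + 3               # 10.16.16.3   - reserved for future use
broadcast  = subnet.broadcast_address  # 10.16.31.255 - broadcast address

usable = subnet.num_addresses - 5      # 4096 total minus the 5 reserved
print(usable)                          # 4091
```

The same arithmetic applied to a /28 (16 addresses) gives the 11 usable IPs mentioned above.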
Now a VPC has a configuration object applied to it called a DHCP option set.
DHCP stands for Dynamic Host Configuration Protocol.
It's how computing devices receive IP addresses automatically.
Now there's one DHCP option set applied to a VPC at one time, and this configuration flows through to subnets.
It controls things like DNS servers, NTP servers, NetBIOS servers, and a few other things.
If you've ever managed a DHCP server, this will be familiar.
So for every VPC, there's a DHCP option set that's linked to it, and that can be changed.
You can create option sets, but you cannot edit them.
So keep in mind, if you want to change the settings, you need to create a new one, and then change the VPC allocation to this new one.
On every subnet, you can also define two important IP allocation options.
The first option controls if resources in a subnet are allocated a public IP version 4 address, in addition to their private subnet address automatically.
Now I'm going to be covering this in a lesson on routing and internet gateway, because there's some additional theory that you need to understand about public IP version 4 addresses.
But this is one of the steps that you need to do to make a subnet public.
So it's on a per subnet basis that you can set auto assign public IP version 4 addresses.
Another related option defined at a subnet level is whether resources deployed into that subnet are also given an IP version 6 address, and logically for that to work, the subnet has to have an allocation as does the VPC.
But both of these options are defined at a subnet level and flow onto any resources inside that subnet.
Okay, so now it's time for a demo.
That's all the theory that I wanted to cover in this VPC subnet lesson.
So in the demo lesson, we're going to implement the structure inside VPC together.
We're essentially going to change this skeleton VPC into a multi-tier VPC that's configured with all of these subnets.
Now it's going to be a fairly detailed demo lesson.
You'll have to create all of these 12 subnets manually one by one.
And out of all the lessons, the detail really matters on this one.
We need to make sure that you configure this exactly as required so you don't have any issues in future.
Now if you do make any mistakes, I'm going to make sure that I supply a CloudFormation template with the next lesson that allows you to configure this in future automatically.
But the first time that you do this lesson, I do want you to do it manually because you need to get used to the process of creating subnets.
So controlling what the IP ranges are, being able to select which availability zone they go in, and knowing how to assign IP version 6 ranges to those subnets.
So it is worthwhile investing the time to create each of these 12 subnets manually.
And that's what we're going to do in the next demo lesson.
But at this point, go ahead and complete this lesson, and then when you've got the time, I'll see you in the next demo lesson where we'll complete the configuration of this VPC by creating the subnets.
-
Welcome back.
Over the remaining lessons in this section, you're going to learn how to build a complex, multi-tier, custom VPC step by step.
One of the benefits of the VPC product is that you can start off simple and layer components in piece by piece.
This lesson will focus on just the VPC shell, but by the end of this section, you'll be 100% comfortable building a pretty complex private network inside AWS.
So let's get started.
Now, don't get scared off by this diagram, but this is what we're going to implement together in this section of the course.
Right now, it might look complicated, but it's like building a Lego project.
We'll start off simple and add more and more complexity as we go through the section.
This is a multi-tier, custom VPC.
If you look at the IP plan document that I linked in the last lesson, it's using the first IP range of US Region 1 for the general account, so 10.16.0.0/16.
So the VPC will be configured to use that range.
Inside the VPC, there'll be space for four tiers running in four availability zones for a total of 16 possible subnets.
Now, we'll be creating all four tiers, so reserved, database, app, and web, but only three availability zones, A, B, and C.
We won't be creating any subnets in the capacity reserved for the future availability zone, so that's the part at the bottom here.
In addition to the VPC that we'll create in this lesson and the subnets that we'll create in the following lessons, as we move through this section of the course we'll also be creating an internet gateway which will give resources in the VPC public access.
We'll be creating NAT gateways which will give private instances outgoing only access, and we'll be creating a bastion host which is one way that we can connect into the VPC.
Now, using bastion hosts is frowned upon and isn't best security practice for getting access to AWS VPCs, but it's important that you understand how not to do something in order to appreciate good architectural design.
So I'm going to step you through how to implement a bastion host in this part of the course, and as we move through later sections of the course, you'll learn more secure alternatives.
Finally, later on in the section, we'll also be looking at network access control lists, known as NACLs, which can be used to secure the VPC, as well as data transfer costs for any data that moves in and around the VPC.
Now, this might look intimidating, but don't worry, I'll be explaining everything every step of the way.
To start with though, we're going to keep it simple and just create the VPC.
Before we do create a VPC, I want to cover some essential architectural theory, so let's get started with that.
VPCs are a regionally isolated and regionally resilient service.
A VPC is created in a region and it operates from all of the AZs in that region.
It allows you to create isolated networks inside AWS, so even in a single region in an account, you can have multiple isolated networks.
Nothing is allowed in or out of a VPC without a piece of explicit configuration.
It's a network boundary and it provides an isolated blast radius.
What I mean by this is if you have a problem inside a VPC, so if one resource or a set of resources are exploited, the impact is limited to that VPC or anything that you have connected to it.
I talked earlier in the course about the default VPC being set up by AWS using the same static structure of one subnet per availability zone using the same IP address ranges and requiring no configuration from the account administrator.
Well, custom VPCs are pretty much the opposite of that.
They let you create networks with almost any configuration, which can range from a simple VPC to a complex multi-tier one such as the one that we're creating in this section.
Custom VPCs also support hybrid networking, which lets you connect your VPC to other cloud platforms as well as on-premises networks, and we'll cover that later on in the course.
When you create a VPC, you have the option of picking default or dedicated tenancy.
This controls whether the resources created inside the VPC are provisioned on shared hardware or dedicated hardware.
So be really careful with this option.
If you pick default, then you can choose on a per-resource basis later on, when you provision resources, whether they go on shared hardware or dedicated hardware.
If you pick dedicated tenancy at a VPC level, then that's locked in.
Any resources that you create inside that VPC have to be on dedicated hardware.
So you need to be really careful with this option because dedicated tenancy comes at a cost premium, and my rule on this is unless you really know that you require dedicated, then pick default, which is the default option.
Now, a VPC can use IP version 4 private and public IPs.
The private CIDR block is the main method of IP communication for the VPC.
So by default, everything uses these private addresses.
Public IPs are used when you want to make resources public, when you want them to communicate with the public internet or the AWS public zone, or you want to allow communication to them from the public internet.
Now, a VPC is allocated one mandatory private IP version 4 CIDR block.
This is configured when you create the VPC, which you'll see in a moment when we actually create a VPC.
Now, this primary block has two main restrictions.
It can be at its smallest a /28 prefix, meaning the entire VPC has 16 IP addresses, and some of those can't be used.
More on that in the next lesson when I talk about subnets.
At the largest, a VPC can use a /16 prefix, which is 65,536 IP addresses.
Now, you can add secondary IPv4 CIDR blocks after creation, but by default, at the time of creating this lesson, there's a maximum of five of those, although this can be increased using a support ticket.
But generally, when you're thinking conceptually about a VPC, just imagine that it's got a pool of private IP version 4 addresses, and optionally, it can use public addresses.
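As a quick sanity check of those size limits (the CIDR blocks here are arbitrary examples):

```python
import ipaddress

# The smallest and largest IPv4 CIDR blocks a VPC supports.
smallest = ipaddress.ip_network("10.16.0.0/28")
largest = ipaddress.ip_network("10.16.0.0/16")

print(smallest.num_addresses)  # 16
print(largest.num_addresses)   # 65536
```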
Now, another optional configuration is that a VPC can be configured to use IP version 6 by assigning a /56 IPv6 CIDR to the VPC.
Now, this is a feature set which is still evolving, so not everything works with the same level of features as it does for IP version 4, but with the increasing worldwide usage of IP version 6, in most circumstances, you should look at applying an IP version 6 range as a default.
An important thing about IP version 6 is that the range is either allocated by AWS, as in you have no choice on which range to use, or you can select to use your own IP version 6 addresses, addresses which you own.
You can't pick a block like you can with IP version 4, either let AWS assign it or you use addresses that you own.
Now, IP version 6 IPs don't have the concept of private and public, the range of IP version 6 addresses that AWS uses are all publicly routable by default.
But if you do use them, you still have to explicitly allow connectivity to and from the public internet.
So don't worry about security concerns, it just removes an admin overhead because you don't need to worry about this distinction between public and private.
Now, AWS VPCs also have fully featured DNS.
It's provided by Route 53, and inside the VPC, it's available on the base IP address of the VPC plus two.
So if the VPC is 10.0.0.0, the DNS IP will be 10.0.0.2.
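The "plus two" address is easy to compute; here's a quick sketch with Python's ipaddress module:

```python
import ipaddress

# The VPC DNS address is the base (network) address of the VPC plus two.
vpc = ipaddress.ip_network("10.0.0.0/16")
dns_ip = vpc.network_address + 2

print(dns_ip)  # 10.0.0.2
```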
Now, there are two options which are critical for how DNS functions in a VPC, so I've highlighted both of them.
The first is a setting called enable DNS host names, and this indicates whether instances with public IP addresses in a VPC are given public DNS host names.
So if this is set to true, then instances do get public DNS host names.
If it's not set to true, they don't.
The second option is enable DNS support, and this indicates whether DNS is enabled or disabled in the VPC, so DNS resolution.
If it is enabled, then instances in the VPC can use the DNS IP address, so the VPC plus 2 IP address.
If this is set to false, then this is not available.
Now, the reason I mention both of these is that if you have any exam questions or real-world situations where you're having DNS issues, these two should be the first settings you check and switch on or off as appropriate.
And in the demo part of this lesson, I'll show you where to access those.
Speaking of which, it's now time for the demo component of this lesson, and we're going to implement the framework of VPC for the Animals for Life organization together inside our AWS account.
So let's go ahead and finish the theory part of this lesson right now, and then in the next lesson, the demo part, we'll implement this VPC together.
-
Welcome back, this is part two of this lesson.
We're going to continue immediately from the end of part one, so let's get started.
That's a good starting point for our plan.
Before I elaborate more on that plan though, let's think about VPC sizing and structure.
AWS provides some useful pointers on VPC sizing, which I'll link to in the lesson text, but I also want to talk about it briefly in this lesson.
They define micro as a /24 VPC with eight subnets inside it, each subnet being a /27, which means 27 usable IP addresses per subnet, and a total of 216.
This goes all the way through to extra large, which is a /16 VPC with 16 subnets inside, each of which is a /20, offering 4,091 usable IP addresses per subnet, for a total of just over 65,000.
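Those "micro" numbers can be verified with a few lines of Python, remembering the five reserved addresses per subnet:

```python
import ipaddress

# "Micro" sizing: a /24 VPC split into eight /27 subnets.
vpc = ipaddress.ip_network("10.16.0.0/24")
subnets = list(vpc.subnets(new_prefix=27))

per_subnet_usable = subnets[0].num_addresses - 5  # AWS reserves 5 IPs per subnet
total_usable = per_subnet_usable * len(subnets)

print(len(subnets), per_subnet_usable, total_usable)  # 8 27 216
```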
And deciding which to use, there are two important questions.
First, how many subnets will you need in each VPC?
And second, how many IP addresses will you need in total, and how many IP addresses in each subnet?
Now deciding how many subnets to use, there's actually a method that I use all the time, which makes it easier.
Let's look at that next.
So this is the shell of a VPC, but you can't just use a VPC to launch services into.
That's not how it works in AWS.
Services use subnets, which is where IP addresses are allocated from; VPC services run from within subnets, not directly from the VPC.
And if you remember, all the way back at the start of the course where I introduced VPCs and subnets, I mentioned that a subnet is located in one availability zone.
So the first decision point that you need to think about is how many availability zones your VPC will use.
This decision impacts high availability and resilience, and it depends somewhat on the region that the VPC is in, since some regions are limited in how many availability zones they have.
Some regions have three availability zones, some have more.
So step one is to pick how many availability zones your VPC will use.
Now I'll spoil this and make it easy.
I always start with three as my default.
Why?
Because it will work in almost any region.
And I also always add a spare, because we all know at some point things grow, so I aim for at least one spare.
And this means a minimum of four availability zones: A, B, C, and the spare.
If you think about it, that means we have to split the VPC into at least four smaller networks.
So if we started with a /16, we would now have four /18s.
As well as the availability zones inside of VPC, we also have tiers, and tiers are the different types of infrastructure that are running inside that VPC.
We might have a web tier, an application tier, a database tier, that makes three, and you should always add buffer.
So my default is to start with four tiers, web, application, database, and a spare.
Now the tiers in your architecture might be different, but my default for most designs is to use three plus a spare, so web, application, database, and then a spare for future use.
If you only used one availability zone, then each tier would need its own subnet, meaning four subnets in total.
But we also have four AZs, and since we want to take full advantage of the resiliency provided by these AZs, we need the same base networking duplicated in each availability zone.
So each tier has its own subnet in each availability zone: four web subnets, four app subnets, four database subnets, and four spares, for a total of 16 subnets.
So if we chose a /16 for the VPC, that would mean that each of the 16 subnets would need to fit into that /16.
So a /16 VPC split into 16 subnets results in 16 smaller network ranges, each of which is a /20.
Remember, each time the prefix is increased, from 16 to 17, it creates two networks, from 16 to 18, it creates four, from 16 to 19, it creates eight, from 16 to 20, it creates 16 smaller networks.
Now that we know that we need 16 subnets, we could start with a /17 VPC, and then each subnet would be a /21, or we could start with a /18 VPC, and then each subnet would be a /22, and so on.
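You can see this splitting behaviour directly; for example, a /16 VPC divided into /20 subnets:

```python
import ipaddress

vpc = ipaddress.ip_network("10.16.0.0/16")

# Increasing the prefix by 4 (from /16 to /20) creates 2^4 = 16 subnets.
subnets = list(vpc.subnets(new_prefix=20))

print(len(subnets))    # 16
print(subnets[0])      # 10.16.0.0/20
print(subnets[-1])     # 10.16.240.0/20
```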
Now that you know the number of subnets, and because of that, the size of the subnets in relation to the VPC prefix size, picking the size of the VPC, is all about how much capacity you need.
Whatever prefix you pick for the VPC, the subnets will be four prefix steps smaller.
So let's move on to the last part of this lesson, where we're going to be deciding exactly what to use.
Now, Animals for Life is a global organization already, but with what's happening environmentally around the world, the business could grow significantly, and so when designing the IP plans for the business, we need to assume a huge level of growth.
We've talked about a preference for the 10 range, and avoiding the common networks and the Google Cloud range gives us 10.16 through 10.127 to use as /16 networks.
We have five regions that we're going to be assuming the business will use, three to be chosen in the US, one in Europe, and one in Australia.
So if we start at 10.16 and break this down into segments, we could choose to use 10.16 to 10.31 as US Region 1, 10.32 to 10.47 as US Region 2, 10.48 to 10.63 as US Region 3, 10.64 to 10.79 as Europe, and 10.80 to 10.95 as Australia.
That is a total of 16 /16 network ranges for each region.
Now, we have a total of three accounts right now, general, prod, and dev, and let's add one more buffer, so that's four total accounts.
So if we break down those regional ranges into four, one for each account, then each account in each region gets four /16 ranges, enough for four VPCs per region per account.
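As an illustration only (the region and account labels are my own shorthand, not AWS identifiers), the allocation described above can be generated like this:

```python
import ipaddress

# Five assumed regions and four accounts, as per the plan above.
regions = ["us1", "us2", "us3", "europe", "australia"]
accounts = ["general", "prod", "dev", "reserved"]

plan = {}
octet = 16  # the plan starts at 10.16
for region in regions:
    plan[region] = {}
    for account in accounts:
        # Each account in each region gets four consecutive /16 ranges.
        plan[region][account] = [
            ipaddress.ip_network(f"10.{octet + i}.0.0/16") for i in range(4)
        ]
        octet += 4

print(plan["us1"]["general"][0])          # 10.16.0.0/16
print(plan["australia"]["reserved"][-1])  # 10.95.0.0/16
```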
So I've created this PDF, attached it to this lesson, and included it in this lesson's folder on the course GitHub repository.
So if you go into VPC-basics, in there is a folder called VPC-Sizing and Structure, and in this folder is a document called A4L_IPPlan.pdf, A4L standing for Animals for Life, and this is that document.
So I've just tried to document here exactly what we've done with these different ranges.
So starting at the top here, we've blocked off all these networks, these are common ranges to avoid, and we're starting at 10.16 for Animals for Life, and then starting at 10.16, I've blocked off 16 /16 networks for each region.
So US Region 1, Region 2, Region 3, Europe, and Australia, and then we're left with some ranges that are unused, and they're reserved.
After that, from 10.128 onwards, that's reserved for the potential Google Cloud usage, which we're uncertain about.
So all the way to the end, that's blocked off.
And then within each region, we've got the three AWS accounts that we know about, general, prod, and dev, and then one set reserved for future use.
So in the region, each of those accounts has four Class B networks, enough for four non-overlapping VPCs.
So feel free to look through this document; I've included the PDF and the original source document, so feel free to use this, adjust it for your network, and experiment with some IP planning.
But this is the type of document that I'll be using as a starting point for any large AWS deployments.
I'm going to be using this throughout this course to plan the IP address ranges whenever we're creating a VPC.
We obviously won't be using all of them, but we will be using this as a foundation.
Now based on that plan, that means we have a /16 range to use for each VPC in each account in each region, and these are non-overlapping.
Now I'm going to be using the VPC structure that I've demonstrated earlier in this lesson, so we'll be assuming the usage of three availability zones plus a spare, and three application tiers plus a spare.
And this means that each VPC is broken down into a total of 16 subnets, and each of those subnets is a /20 subnet, which represents 4,091 usable IP addresses per subnet.
Now this might seem excessive, but we have to assume the highest possible growth potential for animals for life.
We've got the potential growth of the business, we've got the current situation with the environment, and the raising profile of animal welfare globally, so there is a potential that this business could grow rapidly.
This process might seem vague and abstract, but it's something that you'll need to do every time you create a well-designed environment in AWS.
You'll consider the business needs, you'll avoid the ranges that you can't use, you'll allocate the remainder based on your business's physical or logical layout, and then you'll decide upon and create the VPC and subnet structure from there.
You'll always work either top-down or bottom-up.
You can start with the minimum subnet size that you need and work up, or start with the business requirements and work down.
When we start creating VPCs and services from now on in the course, we will be using this structure, and so I will be referring back to this lesson and that PDF document constantly, so you might want to save it somewhere safe or print it out, make sure you've got a copy handy because we will be referring back to it constantly as we're deciding upon our network topology throughout the course.
With that being said, though, that's everything I wanted to cover in this lesson.
I hope it's been useful; I know it's been a little bit abstract, but I wanted to step you through the process that a real-world solutions architect would use when deciding on the size of VPCs and subnets, as well as the structure these network components have in relation to each other in the IP plan.
But at this point, that is it with the abstract theory.
From this point onward in this section of the course, we're going to start talking about the technical aspects of AWS private networking, starting with VPCs and VPC subnets.
So go ahead, complete this video, and when you're ready, you can move on to the next lesson.
-
Welcome back.
In this lesson, I'm going to cover a topic that many courses don't bother with.
How to design a well-structured and scalable network inside AWS using a VPC.
Now, this lesson isn't about the technical side of VPC.
It's about how to design an IP plan for a business, which includes how to design an individual network within that plan, which when running in AWS means designing a VPC.
So let's get started and take a look because this is really important to understand, especially if you're looking to design real-world solutions or if you're looking to identify any problems or performance issues in exam questions.
Now, during this section of the course, you'll be learning about and creating a custom VPC, a private network inside AWS.
When creating a VPC, one of the first things you'll need to decide on is the IP range that the VPC will use, the VPC SIDA.
You can add more than one, but if you take architecture seriously, you need to know what range the VPC will use in advance.
Even if that range is made up of multiple smaller ranges, you need to think about this stuff in advance.
Deciding on an IP plan and VPC structure in advance is one of the most critically important things you will do as a solutions architect, because it's not easy to change later and it will cause you a world of pain if you don't get it right.
Now, when you start this design process, there are a few things that you need to keep in mind.
First, what size should the VPC be?
This influences how many things, how many services can fit into that VPC.
Each service has one or more IPs and they occupy the space inside a VPC.
Secondly, you need to consider all the networks that you'll use or that you'll need to interact with.
In the previous lesson, I mentioned that overlapping or duplicate ranges would make network communication difficult, so choosing wisely at this stage is essential.
Be mindful about ranges that other VPCs use, ranges which are utilized in other cloud environments, on other on-premises networks, and even partners and vendors.
Try to avoid ranges which other parties use which you might need to interact with and be cautious.
If in doubt, assume the worst.
You should also aim to predict what could happen in the future.
What the situation is now is important, but we all know that things change, so consider what things could be like in the future.
You also need to consider the structure of the VPC.
For a given IP range that we allocate to a VPC, it will need to be broken down further.
Every IT network will have tiers, Web tier, Application tier and Database tier are three common examples, but there are more, and these will depend on your exact IT architecture.
Tiers are things which separate application components and allow different security to be applied, for example.
Modern IT systems also have different resiliency zones, known as Availability zones in AWS.
Networks are often split, and parts of that network are assigned to each of these zones.
These are my starting points for any systems design.
As you can see, it goes beyond the technical considerations, and rightfully so, a good solid infrastructure platform is just as much about a good design as it is about a good technical implementation.
So since this course is structured around a scenario, what do we know about the Animals for Life organization so far?
We know that the organization has three major offices, London, New York and Seattle.
That will be three IP address ranges which we know are required for our global network.
We don't know what those networks are yet, but as Solutions Architects we can find out by talking to the IT staff of the business.
We know that the organization have field workers who are distributed globally, and so they'll consume services from a range of locations.
But how will they connect to the business?
Will they access services via web apps?
Will they connect to the business networks using a virtual private network or VPN?
We don't know, but again, we can ask the question to get this information.
What we do know is that the business has three networks which already exist.
192.168.10.0/24 which is the business's on-premise network in Brisbane.
10.0.0.0/16 which is the network used by an existing AWS pilot.
And finally, 172.31.0.0/16 which is used in an existing Azure pilot.
These are all ranges our new AWS network design cannot use and also cannot overlap with.
We might need to access data in these networks, we might need to migrate data from these networks, or in the case of the on-premises network it will need to access our new AWS deployment.
So we have to avoid these three ranges.
And this information that we have here is our starting point, but we can obtain more by asking the business.
Based on what we already know, we have to avoid 192.168.10.0/24, we have to avoid 10.0.0.0/16, and we have to avoid 172.31.0.0/16.
These are confirmed networks that are already in use.
And let's also assume that we've contacted the business and identified that the other on-premises networks which are in use by the business, 192.168.15.0/24 is used by the London office, 192.168.20.0/24 is used by the New York office, and 192.168.25.0/24 is used by the Seattle office.
We've also received some disturbing news.
The vendor who previously helped Animals for Life with their Google Cloud proof of concept cannot confirm which networks are in use in Google Cloud, but what they have told us is that the default range is 10.128.0.0/9, and this is a huge amount of IP address space.
It starts at 10.128.0.0 and runs all the way through to 10.255.255.255, and so we can't use any of that if we're trying to be safe, which we are.
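That /9 really is half of the 10 space, which a couple of lines of Python confirm:

```python
import ipaddress

google_default = ipaddress.ip_network("10.128.0.0/9")

print(google_default.num_addresses)  # 8388608 (2^23 addresses)
print(google_default[0])             # 10.128.0.0
print(google_default[-1])            # 10.255.255.255
```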
So this list would be my starting point.
When I'm designing an IP addressing plan for this business, I would not use any of this IP address space.
Now I want you to take a moment, pause the video if needed, and make sure you understand why each of these ranges can't be used.
Start trying to become familiar with how the network address and the prefix map onto the range of addresses that the network uses.
You know that the IP address represents the start of that range.
Can you start to see how the prefix helps you understand the end of that range?
Now with the bottom example for Google, remember that a /8 is one fixed value for the first octet of the IP, and then anything else.
Google's default uses /9, which is half of that.
So it starts at 10.128 and uses the remainder of that 10.space, so 10.128 through to 10.255.
And also an interesting fact: the Azure network is using the same IP address range that the AWS default VPC uses.
So 172.31.0.0, and that means that we can't use the default VPC for anything production, which is fine because as I talked about earlier in the course, as architects, where possible, we avoid using the default VPC.
So at this point, if this was a production process, if we were really designing this for a real organization, we'd be starting to get a picture of what to avoid.
So now it's time to focus on what to pick.
Now there is a limit on VPC sizing in AWS.
A VPC can be at the smallest /28 network, so that's 16 IP addresses in total.
And at most, it can be a /16 network, which is just over 65,000 IP addresses.
Now I do have a personal preference, which is to use networks in the 10 range, so 10.x.y.z.
And given the maximum VPC size, this means that each of these /16 networks in this range would be 10.1, 10.2, 10.3, all the way through to 10.255.
I also find it important to avoid common ranges.
In my experience, this is logically 10.0 because everybody uses that as a default, and 10.1 because as human beings, everybody picks that one to avoid 10.0.
I'd also avoid anything up to and including 10.10 to be safe, and just because I like base 2 numbers, I would suggest a starting point of 10.16.
With this starting point in mind, we need to start thinking about the IP plan for the animals for life business.
We need to consider the number of networks that the business will need, because we'll allocate these networks starting from this 10.16 range.
Now the way I normally determine how many ranges a business requires is I like to start thinking about how many AWS regions the business will operate in.
Be cautious here and think of the highest possible number of regions that a business could ever operate in, and then add a few as a buffer.
At this point, we're going to be pre-allocating things in our IP plan, so caution is the term of the day.
I suggest ensuring that you have at least two ranges which can be used in each region in each AWS account that your business uses.
For animals for life, we really don't yet know how many regions the business will be operating in, but we can make an educated guess and then add some buffer to protect us against any growth.
Let's assume that the maximum number of regions the business will use is three regions in the US, one in Europe, and one in Australia.
That's a total of five regions.
We want to have two ranges in each region, so that's a total of five times two, so 10 ranges.
And we also need to make sure that we've got enough for all of our AWS accounts, so I'm going to assume four AWS accounts.
That's a total of two IP ranges in each of five regions, so that's 10, and then that in each of four accounts, so ideally a total of 40 IP ranges.
So to summarise where we are, we're going to use the 10 range.
We're going to avoid 10.0 to 10.10 because they're far too common.
We're going to start at 10.16 because that's a nice, clean base 2 number, and we can't use 10.128 through to 10.255 because potentially that's used by Google Cloud.
So that gives us a range of possibilities from 10.16 to 10.127 inclusive, which we can use to create our networks.
And that's plenty.
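A quick sanity check on that arithmetic:

```python
# Usable /16 ranges: second octets 16 through 127 inclusive.
available = len(range(16, 128))

# Two ranges per region, five regions, four accounts.
needed = 2 * 5 * 4

print(available, needed)  # 112 40
```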
Okay, so this is the end of part one of this lesson.
It's getting a little bit on the long side, and so I wanted to add a break.
So that's it for part one. Go ahead and complete this video, and when you're ready, join me in part two, which will continue immediately from where this one ends.
-
Welcome to this lesson where I'm going to be talking about S3 access points, which is a feature of S3, which improves the manageability of S3 buckets, especially when you have buckets which are used by many different teams or users, or when buckets store objects with a wide range of functions.
Now we have a good amount to cover, so let's just jump in and get started.
S3 access points simplify the process of managing access to S3 buckets and objects.
I want you to imagine an S3 bucket with billions of objects using many different prefixes.
Imagine this bucket is accessed by hundreds of different teams within the business.
Now by default you would have one bucket with one really complex bucket policy.
It would be hard to manage and prone to errors.
Access points allow you to conceptually split this.
You can create many access points for a bucket and each of these can have different policies, so different access controls from a permissions perspective.
But also, each access point can be limited in terms of where they can be accessed from.
So for example, a VPC or the internet.
Every access point has its own endpoint address and these can be given to different teams.
So rather than using the default endpoint for S3 and accessing the bucket as a whole, users can use a specifically created access point, along with that specific endpoint address, and get access to part of that bucket or the whole bucket, but with certain restrictions.
Now you can create access points using either the console UI, or you can use the CLI or API using create-access-point.
And it's important that you remember this command.
Please try and remember create-access-point.
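As a sketch of what an access point gives you, the ARN and endpoint address follow predictable formats. The account ID, region and access point name below are hypothetical, chosen purely for illustration (the CLI equivalent would be along the lines of `aws s3control create-access-point --account-id ... --name ... --bucket ...`):

```python
def access_point_arn(region: str, account_id: str, name: str) -> str:
    """ARN format for an S3 access point."""
    return f"arn:aws:s3:{region}:{account_id}:accesspoint/{name}"

def access_point_endpoint(name: str, account_id: str, region: str) -> str:
    """Endpoint address a team uses instead of the bucket's default endpoint."""
    return f"{name}-{account_id}.s3-accesspoint.{region}.amazonaws.com"

# Hypothetical values for illustration only.
print(access_point_arn("us-east-1", "123456789012", "vet-staff"))
print(access_point_endpoint("vet-staff", "123456789012", "us-east-1"))
```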
Now it's going to be easier for you to understand this architecture if we look at it visually.
Let's explore how everything fits together using an architecture diagram.
And we're going to use a typical example.
Let's say that we have an S3 bucket, and this bucket stores sensitive health information for the Animals for Life organization.
Now this includes health information about the animals, but also the staff working for the business, such as medical conditions and any vaccination status.
Now we have three different sets of staff.
We have admin staff, field workers and vet staff.
The admin staff look after the organization's employees.
The field workers actually visit other countries doing wildlife studies and helping animals.
And then the vet staff look after the medical needs of any animals which the business takes care of.
Now in this example, the business is also using a VPC, with some instances and other resources performing data analysis functions.
If we weren't able to use access points, then we'd have to manage the bucket as a monolithic entity, managing a large and complex bucket policy to control access for identities within this account and potentially other accounts.
And this can become unwieldy very quickly.
One option that we have is to use access points.
And you can essentially think of these as mini-buckets, or views on the bucket.
So we might have three access points for our three types of users and then one for the VPC.
Now each of these access points will have a unique DNS address for accessing it and it's this DNS address that we would give to our staff.
But more than this, each access point also has its own policy.
And you can think about this as functionally equivalent to a bucket policy, in that it controls access to the objects in a bucket when using that access point.
So this can control access to certain objects, prefixes or certain tags on objects.
And so it's super powerful.
So now we have a unique DNS name and a unique policy.
Essentially we have mini-buckets which are independently controlled.
And this makes it much easier to manage the different use cases for the main bucket and our staff can access it via the access points.
From the VPC side, access points can be set to only allow a VPC origin, which means that the access point is tied to a specific VPC.
This will need a VPC endpoint in the VPC and the two can be tied together so that the S3 endpoint in the VPC can be configured to only allow access via the S3 access point.
Now one really crucial thing to understand permissions-wise is that any permissions defined on an access point need to be also defined on the bucket policy.
So in this example, if the staff were granted access via an access point policy, then the same access would also need to be granted via the bucket policy.
Now you can use delegation, where on the bucket policy you grant wide-open access via the access point, so as long as the access point is used, any action on that bucket's objects is allowed.
And then you define more granular control over access to objects in that bucket using the access point policies.
And that's a pretty common permissions architecture to make things simpler to manage.
Now I've included a link attached to this lesson with more details on permissions delegation together with some example access point policies.
You won't need this for the exam, but if you do want to do a bit of extra reading, do make sure you check out the links included with this lesson.
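To illustrate the delegation pattern, here's a sketch of a bucket policy that grants wide-open access on the condition that requests arrive via an access point owned by the account, leaving the fine-grained control to the access point policies. The account ID and bucket name are hypothetical; the `s3:DataAccessPointAccount` condition key is the documented way to express this:

```python
import json

# Hypothetical account ID and bucket name, for illustration only.
account_id = "123456789012"
bucket = "animals4life-data"

# Bucket policy delegating access control to access points in this account.
delegation_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": "*",
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        "Condition": {"StringEquals": {"s3:DataAccessPointAccount": account_id}},
    }],
}
print(json.dumps(delegation_policy, indent=2))
```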
Now at this point, that's everything I wanted to cover.
You just need to have an overview of how this works.
At this point, that's everything.
So thanks for watching.
Go ahead and complete this video.
And when you're ready, I look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this lesson I want to quickly cover S3 access logs which is a feature available within S3 which you need to be aware of.
Now it's pretty simple to understand and so we're going to cover it pretty quickly.
So let's jump in and get started.
The concept of access logging is simple enough.
We have in this example a source bucket and a target bucket.
Now the source bucket is what we want to gain visibility of.
So we want to understand what types of accesses are occurring to this source bucket.
And the target bucket is where we want the logging to go.
To use the feature we have to enable logging on the source bucket and this can be done either using the console UI or using a put bucket logging operation using the CLI or the API.
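As a sketch, the logging configuration set on the source bucket looks like the structure below. The bucket names are hypothetical; with boto3 this dictionary would be passed as the `BucketLoggingStatus` parameter of `put_bucket_logging` against the source bucket:

```python
import json

# Hypothetical bucket names, for illustration only.
logging_status = {
    "LoggingEnabled": {
        "TargetBucket": "my-log-target-bucket",  # where log files are delivered
        "TargetPrefix": "logs/source-bucket/",   # separates sources sharing a target
    }
}
# e.g. s3.put_bucket_logging(Bucket="my-source-bucket",
#                            BucketLoggingStatus=logging_status)
print(json.dumps(logging_status))
```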
Now logging is managed by a system known as the S3 log delivery group which reads the logging configuration which you set on the source bucket.
It's important at this point to understand that this is a best efforts process.
Either enabling this feature or making changes in configuration can take a few hours to take effect.
To use this feature you also need to give the log delivery group access to the target bucket and this is done using an ACL on the target bucket.
You add the S3 log delivery group, giving it write access, and this is how it can deliver the logs to the target bucket.
Now logs are delivered as log files, and each file consists of a number of log records, and these are newline-delimited.
Each record consists of attributes such as date and time, the requester, the operation, status codes, error codes and much more.
If you've seen an Apache log file then these are very similar.
Now each attribute within a record is space-delimited, so the records within a file are newline-delimited and the attributes within a record are space-delimited.
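To make the delimiting concrete, here's a tiny sketch that parses some made-up, heavily simplified records. Real access log records contain many more attributes (bucket owner, request ID, bytes sent and so on), so treat this purely as an illustration of the newline and space delimiting:

```python
# Simplified, invented records; real records have many more attributes.
log_file = (
    "2025-01-01 12:00:00 alice REST.GET.OBJECT photos/cat.jpg 200\n"
    "2025-01-01 12:00:05 bob REST.PUT.OBJECT photos/dog.jpg 200\n"
)

# Records are newline-delimited; attributes within a record are space-delimited.
records = [line.split(" ") for line in log_file.strip().split("\n")]
for date, time, requester, operation, key, status in records:
    print(requester, operation, key, status)
```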
Now a single target bucket can be used for many source buckets and you can separate these easily using prefixes in the target bucket and this is something that's configured within the logging configuration that's set on the source bucket.
Access logging provides detailed information about the requests which are made to a source bucket and they're useful for many different applications, most commonly security functions and any access audits and it can also help you to understand the access patterns of your customer base and understand any charges on your S3 bill.
If you do use this feature then you need to personally manage the lifecycle or deletion of any of the log files.
This is not built into the product so you need to manage either the movement of these log files to different storage classes or deleting them after a certain amount of time.
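As a sketch of that lifecycle management, assuming hypothetical bucket and prefix names, a rule like the one below could transition log files to Glacier after 30 days and delete them after a year. This is the shape accepted by `put_bucket_lifecycle_configuration`, applied to the target bucket:

```python
import json

# Hypothetical rule, for illustration only.
lifecycle_config = {
    "Rules": [{
        "ID": "manage-access-logs",
        "Filter": {"Prefix": "logs/"},       # only affect delivered log files
        "Status": "Enabled",
        "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": 365},         # delete after a year
    }]
}
print(json.dumps(lifecycle_config))
```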
Now that's it for the architecture.
If you're doing a course where you do need practical experience then I'll be following this up with a demo.
If not then this theory is all that you'll need.
At this point, thanks for watching.
Go ahead and complete this video and when you're ready I look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome to this lesson where I want to step through the event notification feature of S3.
This is a feature which allows you to create event notification configurations on a bucket.
So let's jump in and get started exploring the architecture of this feature and exactly how we can use it.
The feature is pretty simple to understand.
When enabled, a notification is generated when a certain thing occurs within a bucket.
And these can be delivered to different destinations, including SNS topics, SQS queues or Lambda functions.
And this means that you can have event-driven processes which occur as a result of things happening within S3.
Now, there are different types of events supported.
For example, you can generate event notifications when objects are created, which means put, post, copy and when multi-part upload operations complete.
Maybe you want to do something like take an image and add a watermark or do something crazy like generating a retro pixel art version of any images uploaded to a bucket.
You can create an event notification configuration, set it so that it triggers whenever an object is created, send this to some destination to process the image, and then you have an event-driven automated workflow.
You can also set event notifications to trigger on object deletion, and you can match any type of deletion using the * wildcard.
You can match delete operations or when delete markers are created.
And you might use this if you want an automated security system to react if any objects are removed from any of your S3 buckets.
You can also have it trigger for object restores, so if you have objects in S3 Glacier or Glacier Deep Archive and you perform a restore operation, you can be notified when it starts and completes.
And this can be useful if you want to notify customers or staff when a restore begins and ends.
Then finally, you can get notifications relating to replication.
So if operations miss the 15-minute threshold, if they're replicated after the threshold, when an object is no longer tracked, and even if an object fails replication, all of those things can generate an event notification using this product.
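The event types above are matched in a notification configuration, which can be sketched as below. The queue and function ARNs are hypothetical, and this is the structure accepted by `put_bucket_notification_configuration`; the `s3:`-prefixed event names are the real identifiers:

```python
import json

# Hypothetical ARNs, for illustration only.
notification_config = {
    "QueueConfigurations": [{
        "Id": "react-to-deletes",
        "QueueArn": "arn:aws:sqs:us-east-1:123456789012:object-events",
        "Events": ["s3:ObjectRemoved:*"],   # any type of delete
    }],
    "LambdaFunctionConfigurations": [{
        "Id": "watermark-new-images",
        "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:watermark",
        "Events": ["s3:ObjectCreated:*"],   # put, post, copy, multipart complete
    }],
}
print(json.dumps(notification_config))
```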
Now, visually, it looks like this.
We start with an S3 bucket and define an event notification configuration.
The configuration matches events which we want to be notified of.
Over time, these events occur which, as I just talked about, include create events, delete events, restore events and replicate events.
And when these things occur, events are generated and sent to the destinations which you configure.
And at the time of creating this lesson, that's Lambda, SQS queues and SNS topics.
These events interact with those services.
Events are generated by the S3 service, known as the S3 principal.
And so we need to also add resource policies onto each of those destination services allowing the S3 service to interact with them.
This could be in the form of an SQS queue policy or a Lambda resource policy.
And at the time of creating this lesson, the only way to modify Lambda resource policies is to use the CLI or the API.
Although this might change over time.
The events themselves are actually JSON objects.
And this is a cut-down version just to give you an idea of what they look like.
Now, there are bits missing, I know, but you'll get the general idea.
If you use Lambda, for example, this is received within the event structure and you'll need to parse that, extract the information that you need and then act on it.
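As an illustration of what that parsing might look like, here's a minimal sketch of a Lambda handler extracting the bucket name and object key from each record of an S3 event. The nesting shown matches the standard S3 notification shape, but the sample values are made up:

```python
def handler(event, context):
    """Extract (bucket, key) pairs from an S3 event notification."""
    results = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        results.append((s3["bucket"]["name"], s3["object"]["key"]))
    return results

# Cut-down sample event for local testing.
sample_event = {"Records": [{"s3": {"bucket": {"name": "my-bucket"},
                                    "object": {"key": "photos/cat.jpg"}}}]}
print(handler(sample_event, None))  # [('my-bucket', 'photos/cat.jpg')]
```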
Now, S3 event notifications are actually a relatively old feature and support only a limited set of things occurring on objects in S3 buckets and can only interact with a limited number of AWS services.
You can also use EventBridge, which supports more types of events and can integrate with a wider range of AWS services.
As always with AWS, you have a few different ways of doing the same thing.
Now, as a default, I would tend to lean towards using EventBridge unless you have a specific reason not to do so.
But I did want to make sure that you understand the event notification feature of S3.
Now, at this point, that's everything I wanted to cover.
So go ahead and complete this video.
And when you're ready, I look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back, and in this lesson I want to cover a really important feature of S3 and Glacier that you need to be aware of for the exam.
Now you don't need to understand the implementation just the architecture.
S3 Select and Glacier Select are ways that you can retrieve parts of objects rather than the entire object.
Now I expect that this will feature only in a minor way in the exam, and it will be focused on architecture and features, but I do want you to be aware of exactly how this feature works.
So let's jump in and step through this architecture and what benefits it provides.
Now you know by now that both S3 and Glacier are super scalable services.
You can use them both to store huge quantities of data.
S3 for example can store objects up to five terabytes in size and can store an infinite number of those objects.
Now often when you're interacting with objects inside S3 you intentionally want to interact with that full object so you might want to retrieve that full five terabyte object.
What's critical to understand as a solutions architect is that logically if you retrieve a five terabyte object then it takes time and it consumes that full five terabytes of transfer.
So if you're downloading a five terabyte object from S3 into your application then you consume five terabytes of data you're accessing that full object and it takes time.
Now you can filter this on the client side but this occurs after the figurative damage is done.
You've already consumed that capacity, you've already downloaded that data, filtering it at the client side just means throwing away the data that you don't need.
S3 and Glacier provide features which allow you to access partial objects, and that's what's provided by S3 Select and Glacier Select. The way that you do this is that both allow you to create a SQL-like statement, so a cut-down SQL statement.
So you create this, you supply it to that service and then the service uses this SQL like statement to select part of that object and this part and only this part is sent to the client in a pre-filtered way so you only consume the pre-filtered part of that object, the part that you select so it's faster and it's cheaper.
Now both S3 Select and Glacier Select allow you to operate on a number of file formats with this level of functionality. Examples include comma-separated values (CSV) and JSON, and you can even use BZIP2 compression with CSV and JSON, so it's a really flexible feature.
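To make the idea concrete, here's a local simulation of what the filtering achieves. The SQL-like expression shown uses real S3 Select syntax, but the filtering below runs locally purely for illustration; the real service applies the expression inside S3 itself, via the SelectObjectContent API, and the CSV content here is invented:

```python
import csv
import io

# An S3 Select expression looks like this (SELECT ... FROM S3Object ...):
expression = "SELECT * FROM S3Object s WHERE s.species = 'cat'"

# Hypothetical CSV object content.
data = "name,species\nWhiskers,cat\nRex,dog\nFelix,cat\n"

# Local stand-in for the server-side filter: only matching rows "transfer".
rows = [r for r in csv.DictReader(io.StringIO(data)) if r["species"] == "cat"]
print(len(rows), "of 3 rows transferred")  # 2 of 3 rows transferred
```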
Now visually this is how it looks.
Now at the top we have the architecture without using S3 Select or Glacier Select, and at the bottom we have the architecture when we utilize these services, so in both cases we have an application which stores its data on S3.
So when an application interacts with S3 to retrieve any data the entire object or stream of objects are delivered to the application so the application receives everything.
It can either accept it as a whole or it can perform its own filtering but the critical thing to understand is that any filtering performed by the application is performed inside the application.
This doesn't reduce the cost or improve the performance.
The only data which is filtered out is simply discarded but it's still billed for and it still takes time.
Now contrast this to using S3 select with the same architecture so we still have the same application that interacts with the same S3 bucket.
It places the filter point inside the S3 service itself and this allows us to use a SQL-like expression and provide this to the S3 select service.
The S3 select service can then use the SQL-like expression and it can apply this to the raw data in S3 so in effect it's taking the raw data filtering it down but this is occurring inside the S3 service so the data that the application receives is the pre-filtered data and this means that we can achieve faster speeds and a significant reduction in cost.
Since the filtering occurs before it's transferred to our application, it means that we get substantial benefits both in speed and in cost, and this is because the S3 service doesn't have to load all of the data and deliver it to the application.
We're applying this filter at the source, the source of the data which is the S3 service itself.
Now this is a feature which our applications will need to explicitly use but as a solutions architect it's a powerful feature that you need to understand to improve the levels of performance in any systems that you design.
Now at this point that's everything I wanted to cover. I've kept it brief because it's a feature that I only expect to appear in a very minor way in the exam, but I do want you to be aware of its existence.
So thanks for watching, go ahead, complete this video and when you're ready I look forward to you joining me in the next lesson.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back.
And in this lesson, I want to talk about S3 Object Lock.
Now, this is something which is really important to understand for the SysOps certification, but equally, if you're developing applications running in AWS or architecting solutions for AWS, you also need to have an awareness.
Now, we've got a fair amount to cover, so let's jump in and get started.
S3 Object Lock is actually a group of related features which I'll talk about in this lesson, but it's something that you enable on new S3 buckets.
If you want to turn on Object Lock for existing buckets, then you need to contact AWS support.
And this is probably going to change over time, but at the time of creating this lesson, this is the current situation.
Now, when you create a bucket with Object Lock enabled, versioning is also enabled on that bucket.
Once you create a bucket with Object Lock and enable it, you cannot disable Object Lock or suspend versioning on that bucket.
Object Lock implements a write-once, read-many architecture, and this is known as WORM.
It means that you can set it so that Object Versions once created can't be overwritten or deleted.
And just to reiterate, since this is a pretty important point, the feature requires versioning to be enabled on a bucket.
And because of this, it's actually individual Object Versions which are locked.
Now, when we talk about Object Lock, there are actually two ways it manages Object Retention.
Retention periods and legal holds.
An Object Version can have both of these, one or the other or none of them.
And it's really important for the exam and for the real world to think of these as two different things.
So, S3 Object Lock Retention and S3 Object Lock Legal Hold.
Now, just like with bucket default encryption settings, these can be defined on individual Object Versions, or you can define bucket defaults for all of the Object Lock features.
Now, this is just the feature at the high level.
Next, I want to quickly step through the key points of the different retention methods.
So, Retention Period and Legal Hold.
And we're going to start with Retention Period.
With the Retention Period style of object locking, when you create the Object Lock, you specify a Retention Period in days and/or years.
One year means the Retention Period will end one year from when it's applied.
Now, there are two modes of Retention Period Lock which you can apply.
And it's really, really important that you understand how these work and the differences between the two.
One, because it matters for the exam, and two, because if you get it wrong, it will cause a world of pain.
The first mode is Compliance Mode.
If you set a Retention Period on an object using Compliance Mode, it means that an Object Version cannot be deleted or overwritten for the duration of the Retention Period.
But, it also means that the Retention Period itself cannot be reduced and the Retention Mode cannot be adjusted during the Retention Period.
So, no changes at all to the Object Version or Retention Period settings.
And this even includes the account root user.
So, no identity in the account can make any changes to Object Versions, delete Object Versions, or change the Retention Settings until the Retention Period expires.
So, this is serious business.
This is the most strict form of Object Lock.
Don't set this unless you really want that Object Version to stay around in its current form until the Retention Period expires.
Now, you've used this mode as the name suggests for compliance reasons.
An example of this might be medical or financial data.
If you have compliance laws stating that you have to keep data, for example, for three years, with no exceptions, then this is the mode that you set.
Now, a less strict version of this is Governance Mode.
With this mode, you still set a Retention Period, and while active, the Object Version cannot be deleted or changed in any way.
But you can grant special permissions to allow these to be changed.
So, if you want a specific group of identities to be able to change settings and Object Versions, then you can provide them with the s3:BypassGovernanceRetention permission.
And as long as they have that permission and they provide a header along with their request, which is x-amz-bypass-governance-retention: true, then they can override the Governance Mode of Retention.
Now, an important point to understand is that this last header, x-amz-bypass-governance-retention, is actually provided by default by the console UI.
And so, using the console UI, as long as you have the s3:BypassGovernanceRetention permission, you will be able to make changes to Governance Mode Retention Locks.
So, Governance Mode is useful for a few things.
One, if you want to prevent accidental deletion.
Two, if you have process reasons or governance reasons to keep Object Versions.
Or lastly, you might use it as a test of settings before picking the Compliance Mode.
So, that's Governance Mode.
These are both modes that can be used when using the Retention Period feature of S3 Object Locking.
So, please make sure you understand how they both work and the differences between the two before we finish with this lesson.
It's really, really critical that you understand.
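The difference between the two retention modes can be sketched as a small decision function. This is a simplified mental model for illustration, not the actual S3 authorization logic:

```python
def can_delete(mode: str, retention_active: bool,
               has_bypass_permission: bool, sent_bypass_header: bool) -> bool:
    """Simplified model: can a locked object version be deleted right now?"""
    if not retention_active:
        return True   # retention period expired: normal permissions apply
    if mode == "COMPLIANCE":
        return False  # nobody can delete, not even the account root user
    if mode == "GOVERNANCE":
        # Requires the s3:BypassGovernanceRetention permission AND the
        # x-amz-bypass-governance-retention: true header on the request.
        return has_bypass_permission and sent_bypass_header
    return True       # no lock mode set

print(can_delete("COMPLIANCE", True, True, True))   # False
print(can_delete("GOVERNANCE", True, True, True))   # True
print(can_delete("GOVERNANCE", True, True, False))  # False
```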
Now, the last overview that I want to give is S3 Object Lock Legal Hold.
With this type, you don't actually set a Retention Period at all.
Instead, for an Object Version, you set Legal Hold to be on or off.
To repeat, there's no concept of retention.
This is a binary.
It's either on or off.
While Legal Hold is enabled on an Object Version, you can't delete or overwrite that specific Object Version.
An extra permission is required, which is s3:PutObjectLegalHold.
And this is required if you want to add or remove the Legal Hold feature.
And this type of Object Locking can be used to prevent accidental deletions of Object Versions, or for actual legal situations when you need to flag specific Object Versions as critical for a given case or a project.
Now, at this point, let's take a moment to summarise and look visually at how all these different features work.
Let's start with Legal Hold.
We start with a normal object and we upload it to a bucket, setting the Legal Hold status to on.
And this means that the Object Version is locked until the Legal Hold is removed.
In this state, the Object Version can't be deleted or changed, but you can set the Legal Hold Status to Off, at which point normal permissions apply and the Object Version can be deleted or replaced as required.
It's binary: it's either on or off, and there's no concept of a Retention Period.
Next, we have the S3 Object Locks that use the Retention Period architecture.
First, we have Governance, so we put an Object into a bucket with a Lock Configuration of Governance and specify a Retention Period.
This creates a locked Object Version for a given number of days or years, and while it's in this state, it cannot be deleted or updated.
With Governance Mode, this can be bypassed if you have the permissions and specify the correct header.
And once again, this header is the default in the console, so you can adjust or remove the Object Lock or delete or replace the Object Version.
So the important thing to realise here is that while an Object is locked for a given Retention Period using Governance Mode, you can't make any changes to Object Versions or delete them, but you can be provided with the s3:BypassGovernanceRetention permission, and as long as you have that and specify the x-amz-bypass-governance-retention: true header, then you can override the Governance Mode Object Lock during the Retention Period.
Then lastly, we have Compliance, which is the same architecture.
We upload an Object.
We specify Compliance Mode together with the Retention Period, and this creates an Object Version which is locked for a certain period in days or years.
The difference though is that this can't be changed.
An Object Version can't be deleted or updated.
The Retention Period cannot be shortened.
The Compliance Mode can't be changed to something else, even by the account root user.
This is permanent.
Only once the Retention Period expires can the Object Version or the Retention Settings be updated.
And for all of these, they can be set on individual Object Versions, or bucket defaults can be defined.
And that's the architecture of S3 Object Lock.
It's critical that you understand this.
If it takes a few watches of this lesson, then that's OK.
Make sure you understand it in detail, including how each type differs from the others.
And remember, they can be used in conjunction with each other, so the effects can overlap.
You might use Legal Hold together with either Governance or Compliance, and if you do, then the effects of this could overlap, so you need to understand all of this in detail.
But at this point, that's everything I wanted to cover in this lesson, so go ahead and complete the video.
And when you're ready, I look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome to this mini project where you're going to get the experience of creating S3 multi-region access points.
Now multi-region access points give you the ability to create a single S3 global endpoint and point this at multiple S3 buckets.
It's an effective way to use a single endpoint to route requests to the closest S3 service.
Now in order to do this mini project you need to be logged in to an AWS account with admin permissions.
If you're using one of my courses then you should use the IAM admin user of the general AWS account which is the management account of the organization.
If you're not using my courses make sure you're using an identity with admin permissions.
You'll also need to select two different AWS regions, because in this mini project we're going to create S3 buckets in two regions.
I'm going to use ap-southeast-2, or the Sydney region, and ca-central-1, or the Canada region.
Now the first thing to do is to move to the S3 console so type S3 in the search box at the top and then open that in a new tab.
Once you're there we're going to create two buckets so first go ahead and click on create bucket.
Now we'll keep the bucket naming consistent, so we'll use multi-region-demo- and then the region that you're in.
So in my case sydney, and then at the end I want you to append a random number.
In my case 1337.
Remember S3 bucket names need to be globally unique and this will ensure both of our buckets are.
Once you've put in the name, make sure you set the region correctly. Everything else can be left as default, apart from bucket versioning, which we need to enable.
So set this box under versioning to enable.
Now scroll to the bottom and click on create bucket.
Then we need to follow that same process again for the second bucket.
So click on create bucket.
Use the same bucket naming, so multi-region-demo- and then the region name, in this case canada.
And make sure you append your random number and set the region.
Then scroll down, enable bucket versioning again and create the bucket.
Now once you've got these two buckets we're going to create the multi-region access point.
So click on multi-region access point on the menu on the left and click create multi-region access point.
For the name you can pick whatever you want it doesn't need to be globally unique only unique within an AWS account.
I'm going to pick really really critical cat data and then scroll down and add the buckets that you've just created.
These can't be added or edited after creation so we need to do it now.
Now we're going to add buckets.
Select the two buckets and then click on add buckets to confirm.
Once you've done that scroll down to the bottom and click create multi-region access point.
Now this process can take worst case up to 24 hours to complete but typically it creates much faster, generally around 10 to 30 minutes.
Now we do need this to be created before we continue.
So go ahead and pause the video, wait for the status on this to change to ready and then you're good to continue.
Okay, so now that we've got this multi-region access point configured and it's ready to go, we need to configure replication between the two buckets, because anyone using this multi-region access point will be directed to the closest S3 bucket and we need to make sure that the data in both matches.
So to do that go ahead and click on the multi-region access point name and go inside there and you'll see that the access point has an Amazon resource name as well as an alias.
Now you should probably note down the Amazon resource name because we might need it later on.
Once you've done that click on the replication and failover tab and you'll be able to see a graphical representation of any replication or failover configuration.
If we click on the replication tab you'll see there's no replication configured.
If we click on the failover tab you can see that we've got these two S3 buckets in different AWS regions configured as an active active failover configuration which means any requests made to this multi-region access point will be delivered to either of these S3 buckets as long as they're available.
Now we can click on one and click on edit routing status and configure it as passive which means it will only be used if no active buckets exist.
But in our case we want it to be active active so we'll leave both of these set to active.
Now we want to configure replication between the buckets so we're going to scroll down to replication rules and click create replication rule.
Now there are two templates available to start with, replicate objects amongst all specified buckets and replicate objects from one or more source buckets to one or more destination buckets.
Now which of these you pick depends on the architecture that you're using but because we have an active active configuration we want all the buckets to be the same.
So we're going to pick the replicate objects among all specified bucket template so this is replicating between every bucket and every other bucket.
Essentially it creates a set of buckets which contain exactly the same data all fronted by a multi-region access point.
So go ahead and make sure this template is selected and then click to select both of the buckets that you created.
In my case Sydney and Canada.
Once we've done that, scroll down. You can set whether you want the status to be enabled or disabled when created, and we're going to choose enabled. You also get to adjust the scope: you can either configure it to replicate objects using one or more filters, or apply it to all objects in the bucket.
Now we want to make sure the entire bucket is replicated so we're going to use apply to all objects in the bucket.
Now you're informed that an IAM role or roles will be generated based on your configuration, and this will provide S3 with the permissions that it needs to replicate objects between the buckets.
Now this is informational we don't need to do anything so let's move on.
Now you're also told what encryption settings are used as well as the destination storage class so because of the template that we picked above we don't get to change the destination storage class and that's okay.
If we scroll down to the bottom we have additional replication options: we have replication time control, which applies an SLA to the replication process; we have replication metrics and notifications to provide additional rich information; and we can choose whether to replicate delete markers and whether to replicate modifications.
Now for this mini project we're not going to use replication time control we don't need that level of SLA.
We are going to make sure that replication metrics and notifications is selected.
We don't want to replicate delete markers and we do want to make sure that replica modifications sync is checked.
So we only want replication metrics and notifications and replica modifications sync.
So make sure that both of those are checked and then click on create replication rule.
Now at this point all the buckets within this multi-region access point are now replicating with each other.
In our case it's only the two, in my case it's Canada and Sydney.
So go ahead and click on close and we can see how this graphical representation has changed showing us that we now have two-way replication in my case between Sydney and Canada.
Now at this point we need to test out the multi-region access point and rather than having you configure your local command line interface we're going to do that with Cloud Shell.
Now what I want you to do is to go ahead and move to a different AWS region so not the same AWS regions that either of your buckets are created in.
What I do want you to do though is make a region close to one of your buckets.
Now I'm going to start off with Sydney and in order to test this I'm going to switch across to the Tokyo region which is relatively close to Sydney, at least from a global perspective.
So I'm going to click on the region drop down at the top and change it from Sydney to Tokyo.
And once I'm there I'm going to click on this icon which starts the Cloud Shell.
If this is the first time you're using it in this region you'll probably get the welcome to AWS Cloud Shell notification.
Just either click on close or check this box and then click on close if you don't want to see this notification again.
Now all these commands that we're going to be running are in the instructions which are attached to this video.
The first thing that we're going to do is to create a test file that we're going to upload to S3.
We're going to do that using the DD command.
So we're going to have an input of /dev/urandom which just gives us a stream of random data.
And then for the output using the OF option we're going to create a file called test1.file.
This is going to have a block size of 1 meg and a count of 10 which means that it's going to create a 10 meg file called test1.file.
So run that command.
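As a concrete sketch, the dd command described above looks like this (the filename, block size and count match what's in the instructions; adjust them if you want a different file):

```shell
# Create a 10 MiB file of random data: 10 blocks of 1 MiB each,
# read from /dev/urandom and written to test1.file
dd if=/dev/urandom of=test1.file bs=1M count=10

# Confirm the size in bytes (10 x 1048576 = 10485760)
wc -c < test1.file
```

Note that `bs=1M` is the GNU dd spelling used on Linux (and so in Cloud Shell); on macOS the equivalent is `bs=1m`.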
Now once you've done that just go back to the tab that you've got open to S3.
Scroll to the top, click on multi-region access points, check the access point that you've created and then just click on copy ARN to copy the ARN for this access point into your clipboard and then go back to Cloud Shell.
Next I'm going to do an LS making sure I just have a file created within Cloud Shell, I do.
And now I'm going to run this command: aws s3 cp test1.file, so that's the local file we created within Cloud Shell, followed by s3:// and then the ARN of the multi-region access point.
Now this command is going to copy the file that we created to this multi-region access point and this multi-region access point is going to direct us towards the closest S3 location that it serves which should be the bucket within the Sydney region.
So go ahead and run that command.
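Put together, the upload command takes the shape below. The ARN shown is a made-up placeholder; substitute the one you copied from the console (the access point name and account ID will differ for you):

```shell
# Placeholder ARN - replace with your own multi-region access point ARN
MRAP_ARN="arn:aws:s3::123456789012:accesspoint/example.mrap"

# The CLI addresses a multi-region access point as s3://<ARN>,
# so the upload command is assembled like this:
UPLOAD_CMD="aws s3 cp test1.file s3://${MRAP_ARN}"
echo "${UPLOAD_CMD}"
```

Running the assembled command (rather than just echoing it) requires live AWS credentials, which Cloud Shell provides automatically.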
It's going to take a few moments to upload but when it does switch back to S3, go to buckets.
In my case I'm going to go to the Sydney bucket and I should see the file created in this bucket.
I do, that's good, so I'm going to go back to buckets and go to Canada and I don't yet see the object created in the Canada bucket and that's because replication can take a few minutes to replicate from the bucket where the object was stored through to the destination bucket.
If we just give this a few moments and keep hitting refresh, after a few moments we should see the same test1.file which has been replicated from the Sydney region through to the Canada region.
Now S3 replication isn't guaranteed to complete in a set time, especially if you haven't configured the replication time control option.
So it's fairly normal to see a small delay between when the object gets written to one bucket and when it's replicated to another.
Now we're going to try this with a different region, so go back to Cloud Shell and then click on the region drop down and we're going to pick a region which is close to our other bucket but not in the same region.
So the other bucket is created in the Canada region, so I'm going to pick a close region.
In this case I'm going to pick US East 2 which is in Ohio.
Once I've done that I'm going to go back to Cloud Shell.
Once I've done that I should be in Cloud Shell in a different AWS region.
So I'll need to recreate the test file.
In this case I'm going to call it test2.file and I'm going to use all the same options.
And again this command is contained in the instructions attached to this video.
So run that command and it will take a few moments to complete and then we're going to follow the same process.
We're going to upload this file to our S3 buckets using the multi-region access point.
So again just make sure you've got the ARN for the access point in your clipboard and then in the Cloud Shell type aws s3 cp test2.file followed by s3:// and then the ARN of the multi-region access point.
Again we're going to press enter, wait for this to upload and then check the buckets.
So run that command and then go back to S3, go to buckets.
I'm going to go to the bucket in Canada first and I'm going to hit refresh and we should see test2.file in this bucket which is good.
Then go to buckets and go to Sydney and we probably won't see that file just yet because it will take a few moments to replicate.
So keep hitting refresh and eventually you should see test2.file arrives in the S3 bucket.
Now this time we're going to run the same test but we're going to pick a region that is relatively in the middle between these two regions where our S3 buckets are created.
In my case I'm going to change the region to ap-south-1 which is the Mumbai region.
And I'm going to follow exactly the same process and move to Cloud Shell.
I'm going to generate a new file so in this case it's test3.file.
Now before we upload this we're going to go to the S3 console, go to buckets and just make sure that we have a tab open for both the Canada and the Sydney bucket.
This test is going to be uploaded from a region that's in the middle of these two buckets and so we need to be able to quickly check which bucket receives the upload directly and which bucket receives the replicated copy.
So once you've got a tab open for both those buckets, go back to Cloud Shell and run this command.
So aws s3 cp test3.file followed by s3:// and then the ARN of the multi-region access point.
Once you've done that go ahead and run that command, it will take a few moments to upload and then move straight to the tabs that you've got open to the S3 buckets and refresh both of them.
If I look at Sydney it looks as though that receives the file straight away.
So this multi-region access point has directed us to Sydney.
If I move to Canada and hit refresh we can see that the file's not yet arrived so this is receiving the replicated copy.
And sure enough, after a few minutes we can see test3.file arrive in this bucket.
Now this time we're going to run a test to show the type of problem that can occur if you're using this type of globally replicated configuration using multi-region access points.
What I want you to do is to open up two different Cloud Shells.
I want one Cloud Shell open in the Canada region, so the region that has one of your buckets, and I want the other Cloud Shell open in the ap-southeast-2 region, so the Sydney region.
So we've got one Cloud Shell open in the same region as each of our buckets.
Now once we've done that in one region, in my case I'm going to use the Canada region, I'm going to generate a file called test4.file.
The command is going to be the same as we've used in previous steps.
Then I'm going to copy the ARN of the multi-region access point into my clipboard and I'm going to type this command again in the Canada Cloud Shell.
So this command is going to copy this file to the multi-region access point.
Now because I'm doing this in the Canada region, it's almost guaranteed that the multi-region access point is going to direct us to the bucket which is also in the Canada region.
And so the bucket in this region is going to get the direct copy of the object and the bucket in the Sydney region is going to receive the replicated copy.
So I'm going to fully type out this command and before I run it I'm going to move it to the Sydney region and type out a slightly different command.
This command is also contained in the instructions attached to this lesson.
This command is aws s3 cp, then s3:// followed by the name of the multi-region access point, then /test4.file, then a space and then a period.
Now this command when we run it which we're not going to do yet is going to copy the test4.file object from the multi-region access point into our Cloud Shell.
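As a sketch, this download runs in the opposite direction from the earlier uploads. Again the ARN is a made-up placeholder; use the one from your own access point:

```shell
# Placeholder ARN - replace with your own multi-region access point ARN
MRAP_ARN="arn:aws:s3::123456789012:accesspoint/example.mrap"

# Copy test4.file FROM the access point TO the current directory (".")
DOWNLOAD_CMD="aws s3 cp s3://${MRAP_ARN}/test4.file ."
echo "${DOWNLOAD_CMD}"
```

The key difference from the upload is the argument order: the s3:// URI comes first as the source, and the local path comes second as the destination.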
Remember we haven't created this object yet.
Now because this Cloud Shell is in the Sydney region, the multi-region access point is almost certainly going to redirect us to the bucket in the Sydney region.
So let's move back to the Canada Cloud Shell, run this command which is probably going to copy this object into the Canada bucket, then move back to the Cloud Shell in the Sydney region and then run this command to copy the object from the multi-region access point into our Cloud Shell and we receive this error.
Now we're getting this error because the object test4.file doesn't exist within the bucket that the multi-region access point is directing us to.
So just to reiterate what we've done, we've created this object in the Canada region using the multi-region access point which is going to have created the object in the closest S3 resource which is the bucket also in the Canada region.
So the Canada region bucket has the direct copy of the object.
This will then take a few minutes to replicate to the Sydney bucket.
Because we're using this reverse copy command in the Sydney region, it's attempting to copy the test4.file object from the multi-region access point to our Cloud Shell, but because this replication won't have occurred yet, and because this is going to direct us at the bucket in the Sydney region, we get this object does not exist error.
And this is one of the issues you can experience when you're using multi-region access points: there's a consistency lag.
You have to wait for the replication to occur before you can retrieve replicated objects from different buckets using the multi-region access point.
Now this process can be improved by using the RTC option when setting up replication but this does come with additional costs.
So this is something to keep in mind when you're using this type of architecture.
And if we keep running this command we'll see that it keeps failing over and over again until the replication process finishes and then we can copy down the object.
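The keep-retrying behaviour can be wrapped in a small helper function. This is a generic sketch; in the lesson you would pass it the real download command (which needs live credentials), so the demonstration here just uses a command that succeeds immediately:

```shell
# Retry a command until it succeeds, reporting how many retries it took.
# In the lesson you would call this with the real download, e.g.
#   retry_until_success aws s3 cp "s3://${MRAP_ARN}/test4.file" .
retry_until_success() {
  local attempts=0
  until "$@"; do
    attempts=$((attempts + 1))
    sleep 1
  done
  echo "succeeded after ${attempts} retries"
}

# Demonstrate with a command that succeeds on the first try
retry_until_success true
# -> succeeded after 0 retries
```

In practice a loop like this is a crude stand-in for replication time control; RTC gives you a 15-minute SLA instead of polling, at extra cost.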
Now that is everything which I wanted to cover in this brief mini project in the multi-region access points.
At this point all that remains is for us to clean up the account and return it to the same state as it was at the start of this mini project.
So to do that we need to move back to the S3 console, go to multi-region access points, select the access point you created for this mini project and click delete.
And then you need to copy and paste the name and click delete to confirm.
Then we need to go ahead and move to buckets.
Select each of the buckets in turn, first click on empty and you need to copy and paste or type permanently delete and then click to confirm that empty process.
Once that's finished click on exit, select the same bucket again, this time click delete, copy and paste or type the bucket name and then click to confirm.
And then do the same process with the other buckets so first empty it, confirm that and then delete it and confirm that.
And then at that point all the billable elements created as part of this mini project have been deleted and we're good to finish off this mini project.
So that's everything I wanted to cover, I hope it's been useful and I hope it's given you some practical experience of how to use multi-region access points together with S3 replication.
At this point that's everything to go ahead and complete this video and I hope you'll join me soon for another exciting mini project.
Welcome back.
In this brief demo lesson, you're going to get some experience working with S3 pre-signed URLs.
Now, as you learned in the theory lesson, a pre-signed URL is a type of URL which can be used to grant access to certain objects within an S3 bucket where the credentials for accessing that object are encoded on the URL.
I want to explain exactly what that means and how you can use it in the real world.
Now, to do that, we're not going to need to create any infrastructure using CloudFormation.
Instead, we're going to do it manually.
So first, I want you to make sure that you're logged in to the general AWS account as the IAM admin user.
And as always, please make sure that you have the Northern Virginia region selected.
Assuming that's all good, go ahead and type S3 in the search box at the top and open that in a new tab.
We're going to create an S3 bucket within the general AWS account.
So go ahead and click on create bucket to begin that process.
Now, I want you to call the bucket animals for life media.
And because of the unique naming requirements of S3 buckets, you'll need to add some randomness onto the end of this name.
We'll need to select us-east-1 for the region.
We can scroll past the bucket settings for block public access.
We don't need to enable bucket versioning.
We won't be using any form of encryption.
Just go ahead and click on create bucket.
Now I want you to go inside the bucket that you've created and click on the upload button to upload an object.
Now, at this point, we need to upload an image file to this bucket.
Any image file will do.
But I've included a sample one attached to this lesson if you don't have one available.
So click the link for the image download.
That will download an image called all5.jpeg.
Once that's downloaded to your local machine, click on add files, select the file, open it, and then go ahead and upload the file to the S3 bucket.
Once that's finished, you can go ahead and click on close.
And you'll see now that we've got our S3 bucket and one object uploaded to that bucket called all5.jpeg.
Next, go ahead and click on that object.
Now, I want to demonstrate how you can interact with this object in a number of different ways.
And the detail really matters here.
So we really need to be sure of exactly what differences there are between these different methods.
The first thing I want you to do is towards the top right of the screen, click on the open button.
You might get a pop-up notification if you do just allow pop-ups.
And what that's going to do is open a jpeg object in a new tab.
Now, I want to point out a number of really important points about how this object has been opened.
If you take a moment to review the URL that's been used to open this object, you'll note that a number of pieces of information have been specified on the URL, including X-Amz-Security-Token.
So essentially, a form of authentication has been provided on the URL which allows you to access this object.
Now, I want you to contrast this by what happens if we go back to the S3 bucket and just copy down the object URL into our clipboard and note how this does not contain any additional authentication information.
It's just the raw URL for this object.
So copy that into your clipboard and then open a new tab and paste that in.
What you're going to see is an access denied message.
And this makes sense because we're now attempting to access this object as an unauthenticated identity, just like any internet user, anyone browsing to this bucket would be doing.
We're not providing any authentication.
And so the only way that we can access the object in this way by not providing any authentication is if we made the bucket public and the bucket currently isn't public, which is why we're getting this access denied message.
Just to reiterate, we can access it by using the previous method because by opening it from within the console, the console is intelligent enough to add authentication information onto the URL which allows us to access this object.
So those are the differences between these two methods.
One is providing authentication and the other isn't.
So right now the only entity that is able to access any of the objects within this bucket is this AWS account, and specifically the IAM admin user.
The object has no public access.
So now what we're going to do is we're going to work with the scenario that you want to grant access to this object to somebody else for a limited amount of time.
So you don't want to provide the URL that includes authentication information, you want to provide a URL which allows access to that object for a limited amount of time.
That's important.
So we're going to generate a time limited pre-sign URL.
So go ahead and click on the Cloud Shell icon and this is going to open a Cloud Shell using the identity that you're currently logged in at.
So it's going to open a shell much like the one you'll see when you're connected to an EC2 instance, but the credentials that you have in the shell are going to be the credentials of the identity that you're currently logged into AWS with, in our case the IAM admin user in the general AWS account.
You'll see a message saying preparing your terminal and then you'll be logged in to what looks like an EC2 instance prompt.
You can use the AWS CLI tools, so I'm able to run aws s3 ls, and this shell will interact with your current AWS account using your current credentials.
So in my case I'm able to see the Animals for Life media bucket which is in my general AWS account, because I'm currently logged into this Cloud Shell using the credentials of my IAM admin user.
Now to generate a pre-sign URL we have to use this command.
So type aws to use the command line tools, then a space, then s3 because we're using the S3 service, then a space, then the word presign because we want to generate a pre-signed URL, then a space, and then we need the S3 URI of this object.
Now we can get that from the S3 console.
For every object you can see this unique URI, so go ahead and copy this into your clipboard, go back to the Cloud Shell, paste it in, and then we'll need a space, then --expires-in, then a space, and then we need to provide the number of seconds that this pre-signed URL will be valid for.
In this case we're going to use 180, which is three minutes expressed in seconds.
So go ahead and press enter and this will generate you a unique pre-signed URL.
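Put together, the presign command looks like the sketch below. The bucket name here is a made-up placeholder; use the S3 URI of your own object:

```shell
# Hypothetical bucket and key - substitute your own object's S3 URI
S3_URI="s3://animals-for-life-media-1337/all5.jpeg"
EXPIRES=180   # validity in seconds: 3 minutes

# The command you would run (it needs AWS credentials, which
# Cloud Shell supplies from your logged-in identity):
PRESIGN_CMD="aws s3 presign ${S3_URI} --expires-in ${EXPIRES}"
echo "${PRESIGN_CMD}"
```

The command prints a long https URL containing the signature and expiry; anyone holding that URL can fetch the object until the timer runs out.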
So go ahead and copy this into your clipboard, and it is a really long URL, so make sure that you get everything from the https at the start all the way to the end of this URL.
Copy that into your clipboard, and then I want you to open a new tab and load that URL, and there you go, you can see this object loads up using this pre-signed URL.
Now this URL is valid only for 180 seconds.
To demonstrate that I'm going to skip ahead 180 seconds and demonstrate exactly what happens when this URL expires.
After 180 seconds, when I next refresh, we see this access denied page with the message request has expired.
So you can see how pre-signed URLs are a really effective way of granting access to objects within an S3 bucket for a limited amount of time.
Now there are a number of really interesting aspects to pre-signed URLs that you really need to understand as an architect, a developer or an engineer and I want to go through these really interesting aspects of pre-signed URLs before we finish up with this demo lesson.
Now first just to make everything easier to see I'm going to close down any of these tabs that we got open to this all five object.
I'm going to go back to the cloud shell and I'm going to generate a new pre-signed URL but this time I'm going to use a much larger expires in time.
So I'm going to press the up arrow to return to the previous command.
I'm going to delete this 180 second expiry time and instead I'm going to use 604800, which is a pretty high number, but this is something that we can pick so that we won't have any unintentional expiry of the URL as part of this demo lesson.
So pick something crazily large just to make sure that it doesn't expire until we're ready.
So we're generating another URL and so this is an additional unique pre-signed URL.
So I'm going to select all of this URL and you need to do the same go ahead and copy that into your clipboard and open it in a new tab.
Now we can see that that pre-signed URL has opened.
Keep in mind that we've generated this using the identity that we're currently logged into AWS using.
So next what I'm going to do is move back to the AWS console and I'm going to click on the services drop down and move across to IAM.
So I'm going to open IAM up in a new tab.
I'm going to select the users option, select the IAM admin user, I'm going to add an inline policy to this user, select JSON.
Now attached to this lesson is another link for a file called deny s3.json.
Go ahead and click that link and then you'll need to get the contents of the file.
So copy all the contents of the file into your clipboard and then select all of this JSON and paste in the contents of that file and this is an explicit deny policy which denies our user so IAM admin any access to s3.
So this essentially prevents our IAM admin user from having any level of access to S3.
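The attached deny s3.json file isn't reproduced in this transcript, but an explicit deny policy of this kind typically looks something like the following sketch (the Sid is an arbitrary label):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyS3",
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
```

Because this uses Effect Deny with a wildcard over every S3 action and resource, it overrules the administrative allow the user already has.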
Go ahead and click on review policy; for the name, call it deny s3, and click on create policy, and this will attach it as an inline policy to our IAM admin user.
Now I'm going to clear the screen within this cloud shell to make it easier to see and then I'm going to run an AWS space s3 space ls and press enter.
Now we get an access denied error when accessing the S3 service, because we just added an explicit deny onto the IAM admin user. Remember the rule that applies to permissions: deny, allow, deny. An explicit deny always overrules everything else, and so even though the IAM admin user has administrative permissions, by applying this deny s3 policy, which contains an S3 explicit deny, the deny wins, so currently we have no access to S3.
Now let's return to the tab where we have this pre-signed url open.
So remember this pre-signed url was generated at the time when we had access to s3 so now let's refresh this page and now we get an access denied message and this is one of those interesting aspects of pre-signed urls.
When you generate a pre-signed url using an identity such as an IAM user that url has your access permissions so you generate a pre-signed url and anyone using that pre-signed url for the duration that it's active will be interacting with that one object on s3 as though they were using your identity.
Now if you adjust the permissions on your identity as we've just done by denying access to s3 it means that that pre-signed url will also have that deny s3 permissions and so this pre-signed url now no longer has any access to anything in s3 including the object that it was configured to provide access to so now let's go back to our cloud shell and what we're going to do now remember we are still denied access to s3.
Let's press the up arrow and move back to the command which we used to generate this pre-signed URL.
Now let's regenerate another pre-signed url.
Now note how even though we are denied access to any part of s3 we can generate a pre-signed url which points at this one specific object all five dot jpeg so we're not prevented from generating a pre-signed url for something that we have no access to.
Now if we copy this into our clipboard move to the tab that we already have open and just replace this url with the pre-signed url that you just generated.
Note that we're still denied. We're still denied because although we could generate this pre-signed URL, we've generated it using an identity which has no access to S3.
So for the next interesting fact about pre-signed URLs that I want to demonstrate, go back to the IAM console and remove this deny s3 policy from our IAM admin user, so that our IAM admin user once again has access to S3.
If we go back to the Cloud Shell we can run aws s3 ls and press enter, and now we can access S3 again from this Cloud Shell; remember the Cloud Shell is using permissions based on our IAM admin user.
Now let's go back to the pre-signed URL, and if we refresh this, we can access this object again.
I'm doing this to illustrate that when you generate a pre-signed URL, that pre-signed URL is linked to the identity that generated it.
Whatever the permissions of that identity are when the pre-signed URL is used, those are the permissions that the pre-signed URL has.
So for its duration it always has the same permissions that the identity which generated it has at that very moment, and that's something that you need to keep in mind.
Now, more interestingly, you can actually generate a pre-signed URL for a non-existent object; there's nothing preventing that from occurring.
So if we go back to the Cloud Shell and press the up arrow a number of times to bring up this pre-signed URL command, this time we'll try to generate a pre-signed URL for an object which doesn't exist.
So if I change this all5 to all1337 and press enter, it will generate a pre-signed URL.
That pre-signed URL will be valid to access an object called all1337.jpeg inside this bucket, but because there's no such object in that bucket, if I try to use it I won't be able to do so.
So if I open that new invalid pre-signed URL, I'll get the message that the specified key does not exist, but we can generate pre-signed URLs for non-existent objects.
Now one more interesting thing about pre-signed URLs, which I'm not going to demonstrate, is what happens if you generate a pre-signed URL using temporary credentials that you get by assuming a role.
For example, if we logged into an EC2 instance which had an instance role, and then we generated a pre-signed URL, even if we set a huge expiry time such as 604800 seconds, that pre-signed URL would stop working when the temporary credentials for that role stopped working.
Now it is possible to generate a pre-signed URL from the console UI; this is a relatively recent change.
From the object, if you click on the object actions drop down, you can click share with a pre-signed URL.
You have to set the same settings, so what you want the expiry to be, in this particular case let's say 60 minutes, and then I can go ahead and click on create pre-signed URL.
That's automatically copied into my clipboard, and I can go ahead and move to a different tab, paste that in, and we can open the console UI generated pre-signed URL.
So that's just an alternative way of doing the same process that we just used the Cloud Shell for.
Now that's everything that I wanted to demonstrate in this demo lesson about pre-signed URLs, and don't worry, we're going to be talking about these more later in the course, as well as looking at some alternatives.
What we need to do to tidy up this lesson is go back to the AWS console, move to the S3 console, and then just go ahead and empty and delete the bucket that you created.
So select the animals for life media bucket, click on empty, type or paste in permanently delete, and then confirm.
Once that's successfully emptied, click on exit; the bucket should still be selected, so then go ahead and click on delete, confirm that with the name of the bucket, and then delete the bucket.
At this point we've cleaned up the account, and the resources are back in the same state as they were at the start of the lesson.
So I hope you've enjoyed this brief demo lesson; go ahead and complete the video, and when you're ready I look forward to you joining me in the next video.
Welcome back and in this lesson I want to talk to you about S3 pre-signed URLs.
Pre-signed URLs are a way that you can give another person or application access to an object inside an S3 bucket using your credentials in a safe and secure way.
Let's have a look at how this works architecturally.
To illustrate how pre-signed URLs work, let's use this example architecture, an S3 bucket which doesn't have any public access configured.
So it's still in a default private configuration.
This means that in order to access the bucket or any resources in it an IAM user such as IAM admin would have to authenticate to AWS and be authorized to access the resource.
IAM admin would send credentials along with the access request, AWS would validate them at the time that the request is made and only then grant access to the object in S3.
Now one issue that we have is that because the bucket is private only authenticated users are able to access it.
Our masked man here has no way of providing authentication information to AWS because he doesn't have any and so any request that's unauthenticated would fail.
Now if giving the mystery user access to S3 is an essential requirement for the business then there are three common solutions at this point and none of them are ideal.
Number one is to give the mystery user an AWS identity.
Number two is to give the mystery user some AWS credentials to use or number three is to make the bucket or the object public.
Now none of these are ideal.
If the user only needs short term access to the bucket and objects the effort of supplying an identity seems excessive.
Just giving some credentials to the user appears on the surface to be a security risk and definitely if it's somebody else's credentials that's just bad practice.
Making the bucket public for anything which has sensitive data in it also appears to be less than ideal.
So one solution that AWS offers is to use pre-signed URLs and let's look at how this would work with an architecture example.
IAM admin is an AWS identity with some access rights granted via a permissions policy.
So IAM admin can make the request to S3 to generate a pre-signed URL.
She would need to provide her security credentials, specify a bucket name, an object key and an expiry date and time as well as indicate how the object would be accessed.
And S3 will create a pre-signed URL and return it.
This URL will have encoded inside it the details that IAM admin provided.
So which bucket is for which object is for it will be encoded with the fact that the IAM admin user generated it and it will be configured to expire at a certain date and time as requested by the IAM admin user.
The URL could then be passed to our mystery user and he or she could use it to access a specific object in the specific S3 bucket up until the point at which it expires.
When the pre-signed URL is used the holder of that URL is actually interacting with S3 as the person who generated it.
So for that specific object in that specific bucket until the timer expires our masked man is actually IAM admin.
Pre-signed URLs can be used for both downloads from S3 so get operations or uploads to S3 known as put operations.
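To make the "encoded inside it" idea concrete, here's a short sketch that pulls those details back out of a pre-signed URL. The URL below is made up for illustration (the bucket, key, access key and signature are not real), but the query parameter names (X-Amz-Credential, X-Amz-Date, X-Amz-Expires, X-Amz-Signature) are the ones SigV4 pre-signed URLs actually carry:

```python
from urllib.parse import urlparse, parse_qs

# Illustrative SigV4 pre-signed URL -- bucket, key, access key and
# signature are all made up; real ones are generated for you by S3.
presigned_url = (
    "https://examplebucket.s3.amazonaws.com/wildlife/video.mp4"
    "?X-Amz-Algorithm=AWS4-HMAC-SHA256"
    "&X-Amz-Credential=AKIAEXAMPLEKEY%2F20240101%2Fus-east-1%2Fs3%2Faws4_request"
    "&X-Amz-Date=20240101T120000Z"
    "&X-Amz-Expires=7200"
    "&X-Amz-SignedHeaders=host"
    "&X-Amz-Signature=deadbeef"
)

parsed = urlparse(presigned_url)
params = parse_qs(parsed.query)

# The object being shared is in the URL path...
object_key = parsed.path.lstrip("/")
# ...the generating identity's access key ID sits inside X-Amz-Credential...
access_key = params["X-Amz-Credential"][0].split("/")[0]
# ...and the validity window, in seconds from X-Amz-Date, is X-Amz-Expires.
expires_seconds = int(params["X-Amz-Expires"][0])
```

So everything the holder of the URL needs, including who it acts as and when it stops working, travels inside the URL itself; no separate credentials are handed over.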
So this type of architecture might be useful for the animals for life remote workers if they don't have access to AWS accounts, or aren't accessing from a secure location, and just need to upload one specific object.
Now there's another type of architecture which commonly uses pre-signed URLs and I've mentioned this earlier in the course when I was talking about some benefits of S3.
I want you to think about a traditional application architecture.
On the left we have an application user with a laptop.
We've got an application server in the public cloud in the middle which hosts the application, and let's say this is a video processing application for wildlife videos that the animals for life organization manages.
We've learned that one of the real strengths of S3 is its ability to host the large media files and so the large wildlife video files have been migrated from the application server to a media S3 bucket.
But by doing this we've introduced a problem.
Previously the videos were hosted on the application server and it could control access to the video files.
If we host them on an S3 bucket then either every user needs an AWS identity so an IAM user to access the videos or we need to make the videos public so that the user running the web application can download them into her browser and neither of those are ideal.
Luckily pre-signed URLs offer a solution.
With pre-signed URLs we can keep the bucket private.
Then we can create an IAM user in our AWS account for this application.
Remember when I talked about choosing between an IAM user or a role I said that if you could visualize how many of a certain thing that would be using an identity then it's likely to suit an IAM user.
Well in this case we have one application and application service accounts are a common scenario where IAM users are used.
So for this example we create an IAM user for the application.
So when our application user interacts with a web application she makes a request and that's step one.
The request might be for a page of information on a bushfire which is currently happening in Australia which has a video file associated with it.
The application that's running on the server knows that it can directly return the information that the request is asking for but that the video is hosted on the private S3 bucket.
So it initiates a request to S3 asking for it to generate a pre-signed URL for that particular video object using the permissions of the IAM user that the application is using.
So IAM app one.
The S3 service then creates a URL which has encoded within it the authentication information for the IAM app one user.
The URL grants access to one object in the bucket on a short, time-limited basis, maybe two to three hours.
And the S3 service then returns that to the application server and through to the end user.
The web application that's running on the user's laptop then uses this pre-signed URL to securely access the particular object that's stored on the media bucket.
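Because the validity window travels inside the URL, the expiry can be worked out from the X-Amz-Date (signing time) and X-Amz-Expires (seconds) values alone. A minimal sketch, assuming those two values have already been extracted from a URL:

```python
from datetime import datetime, timedelta, timezone

def presigned_url_expiry(amz_date: str, expires_seconds: int) -> datetime:
    """Compute when a pre-signed URL stops working, given its
    X-Amz-Date (signing time, UTC) and X-Amz-Expires (validity window)."""
    signed_at = datetime.strptime(amz_date, "%Y%m%dT%H%M%SZ")
    signed_at = signed_at.replace(tzinfo=timezone.utc)
    return signed_at + timedelta(seconds=expires_seconds)

# A URL signed at noon UTC on 1 Jan 2024, valid for two hours (7200 s),
# expires at 14:00 UTC the same day.
expiry = presigned_url_expiry("20240101T120000Z", 7200)
still_valid = datetime.now(timezone.utc) < expiry
```

Note this is only the URL's own clock; as covered shortly, the credentials behind the URL can invalidate it earlier.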
Pre-signed URLs are often used when you offload media into S3, or as part of serverless architectures where access to a private S3 bucket needs to be controlled and you don't want to run thick application servers to broker that access.
And we'll look at serverless architectures later in the course.
For now I just want to be sure that you understand what pre-signed URLs do and in the next lesson which is a demo lesson you're going to get the chance to experiment yourself and generate a pre-signed URL.
Pre-signed URLs can be used to access an object in a private S3 bucket with the access rights of the identity which generates them.
They're time limited and they encode all of the authentication information needed inside.
And they can be used to upload objects and download objects from S3.
Now I want to show you this in a demo because it's far easier to actually do it.
But before we do that I want to step through some exam power-ups.
There are a couple of really interesting facts about generating pre-signed URLs which might help you out in some exam questions.
Now first and this is a fairly odd behavior but you can create a pre-signed URL for an object that you have no access to.
The only requirement for generating a pre-signed URL is that you specify a particular object and an expiry date and time.
If you don't have access to that object, you can still generate a pre-signed URL, which, because it's linked to you, will also have no access to that object.
So there aren't many use cases where this is applicable but you do need to be aware that it is possible to generate a pre-signed URL when you don't have any access.
Now when you use the URL, so when you utilize it to attempt to access an object, your permissions match those of the identity that generated it.
And it's important to understand that it matches the permissions that the identity has right now.
So at the point when you use the URL, the URL has the same permissions as the identity that generated it has right now.
So if you get an access denied error when you attempt to use a pre-signed URL to access an object, it could be that the identity that generated that URL never had access or it could be that it simply doesn't have access right now.
And they're two very important nuances to understand about pre-signed URLs.
So when you're using a URL to access an object, it matches the current permissions of the identity that generated that URL.
Now that's fairly okay for an IAM user.
As I demonstrated in the previous example of the application, generally you would create an IAM user for the application, and this IAM user would have fairly static permissions.
So the permissions generally wouldn't change between when you created the URL and when your customer is using that URL within an application.
But don't generate pre-signed URLs based on an IAM role.
You can in theory assume an IAM role, which remember gives you temporary credentials, and then use those temporary credentials to generate a pre-signed URL.
A pre-signed URL can have a much longer validity period than those temporary credentials.
So those temporary credentials will generally expire well before a pre-signed URL does.
So if you generate a pre-signed URL using a role and those temporary credentials expire, the credentials encoded in the URL are no longer valid, and so that URL will stop working.
And so it's almost never a good idea to generate a pre-signed URL using an IAM role.
You should always use long-term identities.
So generally an IAM user.
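That caveat can be summed up as: a pre-signed URL's effective lifetime is whichever comes first, its own expiry or the expiry of the credentials that signed it. A small illustrative sketch (the dates and function name are made up for the example):

```python
from datetime import datetime, timezone
from typing import Optional

def effective_url_expiry(url_expiry: datetime,
                         credential_expiry: Optional[datetime]) -> datetime:
    """A pre-signed URL stops working at its own expiry, OR when the
    credentials that signed it expire -- whichever comes first.
    credential_expiry is None for long-term IAM user credentials."""
    if credential_expiry is None:
        return url_expiry  # IAM user: only the URL's own expiry applies
    return min(url_expiry, credential_expiry)

url_exp = datetime(2024, 1, 1, 18, 0, tzinfo=timezone.utc)   # URL valid 6 hours
role_exp = datetime(2024, 1, 1, 13, 0, tzinfo=timezone.utc)  # role session: 1 hour

# Signed with an assumed role: the URL dies with the temporary credentials.
with_role = effective_url_expiry(url_exp, role_exp)
# Signed with an IAM user: the full URL lifetime applies.
with_user = effective_url_expiry(url_exp, None)
```

This is why a role session that lasts an hour silently cuts a six-hour URL down to one hour.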
With that being said though, that is everything that I wanted to cover in this theory lesson.
So go ahead, complete the video, and when you're ready, I'll look forward to you joining me in the next.
-
-
-
Welcome back and in this demo lesson you're going to configure S3 cross-region replication.
So that's a replication of objects from one bucket to another bucket in different AWS regions.
And the scenario that we're going to be stepping through is where you as a systems engineer are looking to configure replication to allow disaster recovery of an S3 static website from one region to another AWS region.
So the first thing that I'll need you to do is to make sure that you're logged in to the general AWS account and you'll need to have the Northern Virginia region selected.
Assuming that's the case then go ahead and move to the S3 console because we're going to be creating our source and destination buckets.
Now if you see any notifications about updates to the UI or any additional functionality then just go ahead and close that down.
What we want you to do is to go ahead and click on create bucket.
Now we want to keep these names simple so that we can distinguish between source and destination bucket.
So for the source bucket we're going to start with source bucket and then we want you to use your initials in my case AC and then we want you to put a random number at the end and I'm going to attempt to use 1337.
For the region of the source bucket, use us-east-1, then scroll down past the block public access settings, past bucket versioning and default encryption, and just create the bucket.
In my case it successfully created a bucket using this name and again because of the globally unique naming requirements of S3 buckets you need to make sure that you pick something different than me and different from other students.
Now the scenario is that we're using this bucket to host a static S3 website so we need to enable that functionality.
So go into source bucket, click on properties, scroll down all the way to the bottom and click on edit next to static website hosting and we need to enable static website hosting and for hosting type just make sure that you select host a static website.
Now we're going to use index.html for both the index document and for the error document.
So enter index.html in both of those boxes and once you've done that you can save those changes.
Now so that this bucket can host a static website we need to go to permissions and then we need to edit the block public access settings so click on edit and uncheck block all public access and once you've done so click on save changes.
You need to confirm that so follow the instructions and then confirm and this means that the bucket can be made public but in order to make it public we need to add a bucket policy.
So to edit the bucket policy scroll down below the block public access settings and then edit the bucket policy.
Now attached to this lesson is a demo files link.
I'll need you to click that link which will download a zip file.
Go ahead and extract that zip file which will create a folder and go inside that folder.
Contained in this folder are all the files which you'll need for this demo, and among them is a file called bucket_policy.json.
Go ahead and open that file, so this is bucket_policy.json.
This is a bucket policy which allows any principal, using the star wildcard, to use the S3 GetObject action on this particular ARN.
Now this is a placeholder; we need to update it so that it references any objects within our source bucket.
So copy this entire bucket policy into your clipboard, move back to the console, and paste it in.
Once you've pasted it in, copy the bucket ARN for this bucket into your clipboard by clicking on the copy icon.
Then select the placeholder that you've just pasted in, so from straight after the first speech mark up to just before the forward slash, and paste in the source bucket ARN, so it should look like this.
It references arn, colon, aws, colon, s3, and then colon, colon, colon, and then the source bucket, so whatever you've called your source bucket it should reference that; don't use what's on my screen, use your specific source bucket ARN, and then you should have a forward slash star on the end.
That means that any anonymous or unauthenticated identity will be able to get any objects within this S3 bucket and that's what we want, we want this to be a public bucket.
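Put together, the completed policy should have this shape; the bucket name below is the one I used, so substitute your own in the Resource ARN:

```python
import json

# Completed public-read bucket policy, assuming a source bucket named
# "sourcebucket-ac-1337" -- replace with your own bucket name.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",              # any principal, authenticated or not
            "Action": "s3:GetObject",      # read objects only
            "Resource": "arn:aws:s3:::sourcebucket-ac-1337/*",  # all objects
        }
    ],
}

policy_json = json.dumps(bucket_policy, indent=2)
```

The trailing `/*` on the Resource is what scopes the statement to the objects in the bucket rather than the bucket itself.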
So click on save changes to commit those changes and now this bucket is public, you get the warning under permissions overview that this bucket is public and under the bucket name at the top it will say publicly accessible.
Now once you've done that go back to the S3 console and we're going to create the destination bucket, following exactly the same process: click on create bucket, this time call it destination bucket and then again your initials and then ideally the same random number, so in my case 1337.
Go ahead and click on the region drop-down, and instead of picking us-east-1, this time we're going to use us-west-1.
Now, to save us some time, we're going to uncheck block all public access while we're creating the bucket so that we don't need to do it afterwards, and we'll need to acknowledge that by checking the box; scroll all the way down to the bottom and then click on create bucket.
Then go into the destination bucket, select properties, move down to the bottom, and enable static website hosting: click edit, enable, choose the option to host a static website, and then, just like before, enter index.html for both the index document and the error document, and save those changes.
Then go to permissions and edit the bucket policy: scroll down, click on edit for bucket policy, copy the template bucket policy into your clipboard, and paste it into the policy box.
Again we need to replace the placeholder, so copy the destination bucket ARN into your clipboard, select from after the speech mark through to before the forward slash, paste it in so that it references any objects in the destination bucket, and click on save changes.
So now at this point we have the source and destination buckets; both are public, both are set to be static websites, and neither has any objects inside.
The next step is to enable cross-region replication from the source bucket through to the destination bucket.
To do that, click on the source bucket, click on the management tab, scroll down, and we need to configure a replication rule, so click on create replication rule.
We're told that replication requires versioning to be enabled for the source bucket, and we're given a convenient button to enable versioning on this bucket, so that's what we're going to do.
Click on enable bucket versioning, and this means this bucket can now be the source of this replication rule.
For the replication rule name, I'm going to call it static-website-dr, for disaster recovery.
We want it to be enabled straight away, so make sure the status is set to enabled.
Now we can limit this replication rule to a certain scope; we can filter it based on prefix or tags, and this allows us to replicate only part of the bucket.
In our case we want to replicate the entire bucket, so select the option stating that this rule applies to all objects in the bucket.
Now that we've configured the source (the bucket name, the source region, and the scope of this replication rule) we need to decide on the destination.
For the destination, we can use a bucket within the same AWS account, or we can specify a bucket within another AWS account.
If we choose another AWS account, then we need to worry about object ownership; we need to make sure that the objects which are replicated into the destination account have the correct ownership, because by default they will be owned by our account, since we will be creating them.
We can specify as part of the replication rule that the destination account owns those objects, but because we're using the same account, that doesn't apply in this scenario.
So we're going to choose a bucket within this account, click on browse S3, select the destination bucket that we created in the previous step, and then click on choose path.
You're going to be informed that, as well as requiring versioning on the source bucket, replication also requires it on the destination bucket, and again you're presented with a convenient button, so click on enable bucket versioning.
Just to confirm: versioning is required on both the source and destination buckets when you're using S3 replication.
So that's the destination configured; we've picked the destination bucket and enabled versioning, so scroll down.
Next we need to give this replication rule the permissions it needs to interact with AWS resources; it needs the ability to read from the source bucket and write those objects into the destination bucket, so we need to give it an IAM role which provides those permissions.
Click on the choose IAM role drop-down and select create new role.
You could select an existing role if you already had one preconfigured, but for this demo lesson we don't, so we're going to create a new one.
Now, you can enable the ability of S3 replication to support objects which are encrypted with AWS KMS, but you have to select that option explicitly when creating a replication rule; in our case we're not going to be replicating any objects encrypted using SSE-KMS, so we can leave this unchecked.
I've talked about this in the theory lesson, but you can also change the storage class as part of the replication process.
Generally you're replicating objects from a primary location or source bucket through to a destination bucket, often as part of disaster recovery scenarios where you want cheaper storage at the destination end; the default is to maintain the storage class used in the source bucket, but you can override that and change the storage class used when storing the replicated objects in the destination bucket.
We're not going to set that in this demonstration, but you do have the option to do so.
Now, these are all relatively new features; S3 replication has been available for some time, but AWS has evolved and enhanced the feature set over time.
You're able to set replication time control, or RTC, which imposes an SLA on the replication process: by enabling it you ensure that 99.99% of new objects are replicated within 15 minutes, and this feature also provides access to replication metrics and notifications, but do note that it comes at an additional cost.
Because S3 versioning is used on both the source and destination buckets, if you delete an object in the source bucket then by default that deletion is not replicated through to the destination; by default S3 replication does not replicate delete markers, and I've talked about delete markers elsewhere in the course.
By selecting this option you do replicate delete markers, which means deletions are replicated from the source through to the destination.
For now, though, we're not going to select any of these, so just go ahead and click on save.
After a few moments you'll be presented with an option about existing objects: historically, S3 replication didn't replicate any objects which existed in a bucket before you enabled replication, but you're now offered the ability to replicate existing objects as part of starting this process.
Our bucket is empty, so go ahead and select no, do not replicate existing objects, and click submit.
At this point replication is enabled between our source bucket and our destination bucket, so let's give it a try.
Click at the top next to the source bucket name, and this will take us to the top level of the source bucket.
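As an aside, the rule the console just built for us corresponds roughly to the replication configuration document that the S3 API accepts; the account ID, role name, and bucket names below are made up for illustration:

```python
# Rough shape of an S3 replication configuration document; the account ID,
# role name, and bucket names here are all hypothetical.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/s3crr_role_for_sourcebucket",
    "Rules": [
        {
            "ID": "static-website-dr",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter: the rule applies to all objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": "arn:aws:s3:::destinationbucket-ac-1337",
                # Optional overrides we left at their defaults in this demo:
                # "StorageClass": "STANDARD_IA",
                # "ReplicationTime": {"Status": "Enabled",
                #                     "Time": {"Minutes": 15}},
            },
        }
    ],
}
```

The Role, the empty Filter, the disabled delete marker replication, and the Destination bucket ARN map directly onto the choices we made in the console.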
We're going to upload some objects, so go ahead and click on upload and then click on add files.
At this point, locate the folder which you downloaded and extracted earlier in this demo lesson, so the demo files folder, and go inside it.
Inside, as well as the bucket policy template which you used earlier, you'll see two folders: website and website2.
Expand the website folder and select both the aotm.jpg object and the index.html object; make sure you use the ones from the website folder, not the website2 folder, and then click on open.
Once they're selected for upload, scroll down to the bottom and click upload, and upload both of those objects to our source bucket; once that's complete, close down that dialog.
Then click on properties, move down to the bottom, copy the bucket website endpoint into your clipboard, and open that in a new tab.
So this is the source bucket website endpoint, and it opens the Animals for Life Animal of the Month page for March, and this is a picture of my cat Winky.
Let's go back to the S3 console, move to the top, go back to the main S3 console, and then move into the destination bucket.
Now, don't be concerned if you see something slightly different at this point; the time taken to replicate objects from source to destination can vary wildly if you don't enable the replication SLA option.
In our case, because these objects are relatively small, they should be replicated fairly quickly.
At this point, go ahead and pause the video and wait until you see two objects in your destination bucket; don't be alarmed if these objects take five or ten minutes to appear, just wait for them and then you're good to continue.
You'll see that both of the objects which we added to the source bucket are now stored in the destination bucket, and that's because they've been replicated using S3 replication.
What we can do now is go to properties, move all the way down to the bottom, copy the destination bucket website endpoint into our clipboard, and open that in a new tab.
So this is the destination static website endpoint, and again we see the same website, the Animal of the Month page for March, again my cat Winky.
Next, move back to the main S3 console, so click on Amazon S3, and then move back to the source bucket.
Click on upload again, because we're going to replace the objects that we just added: click on add files, expand the website2 folder, and upload both of these objects, which have the same names, aotm.jpg and index.html.
Make sure you're uploading to the source bucket; you'll need to confirm that, and it will take a few moments to complete.
Once you see that the upload was successful, close down the dialog.
You should still have the tab open to the source bucket website; if you go to that tab and refresh, you'll see that we now have the Animal of the Month page for April, and this is my cat Truffles.
If we go immediately to the destination bucket endpoint and hit refresh, you might still see the picture of Winky; if the image doesn't change as you keep refreshing, just give it a few minutes, because without the replication SLA the time taken to replicate from source to destination can vary significantly.
In my case it took about three or four minutes of refreshes, but now we see that the destination bucket has been updated with the new image; again, this is a picture of my cat Truffles.
So this has been a very simple example of cross-region replication, and at this point we've done everything that I wanted to do in this demo lesson, so all we need to do is clean up the account and return it to the same state as it was at the start.
Close down both of these tabs and return to the main S3 console.
Select the destination bucket and click empty; confirm this by following the instructions, then click exit.
With the bucket still selected, delete the bucket; again, follow the instructions to confirm deletion.
Then follow the same process for the source bucket: select it, empty it and confirm, close down the dialog, then delete the bucket and confirm that as well.
Next, click on services and move to the IAM console, because remember, S3 replication created an IAM role that was used to perform that replication.
Click on roles and locate the role which starts with s3crr; it will have the name of the source bucket as part of the role name, so make sure that you select the one with the name of your source bucket, then delete that role and confirm the process.
At this point the account is back in the same state as it was at the start of this demo lesson.
I hope you've enjoyed this demo and this simple example of how to use S3 replication, specifically cross-region replication from a source to a destination bucket.
I'll be talking about S3 replication elsewhere in the course, in other associate courses, and at the professional level, so don't worry, you'll get plenty of opportunities to explore this specific piece of functionality available as part of S3.
At this point that's everything I wanted to do in this demo lesson, so go ahead, complete the video, and when you're ready, I'll look forward to you joining me in the next.
-
-
-
Welcome back.
In this lesson, I want to talk about S3 replication, the feature which allows you to configure the replication of objects between a source and destination S3 bucket.
Now there are two types of replication supported by S3.
The first type, which has been available for some time, is cross-region replication or CRR, and that allows the replication of objects from source buckets to one or more destination buckets in different AWS regions.
The second type of replication announced more recently is same-region replication or SRR, which as the name suggests is the same process, but where both the source and destination buckets are in the same AWS region.
Now the architecture for both types of replication is pretty simple to understand once you've seen it visually.
It only differs depending on whether the buckets are in the same AWS accounts or different AWS accounts.
Both types of replication support both, so the buckets could be in the same or different AWS accounts.
In both cases, replication configuration is applied to the source bucket.
The replication configuration configures S3 to replicate from the source bucket to a destination bucket, and it specifies a few important things.
The first is logically the destination bucket to use as part of that replication, and another thing that's configured in the replication configuration is an IAM role to use for the replication process.
The role is configured to allow the S3 service to assume it, so that's defined in its trust policy.
The role's permissions policy gives it the permission to read objects on the source bucket and permissions to replicate those objects to the destination bucket.
And this is how replication is configured between source and destination buckets, and of course that replication is encrypted.
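To make the two halves of that role concrete, here is the rough shape of the trust policy and the permissions policy it would carry. The bucket names are placeholders, and the exact action list can vary; the actions shown are the commonly required ones for S3 replication:

```python
# Hypothetical trust policy: allows the S3 service to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "s3.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Hypothetical permissions policy: read from source, replicate to destination.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # read the replication config and list the source bucket
            "Effect": "Allow",
            "Action": ["s3:GetReplicationConfiguration", "s3:ListBucket"],
            "Resource": "arn:aws:s3:::source-bucket",
        },
        {   # read object versions from the source bucket
            "Effect": "Allow",
            "Action": ["s3:GetObjectVersionForReplication",
                       "s3:GetObjectVersionAcl"],
            "Resource": "arn:aws:s3:::source-bucket/*",
        },
        {   # write replicated objects (and deletions) into the destination
            "Effect": "Allow",
            "Action": ["s3:ReplicateObject", "s3:ReplicateDelete"],
            "Resource": "arn:aws:s3:::destination-bucket/*",
        },
    ],
}
```

The trust policy answers "who can use this role" (the S3 service), while the permissions policy answers "what the role can do" (read source, write destination).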
Now the configuration does define a few other items, but I'll talk about them on the next screen; for now, let's just focus on this basic architecture.
There is one crucial difference between replication which occurs in the same AWS accounts versus different AWS accounts.
Inside one account, both S3 buckets are owned by the same AWS account, so they both trust that same AWS account that they're in.
That means that they both trust IAM as a service, which means that they both trust the IAM role.
For the same account, that means that the IAM role automatically has access to the source and the destination buckets as long as the role's permission policy grants the access.
If you're configuring replication between different AWS accounts, though, that's not the case.
The destination bucket, because it's in a different AWS account, doesn't trust the source account or the role that's used to replicate the bucket contents.
So in different accounts, remember that the role that's configured to perform the replication isn't by default trusted by the destination account because it's a separate AWS account.
So if you're configuring this replication between different accounts, there's also a requirement to add a bucket policy on the destination bucket, which allows the role in the source account to replicate objects into it.
So you're using a bucket policy, which is a resource policy, to define that the role in a separate account can write or replicate objects into that bucket.
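For the cross-account case, that destination bucket policy might look something like the following; the account ID, role name, and bucket name are all placeholders:

```python
# Hypothetical destination bucket policy for cross-account replication: it
# explicitly allows the source account's replication role to write in.
destination_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # the replication role in the SOURCE account (made-up ARN)
            "Principal": {
                "AWS": "arn:aws:iam::111111111111:role/s3-replication-role"
            },
            "Action": ["s3:ReplicateObject", "s3:ReplicateDelete"],
            "Resource": "arn:aws:s3:::destination-bucket/*",
        }
    ],
}
```

This is the piece that establishes the missing trust: the destination account explicitly names the foreign role as an allowed principal.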
Once this configuration is applied, so either the top configuration if it's the same account, or the bottom if it's different accounts, then S3 can perform the replication.
Now let's quickly review some of the options available for replication configuration, as these might actually come in handy for you to know.
The first important option is what to replicate.
The default is to replicate an entire source bucket to a destination bucket, so all objects, all prefixes, and all tags.
You can, though, choose a subset of objects, so you can create a rule that has a filter, and the filter can filter objects by prefix or tags or a combination of both, and that can define exactly what objects are replicated from the source to the destination.
You can also select which storage class the objects in the destination bucket will use.
Now the default is to use the same class, but you can pick a cheaper class if this is going to be a secondary copy of data.
Remember when I talked about the storage classes that are available in S3, I talked about infrequent access or one-zone infrequent access classes, which could be used for secondary data.
So with secondary data, you're able to tolerate a lower level of resiliency, so we could use one-zone infrequent access for the destination bucket objects, and we can do that because we've always got this primary copy in the source bucket, so we can achieve better economies by using a lower cost storage class in the destination.
So remember this is the example, the default is to use the same storage class on the destination as is used on the source, but you can override that in the replication configuration.
Now you can also define the ownership of the objects in the destination bucket.
The default is that they will be owned by the same account as the source bucket.
Now this is fine if both buckets are inside the same account.
That will mean that objects in the destination bucket will be owned by the same as the source bucket, which is the same account, so that's all good.
However, if the buckets are in different accounts, then by default the objects inside the destination bucket will be owned by the source bucket account, and that could mean you end up in a situation where the destination account can't read those objects because they're owned by a different AWS account.
So with this option you can override that and you can set it so that anything created in the destination bucket is owned by the destination account.
And lastly there's an extra feature that can be enabled called replication time control or RTC, and this is a feature which adds a guaranteed 15-minute replication SLA onto this process.
Now without this, it's a best efforts process, but RTC adds this SLA, it's a guaranteed level of predictability, and it even adds additional monitoring so you can see which objects are queued for replication.
So this is something that you would tend to use only if you've got a really strict set of requirements from your business to make sure that the destination and source buckets are in sync as closely as possible.
If you don't require this, if this is just performing backups or it's just for a personal project, or if the source and destination buckets aren't required to always be in sync within this 15-minute window, then it's probably not worth adding this feature.
It's something to keep in mind and be aware of for the exam.
If you do see any questions that mention 15 minutes for replication, then you know that you need this replication time control.
Now there are some considerations that you should be aware of, especially for the exam.
These will come up in the exam, so please pay attention and try to remember these points.
The first thing is that by default replication isn't retroactive.
You enable replication on a pair of buckets, a source and a destination, and only from that point onward are objects replicated from source to destination.
So if you enable replication on a bucket which already has objects, those objects will not be replicated.
And related to this, in order to enable replication on a bucket, both the source and destination bucket need to have versioning enabled.
You'll be allowed to enable versioning as part of the process of enabling replication, but it is a requirement to have it on, so a bucket cannot be enabled for replication without versioning.
Now you can use S3 batch replication to replicate existing objects, but this is something that you need to specifically configure.
If you don't, just remember by default replication is not retroactive.
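Just to make this concrete, here's a sketch of what a replication configuration covering these options might look like when expressed in boto3's format. The bucket names, role ARN and account IDs are placeholders, not real values.

```python
# Hypothetical replication configuration illustrating the options discussed:
# storage class override, destination ownership override, and RTC.
# Bucket names, the role ARN and account IDs are placeholders.
replication_config = {
    "Role": "arn:aws:iam::111111111111:role/s3-replication-role",  # placeholder
    "Rules": [
        {
            "ID": "replicate-everything",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter = the whole bucket
            "DeleteMarkerReplication": {"Status": "Disabled"},  # the default behaviour
            "Destination": {
                "Bucket": "arn:aws:s3:::example-destination-bucket",
                "StorageClass": "STANDARD_IA",  # override the same-class default
                "Account": "222222222222",  # destination account (placeholder)
                "AccessControlTranslation": {"Owner": "Destination"},  # destination owns objects
                "ReplicationTime": {  # RTC: the 15-minute SLA
                    "Status": "Enabled",
                    "Time": {"Minutes": 15},
                },
                "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
            },
        }
    ],
}

# With a boto3 client this would be applied (not run here) via:
# s3.put_bucket_replication(Bucket="example-source-bucket",
#                           ReplicationConfiguration=replication_config)
```

Note that enabling RTC also requires replication metrics to be enabled, which is why both appear together here.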
Secondly, it is a one-way replication process only.
Objects are replicated from the source to the destination.
If you add objects manually in the destination, they will not be replicated back to the source.
This is not a bi-directional replication.
It's one-way only.
Now more recently, AWS did add the feature which allows you to add bi-directional replication, but just be aware that this is an additional setting which you need to configure.
By default, replication is one-way, and enabling bi-directional replication is something you need to specifically configure, so keep that in mind.
Now in terms of what does get replicated from source to destination, replication is capable of handling objects which are unencrypted, so objects without any encryption; it's capable of handling objects which are encrypted using SSE-S3; and it's even capable of handling objects which are encrypted using SSE-KMS, but this is an extra piece of configuration that you'll need to enable.
So there's configuration and there's extra permissions which are required, because of course KMS is involved.
Now more recently, AWS did add the ability to replicate objects encrypted with SSE-C, so that's server-side encryption with customer-provided keys, but this is a relatively recent addition.
Historically, SSE-C was incompatible with cross or same-region replication.
Replication also requires that the owner of the source bucket has permissions on the objects which will replicate. In most cases, if you create a bucket in an account and you add objects, then the owner of those objects will be the source account, but if you grant cross-account access to a bucket, say by adding a resource policy allowing other AWS accounts to create objects in the bucket, it's possible that the source bucket account will not own some of those objects, and this style of replication can only replicate objects where the source account owns them.
So keep that in mind. Another limitation is that it will not replicate system events, so if any changes are made in the source bucket by lifecycle management, they will not be replicated to the destination bucket; only user events are replicated. In addition to that, it can't replicate any objects inside a bucket that are using the S3 Glacier or S3 Glacier Deep Archive storage classes.
Now that makes sense, because Glacier and Glacier Deep Archive, while they are shown as being inside an S3 bucket, you need to conceptually think of them as separate storage products, so they cannot be replicated using this process.
And then lastly, it's important to understand that by default, deletes are not replicated between buckets, so the adding of a delete marker, which is how object deletions are handled for a version-enabled bucket, by default, these delete markers are not replicated.
Now you can enable that, but you need to be aware that by default, this isn't enabled.
So one of the important things I need to make sure you're aware of in terms of replication is why you would use replication.
What are some of the scenarios that you'll use replication for?
So for the same region replication specifically, you might use this process for log aggregation, so if you've got multiple different S3 buckets which store logs for different systems, then you could use this to aggregate those logs into a single S3 bucket.
You might want to use the same region replication to configure some sort of synchronization between production and test accounts.
Maybe you want to replicate data from prod to test periodically, or maybe you want to replicate some testing data into your prod account.
This can be configured in either direction, but a very common use case for same region replication is this replication between different AWS accounts, different functions, so prod and test, or different functional teams within your business.
You might want to use same region replication to implement resilience if you have strict sovereignty requirements.
So there are companies in certain sectors which cannot have data leaving a specific AWS region because of sovereignty requirements, so you can have same region replication replicating between different buckets and different accounts, and then you have this account isolation for that data.
So having a separate, isolated account with separate logins, maybe for an audit team or a security team, and replicating the data into that account, provides this account level isolation.
Obviously, if you don't have those sovereignty requirements, then you can use cross region replication and use replication to implement global resilience improvements, so you can have backups of your data copied to different AWS regions to cope with large scale failure.
You can also replicate data into different regions to reduce latency.
So if you have, for example, a web application which loads data, then obviously it might be latency sensitive, so you can replicate data from one AWS region to another so that customers in that remote region can access the bucket that's closest to them, and that reduces latency and generally gives them better performance.
Now that is everything I want to cover in this video, so go ahead and complete the video, and when you're ready, I look forward to you joining me in the next.
-
- Aug 2024
-
learn.cantrill.io
-
Welcome back and in this lesson I want to talk about S3 lifecycle configuration.
Now you can create lifecycle rules on S3 buckets which can automatically transition or expire objects in the bucket.
They're a great way to optimize costs for larger S3 buckets and so they're an important skill to have as a solutions architect, developer or operational engineer.
Let's jump in and explore exactly what features lifecycle configuration adds to S3 buckets.
A lifecycle configuration at its foundation is a set of rules which apply to a bucket.
These rules consist of actions which apply based on criteria.
So do X if Y is true.
These rules can apply to a whole bucket or they can apply to groups of objects in that bucket defined by prefix or tags.
The actions which are applied are one of two types.
Transition actions which change the storage class of whichever object or objects are affected.
So an example of this is that you could transition objects from S3 standard to say S3 infrequent access after 30 days.
And you do this if you're sure that the objects will be more infrequently accessed after that initial 30 day period.
You could then maybe transition the objects from infrequent access to S3 Glacier Deep Archive after 90 days, if you're certain that they will be rarely if ever accessed after that point.
Now the other type of rule are expiration actions which can delete whatever object or object versions are affected.
So you might want to expire objects or versions entirely after a certain time period.
And this can be useful to keep buckets tidy.
Now both of these can work on versions as well if you have a version enabled bucket.
But this can become complex and so it's something that you need to carefully plan before implementing.
Life cycle configurations offer a way to automate the deletion of objects or object versions or change the storage class of objects to optimize costs over time.
And the important thing to understand is that these rules aren't based on access.
So you can't move objects between classes based on how frequently they're accessed.
That's something which a different feature, intelligent tiering, does on your behalf.
Now you can combine these S3 life cycle actions to manage an object's complete life cycle.
For example suppose that objects you create have a well defined life cycle.
Initially the objects are frequently accessed over a period of 30 days.
Then objects are infrequently accessed for up to 90 days.
And after that the objects are no longer required.
In this scenario you can create an S3 life cycle rule in which you specify the initial transition action to S3 intelligent tiering, S3 standard IA or even S3 one zone IA storage.
Then you might have another transition action through to one of the glacier storage classes for archiving and then potentially even an expiration action when you're sure you no longer require the object.
As you move the objects from one storage class to another you save on the storage costs.
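So as an illustrative sketch, this three-stage lifecycle could be written as a lifecycle configuration like the one below. The rule ID and prefix are made-up values, and the day counts are the example figures from this lesson.

```python
# A sketch of the example lifecycle: frequently accessed for 30 days,
# infrequently accessed until day 90, then no longer required after a year.
# The rule ID and prefix are illustrative values.
lifecycle_config = {
    "Rules": [
        {
            "ID": "example-full-lifecycle",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},  # apply to a group of objects by prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},  # expire the objects entirely
        }
    ],
}

# Applied (not run here) with a boto3 client:
# s3.put_bucket_lifecycle_configuration(Bucket="example-bucket",
#                                       LifecycleConfiguration=lifecycle_config)
```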
Now let's look visually at the life cycle process.
So with life cycle configurations, specifically the transitions between different storage classes, you can think of this as a waterfall.
So we start with all of the storage classes.
So we've got S3 standard, standard infrequent access, intelligent tiering, one zone infrequent access, glacier instant retrieval, glacier flexible retrieval and then glacier deep archive.
And it's important to understand that this transition process like a waterfall flows down.
So glacier flexible retrieval can transition into glacier deep archive.
Glacier instant retrieval can transition into glacier flexible and glacier deep archive.
One zone infrequent access can transition into glacier flexible or glacier deep archive.
Importantly not into glacier instant retrieval.
This is one of the exceptions you need to be aware of.
Intelligent tiering can transition into one zone infrequent access, glacier instant, glacier flexible and glacier deep archive.
Then we've got standard infrequent access which can transition into more classes still.
So intelligent tiering, one zone infrequent access, glacier instant, glacier flexible and glacier deep archive.
And then finally we've got S3 standard and this can transition into all of the other classes.
So transitioning flows down like a waterfall but generally you won't go directly.
Most of the life cycle configuration that I've been exposed to operates with multiple different stages as the access patterns of objects change over time.
If you do have any use cases though you can go directly between most of the different classes.
Whichever storage classes you do make use of transition can't happen in an upward direction only down.
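If it helps, you can model this waterfall as an ordered list where transitions only flow downward, with the one zone infrequent access exception bolted on. This is just a conceptual sketch of the rules described above, not an exhaustive encoding of every AWS restriction.

```python
# Waterfall ordering: transitions may only flow to classes further down the
# list. This models the rules described in the lesson, including the one
# zone infrequent access exception.
WATERFALL = [
    "STANDARD",
    "STANDARD_IA",
    "INTELLIGENT_TIERING",
    "ONEZONE_IA",
    "GLACIER_IR",    # glacier instant retrieval
    "GLACIER",       # glacier flexible retrieval
    "DEEP_ARCHIVE",
]

def can_transition(source: str, dest: str) -> bool:
    """True if a lifecycle rule can move an object from source to dest."""
    if WATERFALL.index(dest) <= WATERFALL.index(source):
        return False  # transitions never flow upward (or to the same class)
    if source == "ONEZONE_IA" and dest == "GLACIER_IR":
        return False  # exception: one zone IA can't go to glacier instant retrieval
    return True
```

So for example, standard can reach everything below it, while nothing can transition back up to standard.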
And then finally there are some restrictions or considerations that you need to be aware of.
Firstly, be careful when transitioning smaller objects from standard through to infrequent access, intelligent tiering or one zone infrequent access.
This is because of the minimums on those classes.
For larger collections of smaller objects you can end up with equal or more costs because of the minimums of these different storage classes.
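To see why the minimums bite, here's some illustrative arithmetic. The 128 KB minimum billable object size is the figure for standard infrequent access at the time of writing, but the per-gigabyte prices below are made up purely for illustration.

```python
# Why small objects can cost more in infrequent access classes.
# Standard-IA bills each object as at least 128 KB (its minimum billable
# size). The per-GB prices here are illustrative, not real AWS pricing.
MIN_BILLABLE_KB = 128

def monthly_cost(object_kb: float, count: int, price_per_gb: float,
                 min_billable_kb: float = 0) -> float:
    billable_kb = max(object_kb, min_billable_kb)
    return count * (billable_kb / (1024 * 1024)) * price_per_gb

# 1 million 8 KB objects, illustrative prices: 0.023/GB standard, 0.0125/GB IA
standard = monthly_cost(8, 1_000_000, 0.023)
ia = monthly_cost(8, 1_000_000, 0.0125, min_billable_kb=MIN_BILLABLE_KB)
# Each 8 KB object is billed as 128 KB in IA, so despite the lower per-GB
# rate, IA works out more expensive for this workload.
```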
Additionally this is important to understand for the exam.
There's a 30 day minimum period where an object needs to remain on S3 standard before then moving into infrequent access or one zone infrequent access.
So you can store objects directly in standard IA or one zone IA when you first upload them, and that's okay.
But if you first store that object in the S3 standard and then you want to transition that into standard IA or one zone IA then the object needs to have been in S3 standard for 30 days before you can life cycle transition that into either of those infrequent access tiers.
That's really important to understand.
You can always directly adjust the storage class of an object via the CLI or console UI.
But when you're using life cycle configuration an object needs to be in S3 standard for 30 days before it can be transitioned into standard infrequent access or one zone infrequent access.
And then finally this one is a little bit more obscure still.
If you want to create a single rule which transitions objects from standard through to infrequent access or one zone infrequent access you have to wait an additional 30 days before then transitioning those objects through to any of the glacier classes.
You can't have a single rule which moves something from S3 standard through to the infrequent access classes and then, sooner than 30 days later, moves those same objects through to the glacier classes.
With a single rule the object has to be within standard infrequent access or one zone infrequent access for 30 days before that same rule can then move those objects through to any of the glacier classes.
You can have two rules which do that process without that 30 day gap but a single rule there has to be a 30 day period where the object is in the infrequent access tiers before moving into the glacier tiers.
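Those single-rule timing constraints can be sketched as a small validator. This is a conceptual model of the rules just described, not an official AWS check.

```python
# Sketch of the single-rule timing constraints described above:
# an object must sit in S3 standard for 30 days before an IA transition,
# and within one rule must then sit in IA for 30 more days before any
# glacier transition.
IA_CLASSES = {"STANDARD_IA", "ONEZONE_IA"}
GLACIER_CLASSES = {"GLACIER_IR", "GLACIER", "DEEP_ARCHIVE"}

def single_rule_is_valid(transitions: list) -> bool:
    """transitions: list of (days, storage_class) tuples in one lifecycle rule."""
    for days, cls in transitions:
        if cls in IA_CLASSES and days < 30:
            return False  # 30 days in standard before IA
    ia_days = [d for d, c in transitions if c in IA_CLASSES]
    for days, cls in transitions:
        if cls in GLACIER_CLASSES and ia_days and days < min(ia_days) + 30:
            return False  # 30 more days in IA before glacier, in a single rule
    return True
```

A rule transitioning to IA at day 30 and glacier at day 60 passes, while the same rule with glacier at day 45 doesn't; a rule that goes straight to glacier never hits the extra 30-day wait.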
Now depending on the course that you're watching there might be a demo coming up elsewhere in the course where you'll use life cycle rules.
If this doesn't apply to the course that you're on don't be alarmed if that isn't a demo.
In any case that's all I wanted to cover in this theory lesson so go ahead and complete the video and when you're ready we can move on to the next.
Welcome back, this is part two of this lesson.
We're going to continue immediately from the end of part one.
So let's get started.
Now let's talk about the first of the Glacier Storage Classes, S3 Glacier Instant Retrieval.
If I had to summarize this storage class, it's like S3 standard infrequent access, except it offers cheaper storage, more expensive retrieval costs, and longer minimums.
Standard IA is designed for when you need data instantly, but not very often, say once a month.
Glacier Instant Retrieval extends this, so data where you still want instant retrieval, but where you might only access it say once every quarter.
In line with this, it has a minimum storage duration charge of 90 days versus the 30 days of standard infrequent access.
This class is the next step along the path of access frequency; as the access frequency of objects decreases, you can move them gradually from standard, then to standard infrequent access, and then to Glacier Instant Retrieval.
The important thing to remember about this specific S3 Glacier class is that you still have instant access to your data.
There's no retrieval process required; you can still use it like S3 standard and S3 standard infrequent access.
It's just that it costs you more if you need to access the data, but less if you don't.
Now let's move on to the next type of S3 Glacier Storage Class.
And the next one I want to talk about is S3 Glacier Flexible Retrieval, and this storage class was formerly known as S3 Glacier.
The name was changed when the previously discussed Instant Retrieval class was added to the lineup of storage classes available within S3.
So Glacier Flexible Retrieval has the same three availability zone architecture as S3 standard and S3 standard infrequent access.
It has the same durability characteristics of 11-9s, and at the time of creating this lesson, S3 Glacier Flexible Retrieval has a storage cost which is about one-sixth of the cost of S3 standard.
So it's really cost effective, but there are some serious trade-offs which you have to accept in order to make use of it.
For the exam, it's these trade-offs which you need to be fully aware of.
Conceptually, I want you to think of objects stored with the Glacier Flexible Retrieval class as cold objects.
They aren't warm, they aren't ready for use, and this will form a good knowledge anchor for the exam.
Now because they're cold, they aren't immediately available, they can't be made public.
While you can see these objects within an S3 bucket, what's stored there is now just a pointer to that object.
To get access to them, you need to perform a retrieval process.
That's a specific operation, a job which needs to be run to gain access to the objects.
Now you pay for this retrieval process.
When you retrieve objects from S3 Glacier Flexible Retrieval, they're stored in the S3 standard infrequent access storage class on a temporary basis.
You access them and then they're removed.
You can retrieve them permanently by changing the class back to one of the S3 ones, but this is a different process.
Now retrieval jobs come in three different types.
We have expedited, which generally results in data being available within one to five minutes, and this is the most expensive.
We've got standard where data is usually accessible in three to five hours, and then a low cost bulk option where data is available in between five and 12 hours.
So the faster the job type, the more expensive.
Now this means that S3 Glacier Flexible Retrieval has a first byte latency of minutes or hours, and that's really important to know for the exam.
So while it's really cheap, you have to be able to tolerate two things: one, you can't make the objects public anymore, either in the bucket or using static website hosting; and two, when you do access the objects, it's not an immediate process.
So you can see the object metadata in the bucket, but the data itself is in chilled storage, and you need to retrieve that data in order to access it.
Now S3 Glacier Flexible Retrieval has some other limits: a 40 KB minimum billable object size and a 90 day minimum billable storage duration.
For the exam, Glacier Flexible Retrieval is for situations where you need to store archival data where frequent or real-time access isn't needed.
For example, yearly access, and you're OK with minutes to hours for retrieval operations.
So it's one of the cheapest forms of storage in S3, as long as you can tolerate the characteristics of the storage class, but it's not the cheapest form of storage.
That honor goes to S3 Glacier Deep Archive.
Now S3 Glacier Deep Archive is much cheaper than the storage class we were just discussing.
In exchange for that, there are even more restrictions which you need to be able to tolerate.
Conceptually, where S3 Glacier Flexible Retrieval holds data in a chilled state, Glacier Deep Archive holds data in a frozen state.
Objects have minimums: a 40 KB minimum billable object size and a 180 day minimum billable storage duration.
Like Glacier Flexible Retrieval, objects cannot be made publicly accessible.
Access to the data requires a retrieval job.
Just like Glacier Flexible Retrieval, the jobs temporarily restore objects to S3 standard infrequent access, but those retrieval jobs take longer.
Standard is 12 hours and bulk is up to 48 hours, so this is much longer than Glacier Flexible Retrieval, and that's the compromise that you agree to.
The storage is a lot cheaper in exchange for much longer restore times.
Glacier Deep Archive should be used for data which is archival, which rarely, if ever, needs to be accessed, and where hours or days is tolerable for the retrieval process.
So it's not really suited to primary system backups because of this restore time.
It's more suited for secondary long-term archival backups or data which comes under legal or regulatory requirements in terms of retention length.
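As a quick summary, the retrieval picture for both glacier classes can be put into one small lookup table, using the timings from this lesson.

```python
# Typical retrieval timings for the two glacier classes discussed above,
# as (job_type -> typical time) mappings. Times are the ranges from the
# lesson; Deep Archive has no expedited option.
RETRIEVAL_TIMES = {
    "GLACIER_FLEXIBLE": {
        "expedited": "1-5 minutes",   # most expensive
        "standard": "3-5 hours",
        "bulk": "5-12 hours",         # cheapest
    },
    "DEEP_ARCHIVE": {
        "standard": "12 hours",
        "bulk": "up to 48 hours",
    },
}

def first_byte_latency(storage_class: str, job_type: str) -> str:
    """Look up the typical time-to-data for a retrieval job."""
    return RETRIEVAL_TIMES[storage_class][job_type]
```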
Now this being said, there's one final type of storage class which I want to cover, and that's intelligent tiering.
Now intelligent tiering is different from all the other storage classes which I've talked about.
It's actually a storage class which contains five different storage tiers.
With intelligent tiering, when you move objects into this class, there are a range of ways that an object can be stored.
It can be stored within a frequent access tier or an infrequent access tier, or for objects which are accessed even less frequently, there's an archive instant access, archive access, or deep archive set of tiers.
You can think of the frequent access tier like S3 standard and the infrequent access tier like S3 standard infrequent access, and the archive tiers have the same price and performance as S3 Glacier Instant Retrieval and Flexible Retrieval.
And the deep archive tier has the same price and performance as Glacier Deep Archive.
Now unlike the other S3 storage classes, you don't have to worry about moving objects between tiers.
With intelligent tiering, the system does this for you.
Let's say that we have an object, say a picture of Whiskers, which is initially kind of popular, then not popular, and then goes super viral.
Well if you store this object using the intelligent tiering storage class, it would monitor the usage of the object.
When the object is in regular use, it would stay within the frequent access tier and would have the same costs as S3 standard.
If the object isn't accessed for 30 days, then it would be moved automatically into the infrequent tier where it would stay while being stored at a lower rate.
Now at this stage you could also add configuration, so based on a bucket, prefix or object tag, any objects which are accessed less frequently can be moved into the three archive tiers.
Now there's a 90 day minimum for archive instant access, and this is fully automatic.
Think of this as a cheaper version of infrequent access for objects which are accessed even less frequently.
Crucially this tier, so archive instant access, still gives you access to the data automatically as and when you need it, just like infrequent access.
In addition to this, there are two more entirely optional tiers, archive access and deep archive.
And these can be configured so that objects move into them when they haven't been accessed for 90 through to 730 days for archive access, or 180 through to 730 days for deep archive.
Now these are entirely optional, and it's worth mentioning that when objects are moved into these tiers, getting them back isn't immediate.
There's a retrieval time to bring them back, so only use these tiers when your application can tolerate asynchronous access patterns.
So archive instant access requires no application or system changes, it's just another tier for less frequently accessed objects with a lower cost.
Archive access and deep archive change things: your applications must support these tiers, because retrieving objects requires specific API calls.
Now if objects do stay in infrequent access or archive instant access, when the objects become super viral in access, these will be moved back to frequent access automatically with no retrieval charges.
Intelligent tiering has a monitoring and automation cost per 1000 objects instead of the retrieval cost.
So essentially the system manages the movement of data between these tiers automatically, with no retrieval charges, in exchange for this management fee.
The cost of the tiers are the same as the base S3 tiers, standard and infrequent access, there's just the management fee on top.
So it's more flexible than S3 standard and S3 infrequent access, but it's more expensive because of the management fee.
Now intelligent tiering is designed for long-lived data where the usage is changing or unknown.
If the usage is static, either frequently accessed or infrequently accessed, then you're better off using the direct S3 storage class, either standard or standard infrequent access.
Intelligent tiering is only good if you have data where the pattern changes or you don't know it.
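To make that trade-off concrete, here's some illustrative arithmetic. The prices are invented placeholders; the point is that the per-object monitoring fee makes intelligent tiering strictly more expensive whenever the access pattern is static.

```python
# Illustrative comparison of standard vs intelligent tiering for static
# usage. Prices are made up; the point is the per-object monitoring fee.
PRICE_PER_GB = 0.023                   # same base storage rate (illustrative)
MONITORING_PER_1000_OBJECTS = 0.0025   # intelligent tiering fee (illustrative)

def standard_cost(gb: float) -> float:
    return gb * PRICE_PER_GB

def intelligent_tiering_cost(gb: float, objects: int) -> float:
    return gb * PRICE_PER_GB + (objects / 1000) * MONITORING_PER_1000_OBJECTS

# For data that stays frequently accessed, intelligent tiering is strictly
# more expensive because of the monitoring fee; it only pays off when the
# access pattern changes and objects get moved to cheaper tiers for you.
```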
Now with that being said, that's all of the S3 storage classes which I want to cover.
That's at least enough technical information and context which you'll need for the exam and to get started in the real world.
So go ahead and complete the video and when you're ready, I look forward to you joining me in the next.
Welcome back.
In this lesson I want to cover S3 object storage classes.
Now this is something which is equally as important at the associate and the professional level.
You need to understand the costs relative to each other, the technical features and compromises, as well as the types of situations where you would and wouldn't use each of the storage classes.
Now we've got a lot to cover so let's jump in and get started.
The default storage class available within S3 is known as S3 Standard.
So with S3 Standard, when Bob stores his objects on S3 using the S3 API, the objects are stored across at least three availability zones.
And this level of replication means that S3 Standard is able to cope with multiple availability zone failure, while still safeguarding data.
So start with this as a foundation when comparing other storage classes, because this is a massively important part of the choice between different S3 storage classes.
Now this level of replication means that S3 Standard provides 11 nines of durability, and this means if you store 10 million objects within an S3 bucket, then on average you might lose one object every 10,000 years.
The replication uses MD5 checksums together with cyclic redundancy checks known as CRCs to detect and resolve any data issues.
Now when objects which are uploaded to S3 have been stored durably, S3 responds with an HTTP/1.1 200 OK status.
This is important to remember for the exam: if S3 responds with a 200 code, then you know that your data has been stored durably within the product.
With S3 Standard, there are a number of components to how you're billed for the product.
You're billed a per-gigabyte-per-month fee for data stored within S3, a per-gigabyte charge for transfer of data out of S3 (transfer into S3 is free), and then finally a price per 1,000 requests made to the product.
There are no specific retrieval fees, no minimum duration for objects stored, and no minimum object sizes.
Now this isn't true for the other storage classes, so this is something to focus on as a solutions architect and in the exam.
With S3 Standard, you aren't penalized in any way.
You don't get any discounts, but it's the most balanced class of storage when you look at the dollar cost versus the features and compromises.
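As a sketch, the three billing components can be put together like this. All of the default prices are illustrative placeholders, not real AWS pricing.

```python
# The three billing components for S3 standard described above.
# All default prices here are illustrative placeholders.
def s3_standard_monthly_cost(stored_gb: float, transfer_out_gb: float,
                             requests: int,
                             storage_price: float = 0.023,   # per GB-month
                             transfer_price: float = 0.09,   # per GB transferred out
                             request_price: float = 0.005) -> float:  # per 1,000 requests
    storage = stored_gb * storage_price
    transfer = transfer_out_gb * transfer_price  # transfer IN is free
    request = (requests / 1000) * request_price
    return storage + transfer + request

# e.g. 100 GB stored, 10 GB out, 10,000 requests at the placeholder prices:
example = s3_standard_monthly_cost(100, 10, 10_000)
```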
Now S3 Standard makes data accessible immediately.
It has a first byte latency of milliseconds, and this means that when data is requested, it's available within milliseconds, and objects can be made publicly available.
This is either using S3 permissions, or by enabling static website hosting and making the contents of the bucket available to the public internet; S3 Standard supports both of these access architectures.
So for the exam, the critical point to remember is that S3 Standard should be used for frequently accessed data, which is important and non-replaceable.
It should be your default, and you should only investigate moving through the storage classes when you have a specific reason to do so.
Now let's move on and look at another storage class available within S3, and the next class I want to cover is S3 Standard Infrequent Access... [transcript gap] ...so this class should only be used for data which can be easily replaced.
So this means things like replica copies, if you're using same or cross region replication, then you can use this class for your replicated copy, or if you're generating intermediate data that you can afford to lose, then this storage class offers great value.
Don't use this for your only copy of data, because it's too risky.
Don't use this for critical data, because it's also too risky.
Don't use this for data which is frequently accessed, frequently changed or temporary data, because you'll be penalized by the duration and size minimums that this storage class is affected by.
Okay, so this is the end of part one of this lesson.
It was getting a little bit on the long side and I wanted to give you the opportunity to take a small break, maybe stretch your legs or make a coffee.
Part two will continue immediately from this point.
Welcome back and in this video I want to talk about S3 bucket keys which are a way to help S3 scale and reduce costs when using KMS encryption.
Let's jump in and take a look.
So let's look at a pretty typical architecture.
We have S3 in the middle, we have KMS on the right and inside we have a KMS key, the default S3 service key for this region which is named AWS/S3.
Then on the left we have a user Bob who's looking to upload some objects to this S3 bucket using KMS encryption.
Within S3 when you use KMS each object which is put into a bucket uses a unique data encryption key or DEK.
So let's have a look at how that works.
So when Bob begins his first PUT operation when the object is arriving in the bucket a call is made to KMS which uses the KMS key to generate a data encryption key unique to this object.
The object is encrypted and then the object and the unique data encryption key are stored side by side on S3.
Each object stored on S3 uses a unique data encryption key which is a single call to KMS to generate that data encryption key.
This means that for every single object that Bob uploads it needs a single unique call to KMS to generate a data encryption key to return that data encryption key to S3, use that key to encrypt the object and then store the two side by side.
On screen we have three individual PUTs.
But imagine if this was 30 or 300 or 300,000 every second.
This presents us with a serious problem.
KMS has a cost.
It means that using SSE-KMS carries an ever increasing cost which goes up based on the number of objects that you put into an S3 bucket.
And perhaps more of a problem is that there are throttling issues.
The GenerateDataKey operation can only be run either 5,500, 10,000 or 50,000 times per second, and this quota is shared across cryptographic operations within a region.
Now this exact number depends on which region you use, but it effectively places a limit on how often a single KMS key can be used to generate data encryption keys, which limits the number of PUTs that you can do to S3 every second.
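To put those numbers in context, here's a tiny sketch of the ceiling: if every SSE-KMS PUT costs one data key generation call, the regional quota is effectively the maximum encrypted PUT rate. The tier names are just labels for the three quota values mentioned.

```python
# Without bucket keys, every SSE-KMS PUT costs one KMS data key generation
# call, so the KMS request quota caps encrypted PUT throughput.
# Quota values vary by region; tier names are just labels.
KMS_QUOTAS = {"low": 5500, "mid": 10000, "high": 50000}  # requests/second

def put_ceiling(region_tier: str, other_kms_usage: int = 0) -> int:
    """Max SSE-KMS PUTs/second if every PUT makes one KMS call,
    allowing for other workloads sharing the same quota."""
    return max(KMS_QUOTAS[region_tier] - other_kms_usage, 0)
```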
And this is where bucket keys improve the situation.
So let's look at how.
So with bucket keys the architecture changes a little.
We have the same basic architecture but instead of the KMS key being used to generate each individual data encryption key, instead it's used to generate a time limited bucket key and conceptually this is given to the bucket.
This is then used for a period of time to generate any data encryption keys within the bucket for individual object encryption operations.
And this essentially offloads the work from KMS to S3.
It reduces the number of KMS API calls so reduces the cost and increases scalability.
Now it's worth noting that this is not retroactive.
It only affects objects and the object encryption process after it's enabled on a bucket.
So this is a great way that you can continue to use KMS for encryption with S3 but offload some of the intensive processing from KMS onto S3 reducing costs and improving scalability.
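Here's a rough model of that reduction. The length of the bucket key's time window is an assumption made purely for illustration; the real window is managed internally by S3.

```python
# Rough model of KMS call reduction with bucket keys: instead of one
# KMS call per object PUT, one bucket-key request covers a time window.
# The window length is an illustrative assumption, not a documented value.
def kms_calls(objects: int, puts_per_second: int,
              bucket_key: bool, window_seconds: int = 300) -> int:
    if not bucket_key:
        return objects  # one KMS call per object PUT
    seconds = objects / puts_per_second
    return max(int(seconds / window_seconds), 1)  # one call per key window

without = kms_calls(1_000_000, 1000, bucket_key=False)  # one call per object
with_keys = kms_calls(1_000_000, 1000, bucket_key=True)  # a handful of calls
```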
Now there are some things that you do need to keep in mind when you're using S3 bucket keys.
First, after you enable an S3 bucket key, if you're using CloudTrail to look at KMS logs, then those logs are going to show the bucket ARN instead of your object ARN.
Now additionally, because you're offloading a lot of the work from KMS to S3, you're going to see fewer CloudTrail events for KMS in those logs.
So that's logically offloading the work from KMS to S3 and instead of KMS keys being used to encrypt individual objects, they're used to generate the bucket key.
And so you're going to see the bucket in the logs not the object.
So keep that in mind.
Bucket keys also work with same region replication and cross region replication.
There are some nuances you need to keep in mind, though.
Generally, when S3 replicates an encrypted object, it preserves the encryption settings of that encrypted object.
So the encrypted object in the destination bucket generally uses the same settings as the encrypted object in the source bucket.
Now if you're replicating a plain text object, so something that's not encrypted and you're replicating that through to a destination bucket which uses default encryption or an S3 bucket key, then S3 encrypts that object on its way through to the destination with the destination bucket's configuration.
And it's worth noting that this can result in ETag changes between the source and the destination.
Now I'll make sure to include a link attached to this video which details all of these nuanced features when you're using S3 bucket keys together with same or cross-region replication.
It's beyond the scope of this video, but it might be useful for the exam and the real world to be aware of these nuanced features and requirements as you're using the product.
Now with that being said, that is everything that I wanted to cover in this video.
So go ahead and complete the video and when you're ready, I'll look forward to you joining me in the next.
-
Welcome back and in this demo lesson I just want to give you the opportunity to gain some practical experience of how S3 handles encryption.
So what we're going to do is create an S3 bucket and into that bucket we're going to put a number of objects and for each one we're going to utilize a different type of server-side encryption.
So we'll be uploading one object using SSE-S3 so S3 managed encryption and then one object using SSE-KMS which will utilize KMS for key management.
So once we've uploaded those objects we'll experiment with some permissions changes just to see how each of these different encryption types work.
So let's get started.
Now the first thing you'll need to check is that you're logged into the IAM admin user of the general AWS account and you need to have the Northern Virginia region selected.
Then let's move to S3 so click in the services drop-down type S3 and then open that in a new tab and then click to move to the S3 console.
Now I want you to go ahead and click on create bucket and then just create a bucket called catpicks and then put some random text on the end.
You should use something different and something that's unique to you.
Leave the region set as US East Northern Virginia scroll all the way down to the bottom and click on create bucket.
Next change across to the key management service console that will either be in your history or you'll have to type it in the services drop-down.
Once you're here, go ahead and click on create key, pick symmetric key, then expand advanced options and make sure KMS is selected as well as single region key.
Click next.
The alias we'll be using is catpicks, so type catpicks and click next.
Don't select anything for define key administrative permissions.
We'll not set any permissions on the key policy this time so just click next and then on this screen define key usage permissions just click next again without selecting anything.
So the key policy that's going to be created for this KMS key only trusts the account, so only the account root user of this specific AWS account, and that's what we want.
So go ahead and click on finish.
At this point move back across to the S3 console, go into the catpicks bucket that you just created, and I'd like you to go ahead and download the file that's attached to this lesson. Extract that file, and inside the resulting folder are some objects that you're going to be uploading to this S3 bucket.
So go ahead and do that, then click on upload and add files, locate the folder that you just extracted and go into it, and you should see three images in that folder: default-merlin.jpg, sse-kms-ginny.jpg and sse-s3-dweez.jpg.
We need to upload these one by one because we're going to be configuring the encryption type for each.
So the first one is sse-s3-dweez.jpg; select that and click on open, expand properties and then scroll down to server-side encryption, and it's here where you can accept the bucket defaults by not specifying an encryption key, or you can specify one.
When you pick to specify an encryption key, you're again offered the ability to use the bucket default settings for encryption, or you can override the bucket settings and choose between two different types: either Amazon S3 key, which is SSE-S3, or AWS Key Management Service key, which is SSE-KMS.
For this upload we're going to use SSE-S3, so Amazon S3 managed keys, so select this option, scroll all the way down to the bottom and click upload.
Wait for this upload process to complete and then click on close.
Now we're going to follow that same process again so click on upload again, add files.
This time it's sse-kms-ginny.jpg.
Select that and click on open, expand properties and then scroll down to server-side encryption, click specify an encryption key, override bucket settings for default encryption, and this time we're going to use SSE-KMS, so select that, then select choose from your AWS KMS keys. You can either use the AWS managed key, so the service default key, AWS/S3, or you can choose your own KMS key to use for this object.
What I'm going to do first is select this AWS managed key so the default key for this service and scroll all the way down to the bottom and click on upload.
Wait for this upload process to complete and then click on close, and that will encrypt the object using the default AWS managed KMS key for S3 that now exists inside KMS.
I wanted to do that just to demonstrate how it automatically creates that key, so now let's go ahead and re-upload this object: click on upload, add files, select sse-kms-ginny.jpg and click on open, scroll down, expand properties, scroll down again, for server-side encryption select specify an encryption key, select override bucket settings for default encryption, pick AWS Key Management Service key, so SSE-KMS, select choose from your AWS KMS keys, click in the drop down and then select the catpicks key that we created earlier.
Once you've done that scroll all the way down to the bottom click on upload, wait for that process to complete and then click on close.
Now at this point we've got two different objects in this bucket and we're going to open both of these.
We're going to start with sse-s3-dweez, so let's click on it and then click on open, and that works. Then let's try the other object, sse-kms-ginny, so click on that and click on open, and that also opens okay, because IAM admin is a full administrator of the entire AWS account, and that includes S3 and all the other services including KMS.
Next what we're going to do is apply a deny policy to the IAM admin user which prevents us from using KMS, so we'll stay a full account administrator and a full S3 administrator, but we're going to block off the KMS service entirely, and I want to demonstrate exactly what that does to our ability to open these two objects.
So click on services and either open the IAM console from the history or type it in the find services box and then click it.
Once we're here, click on users, select your IAM admin user, click add permissions and then create inline policy, click on the JSON tab, delete the skeleton template that's in this box and then paste in the contents of the deny kms.json file; this is contained in the folder you extracted from this lesson's download link, and it's also attached to this lesson.
This is what it should look like: the effect is deny, and it denies any actions, so kms:* on *, meaning any KMS actions on all resources; essentially this blocks off the entire KMS service for this user.
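Based on the description above, the deny policy will look something like this (a minimal reconstruction; the file attached to the lesson is the authoritative version):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "kms:*",
            "Resource": "*"
        }
    ]
}
```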
So go ahead and click on review policy, call it deny KMS and click on create policy, and this now means that IAM admin can no longer access KMS. So now if we go back to the S3 console and go inside the catpicks bucket, we should still be able to open the sse-s3-dweez.jpg object.
If we click that and click on open, because this is encrypted using SSE-S3, which is completely internal to the S3 product, we should have no problems opening this object, because we have the permissions inside S3 to do anything in S3. But something different happens if we try to open the sse-kms-ginny object.
Now just to explain what will occur when I click on this open link: S3 will have to liaise with KMS and get KMS to decrypt the data encryption key that encrypts this object, so S3 needs to retrieve the encrypted data encryption key for this object and request that KMS decrypts it.
Now if that worked we'd get back the plain text version of that key and we would use it to decrypt this object and it would open up in a tab without any issues because we've got full rights over s3 we have permission to do almost all of that process but what we don't have permission to do is to actually get KMS to decrypt this encrypted data encryption key.
We don't have that permission because we just added a deny policy to the IAM admin user, and as we know by now from deny, allow, deny, deny always wins; an explicit deny always overrules everything else.
So when I click on open, S3 retrieves this encrypted data encryption key, gives it to KMS and says please decrypt this and give me the plain text back, and KMS is going to refuse, so what we see is an access denied error.
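The deny, allow, deny evaluation order can be sketched in a few lines of Python. This is a hypothetical simplified evaluator, not the real IAM engine, but it captures why the KMS call fails while S3 calls still succeed:

```python
# Simplified sketch of IAM policy evaluation (hypothetical helper, not AWS code):
# an explicit Deny always overrides any Allow; with no matching statement the
# result is an implicit deny.

def evaluate(statements, action):
    decision = "ImplicitDeny"
    for stmt in statements:
        matches = stmt["Action"] == "*" or action.startswith(stmt["Action"].rstrip("*"))
        if not matches:
            continue
        if stmt["Effect"] == "Deny":
            return "ExplicitDeny"  # an explicit deny wins immediately
        decision = "Allow"
    return decision

policies = [
    {"Effect": "Allow", "Action": "*"},      # full administrator permissions
    {"Effect": "Deny",  "Action": "kms:*"},  # the inline deny KMS policy
]
print(evaluate(policies, "s3:GetObject"))  # Allow
print(evaluate(policies, "kms:Decrypt"))   # ExplicitDeny
```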
So now we've implemented role separation: even though we have full S3 admin rights, so if I went back to this bucket, clicked on the sse-kms-ginny file and deleted it, I would be able to delete that object because I have full control over S3, I can't open it, because I've prevented us from accessing KMS.
So SSE-KMS is definitely the encryption type that you want to use if you've got really extensive regulations or any security requirements around key control.
So let's go ahead and just remove that restriction so just go back to the IAM console.
I just want to do this before we forget and have problems later in the course.
Click on users click on IAM admin check the box next to deny KMS and then click remove and confirm that removal and that will allow us to access KMS again.
We can verify that by moving to the KMS console and we can bring up this list which proves that we've got some access to KMS again so that's good.
Now if we just go ahead and click on the AWS managed keys option on the left, this is where you'll be able to see the default encryption key that's used when you upload an object using SSE-KMS encryption but don't pick a particular key.
Now if we open this, because it's an AWS managed key we don't have the ability to set any key rotation; we can see the key policy here, but we can't make any changes to it. This policy is set by AWS when it creates the key so that it only allows access from S3, so it's a fixed key policy, and we can't control anything about this key.
Now contrast that with the customer managed keys that we've got: if we go into catpicks, this is the key that we created, and we can edit the key policy, we could switch to policy view and make changes, and we've got the ability to control key rotation. So if you face any exam questions where you need to fully manage the keys that are used as part of the S3 encryption process, then you've got to use SSE-KMS.
Now if we just return to the S3 console there's just one more thing that I want to demonstrate: go into the catpicks bucket again, click on properties, locate default encryption and then click on edit, and this is where you get the option to specify the default encryption to use for this bucket.
Now again, this isn't a restriction; it does not prevent anyone uploading objects to the bucket using a different type of encryption. All it does is specify the default if the upload itself does not specify an encryption method. So we could select Amazon S3 key, which is SSE-S3 (you might also see this referred to elsewhere as AES-256; it's known by that name too), or we could select the AWS Key Management Service key, known as SSE-KMS, and this is where we can either choose to use the default key or pick a customer managed key that we want to use as the default for the bucket.
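For reference, bucket default encryption of this kind corresponds roughly to the following `put-bucket-encryption` configuration; the key ARN and account ID here are placeholders, not values from the demo:

```json
{
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
            },
            "BucketKeyEnabled": false
        }
    ]
}
```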
So let's just demonstrate that. Go ahead and select the catpicks key to use for this bucket, then scroll down and click on save changes, and that will set the defaults for the bucket. To prove it, click on the objects tab and upload a new object to this bucket: click on upload, add files, then select the default-merlin object and click open, scroll down and click on upload. Even though we didn't pick a particular encryption method for this object, it uses the default settings that we picked for the bucket, and now we can see that the default-merlin.jpg object has been uploaded. So if we open up default-merlin, we can see it's using SSE-KMS as the server-side encryption type, and it's using the KMS key that we set in the default encryption settings on the bucket.
Okay, that's everything I wanted to cover in this demo lesson, so let's just tidy up to make sure that we don't experience any charges.
Go back to the Amazon S3 console, select the bucket that you've created and then click on empty; you'll need to confirm to empty that bucket.
Once that process has completed and the bucket's emptied, follow that same process, but this time click on delete to delete the bucket from your AWS account.
Then move to the Key Management Service console and we'll mark the key that we created for deletion, so select customer managed keys, select catpicks, click on key actions, schedule key deletion, set this to 7 days, which is the minimum, check the box and click on schedule deletion.
With that being said, that's everything I wanted to cover in this demonstration; I hope it's been fun and useful.
Go ahead and mark this video as complete, and when you're ready I'll see you in the next video.
-
Welcome back and in this lesson I want to talk about S3 encryption.
Now we're going to be focusing on server-side encryption, known as SSE, but I'll also be touching on client-side encryption and how that's different.
Now we've got a lot to get through so let's jump in and get started.
Now before we start there's one common misconception which I want to fix right away, and that's that buckets aren't encrypted, objects are.
You don't define encryption at the bucket level.
There's something called bucket default encryption, but that's different and I'll cover that elsewhere in the course.
For now, understand that you define encryption at the object level, and each object in a bucket might be using different encryption settings.
Now before we talk about the ways that S3 natively handles encryption for objects, I think it's useful to just review the two main architectures of encryption which can be used with the product.
There's client-side encryption and server-side encryption, and both of these refer to what method is used for encryption at rest, and this controls how objects are encrypted as they're written to disk.
It's a method of ensuring that even if somebody were to get the physical disks from AWS which your data is on, they would need something else, a type of key to access that data.
So visually, this is what a transaction between a group of users or an application and S3 looks like.
The users or the application on the left are uploading data to an S3 endpoint for a specific bucket, which gets stored on S3's base storage hardware.
Now it's a simplistic overview, but for this lesson it's enough to illustrate the difference between client-side encryption and server-side encryption.
So on the top we have client-side encryption, and on the bottom we have server-side encryption.
Now this is a really, really important point which often confuses students.
What I'm talking about in this lesson is encryption at rest, so how data is stored on disk in an encrypted way.
Both of these methods also use encryption in transit between the user-side and S3.
So this is an encrypted tunnel which means that you can't see the raw data inside the tunnel.
It's encrypted.
So ignoring any S3 encryption, ignoring how data is encrypted as it's written to disk, data transferred to S3 and from S3 is generally encrypted in transit.
Now there are exceptions, but use this as your default and I'll cover those exceptions elsewhere in the course.
So in this lesson when we're talking about S3 encryption, we're focusing on encryption at rest and not encryption in transit, which happens anyway.
Now the difference between client-side encryption and server-side encryption is pretty simple to understand when you see it visually.
With client-side encryption, the objects being uploaded are encrypted by the client before they ever leave it, and this means that the data is ciphertext the entire time.
From AWS's perspective, the data is received in a scrambled form and then stored in a scrambled form.
AWS would have no opportunity to see the data in its plain text form.
With server-side encryption known as SSE, it's slightly different.
Here, even though the data is encrypted in transit using HTTPS, the objects themselves aren't initially encrypted, meaning that inside the tunnel, the data is in its original form.
Let's assume it's animal images.
So you could remove the HTTP encrypted tunnel somehow and the animal pictures would be in plain text.
Now once the data hits S3, then it's encrypted by the S3 servers, which is why it's referred to as server-side encryption.
So at a high level, the differences are as follows: with client-side encryption, everything is yours to control.
You take on all of the risks and you control everything, which is both good and bad.
You take the original data, you are the only one who ever sees the plain text version of that data, you generate a key, you hold that key and you manage that key.
You are responsible for recording which key is used for which object, and you perform the encryption process before it's uploaded to S3, and this consumes CPU capacity on whatever device is performing the encryption.
You just use S3 for storage, nothing else.
It isn't involved in the encryption process in any way, so you own and control the keys, the process and any tooling.
So if your organization needs all of these, if you have real reasons that AWS cannot be involved in the process, then you need to use client-side encryption.
Now with server-side encryption known as SSE, you allow S3 to handle some or all of that process, and this means there are parts that you need to trust S3 with.
How much of that process you trust S3 with, and how you want the process to occur, determines which type of server-side encryption you use, as there are multiple types.
Now AWS has recently made server-side encryption mandatory, and so you can no longer store objects in an unencrypted form on S3.
You have to use encryption at rest.
So let's break apart server-side encryption and review the differences between each of the various types.
There are three types of server-side encryption available for S3 objects, and each is a trade-off of the usual things, trust, overhead, cost, resource consumption and more.
So let's quickly step through them and look at how they work.
The first is SSE-C, and this is server-side encryption with customer-provided keys.
Now don't confuse this with client-side encryption because it's very different.
The second is SSE-S3, which is server-side encryption with Amazon S3 managed keys, and this is the default.
The last one is an enhancement on SSE-S3, which is SSE-KMS, and this is server-side encryption with KMS keys stored inside the AWS Key Management Service, known as KMS.
Now the difference between all of these methods is what parts of the process you trust S3 with and how the encryption process and key management is handled.
At a high level, there are two components to server-side encryption.
First, the encryption and decryption process.
This is the process where you take plain text, a key and an algorithm, and generate ciphertext.
It's also the reverse: taking that ciphertext and a key, and using an algorithm to output plain text.
Now this is symmetrical encryption, so the same key is used for both encryption and decryption.
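To make the symmetric property concrete, here's a toy Python sketch in which the same key both encrypts and decrypts. The XOR keystream stands in for a real algorithm such as AES-256; it is illustrative only and must never be used for actual data.

```python
# Toy symmetric cipher: the SAME key encrypts and decrypts.
# A SHA-256-based XOR keystream stands in for a real algorithm like AES-256.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a deterministic stream of bytes from the key.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so this one function encrypts AND decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"the-shared-secret-key"
plaintext = b"battle plans"
ciphertext = xor_cipher(key, plaintext)          # encrypt
assert xor_cipher(key, ciphertext) == plaintext  # the same key decrypts
```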
The second component is the generation and management of the cryptographic keys, which are used as part of the encryption and decryption processes.
These three methods of server-side encryption, they handle these two components differently.
Now let's look at how.
Now before we do, again, I just want to stress that SSE is now mandatory on objects within S3 buckets.
This process will occur, you cannot choose not to use it.
The only thing that you can influence is how the process happens and what version of SSE is utilized.
Now first, with SSE-C, the customer is responsible for the keys, and S3 manages the encryption and decryption processes.
So the major change between client-side encryption and this is that S3 are handling the cryptographic operations.
Now this might sound like a small thing, but if you're dealing with millions of objects and a high number of transactions, then the CPU capability required to do encryption can really add up.
So you're essentially offloading the CPU requirements of this process to AWS, but you still need to generate and manage the key or keys.
So when you put an object into S3 using this method, you provide the plain text object and an encryption key.
Remember this object is encrypted in transit by HTTPS on its way to S3, so even though it's plain text right now, it's not visible to an external observer.
When it arrives at S3, the object is encrypted and a hash of the key is tagged to the object and the key is destroyed.
Now this hash is one way, it can't be used to generate a new key, but if a key is provided during decryption, the hash can identify if that specific key was used or not.
So the object and this one-way hash are stored on disk persistently.
Remember S3 doesn't have the key at this stage.
To decrypt, you need to provide S3 with the request and the key used to encrypt the object.
If it's correct, S3 decrypts the object, discards the key and returns the plain text.
And again, returning the object is done over an encrypted HTTPS tunnel, so from the perspective of an observer, it's not visible.
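The SSE-C key handling just described can be sketched like this. It's illustrative only: AWS's actual implementation and hash choice differ, and the helper names are made up.

```python
# Sketch of SSE-C key handling: S3 stores only a one-way hash of the
# customer's key, then compares hashes when the key is presented again.
import hashlib

def store_object(plaintext: bytes, customer_key: bytes):
    # (Actual encryption omitted here; the focus is the key hash.)
    key_hash = hashlib.sha256(customer_key).hexdigest()
    return {"data": plaintext, "key_hash": key_hash}  # key itself is discarded

def get_object(stored, customer_key: bytes):
    # The hash can't recreate the key, but it proves whether THIS key matches.
    if hashlib.sha256(customer_key).hexdigest() != stored["key_hash"]:
        raise PermissionError("provided key does not match the one used to encrypt")
    return stored["data"]

obj = store_object(b"cat picture bytes", b"correct-key")
print(get_object(obj, b"correct-key"))  # succeeds with the original key
try:
    get_object(obj, b"wrong-key")
except PermissionError as err:
    print("denied:", err)
```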
Now this method is interesting.
You still have to manage your keys, which does come with a cost and some effort, but you also retain control of that process, which is good in some regulation-heavy environments.
You also save on CPU requirements versus client-side encryption, because S3 performs encryption and decryption, meaning smaller devices don't need to consume resources for this process.
But you need to trust that S3 will discard the keys after use, and there are some independent audits which prove what AWS does and doesn't do during this process.
So you choose SSE-C when you absolutely need to manage your own keys, but are happy to allow S3 to perform the encryption and decryption processes.
You would choose client-side encryption when you need to manage the keys and also the encryption and decryption processes, and you might do this if you never want AWS to have the ability to see your plain text data.
So let's move on to the next type of server-side encryption, and the type I want to describe now is SSE-S3.
And with this method, AWS handles both the encryption processes as well as the key generation and management.
When putting an object into S3, you just provide the plain text data.
When an object is uploaded to S3 using SSE-S3, it's encrypted by a key which is unique for every object, so S3 generates a key just for that object, and then it uses that key to encrypt that object.
For extra safety, S3 has a key which it manages as part of the service.
You don't get to influence this, you can't change any options on this key, nor do you get to pick it.
It's handled end-to-end by S3.
From your perspective, it isn't visible anywhere in the user interface, and it's rotated internally by S3 out of your visibility and control.
This S3 key is used to encrypt the per-object key, and then the plain text version of that per-object key is discarded.
What we're left with is a ciphertext object and a ciphertext key, and both of these are persistently stored on disk.
With this method, AWS take over the encryption process just as with SSE-C, but they also manage the keys on your behalf, which means even less admin overhead.
The flip side with this method is that you have very little control over the keys used.
The S3 key is outside of your control, and the keys used to encrypt and decrypt objects are also outside of your control.
For most situations, SSE-S3 is a good default type of encryption which makes sense.
It uses a strong algorithm, AES256, the data is encrypted at rest and the customer doesn't have any admin overhead to worry about, but it does present three major problems.
Firstly, if you're in an environment which is strongly regulated, where you need to control the keys used and control access to the keys, then this isn't suitable.
If you need to control rotation of keys, this isn't suitable.
And then lastly, if you need role separation, this isn't suitable.
What I mean by role separation is that a full S3 administrator, somebody who has full S3 permissions to configure the bucket and manage the objects, can also decrypt and view data.
You can't stop an S3 full administrator from viewing data when using this type of server-side encryption.
And in certain industries, such as financial services and medical, you might not be allowed to have this kind of open access for service administrators.
You might have certain groups within the business who can access the data but can't manage permissions, and you might have requirements for a sysadmin group who need to manage the infrastructure but can't be allowed to access data within objects.
And with SSE-S3, this cannot be accomplished in a rigorous best practice way.
And this is where the final type of server-side encryption comes in handy.
The third type of server-side encryption is server-side encryption with AWS Key Management Service Keys, known as SSE-KMS.
How this differs is that we're now involving an additional service, the Key Management Service, or KMS.
Instead of S3 managing keys, this is now done via KMS.
Specifically, S3 and KMS work together.
You create a KMS key, or you can use the service default one, but the real power and flexibility comes from creating a customer-managed KMS key.
It means this is created by you within KMS, it's managed by you, and it has isolated permissions, and I'll explain why this matters in a second.
In addition, the key is fully configurable.
Now this seems on the surface like a small change, but it's actually really significant in terms of the capabilities which it provides.
When S3 wants to encrypt an object using SSE-KMS, it has to liaise with KMS and request a new data encryption key to be generated using the chosen KMS key.
KMS delivers two versions of the same data encryption key, a plain text version and an encrypted or cipher text version.
S3 then takes the plain text object and the plain text data encryption key and creates an encrypted or cipher text object, and then it immediately discards the plain text key, leaving only the cipher text version of that key and both of these are stored on S3 storage.
So you're using the same overarching architecture, the per object encryption key, and the key which encrypts the per object key, but with this type of server-side encryption, so using SSE-KMS, KMS is generating the keys.
Now KMS keys can only encrypt objects up to 4KB in size, so the KMS key is used to generate data encryption keys which don't have those limitations.
It's important to understand that KMS doesn't store the data encryption keys, it only generates them and gives them to S3.
But you do have control over the KMS key, the same control as you would with any other customer-managed KMS key.
So in regulated industries, this alone is enough reason to consider SSE-KMS because it gives fine-grained control over the KMS key being used as well as its rotation.
You also have logging and auditing on the KMS key itself, and with CloudTrail you'll be able to see any calls made against that key.
But probably the best benefit provided by SSE-KMS is the role separation.
To decrypt an object encrypted using SSE-KMS, you need access to the KMS key which was originally used.
That KMS key is used to decrypt the encrypted copy of the data encryption key for that object which is stored along with that object.
If you don't have access to KMS, you can't decrypt the data encryption key, so you can't decrypt the object, and so it follows that you can't access the object.
Now what this means is that if we had an S3 administrator, let's call him Phil, then because we're using SSE-KMS, Phil as an S3 administrator does have full control over this bucket.
But because Phil has been given no permissions on the specific KMS key, he can't read any objects.
So he can administer the object as part of administering S3, but he can't see the data within those objects because he can't decrypt the data encryption key using the KMS key because he has no permissions on that KMS key.
Now this is an example of role separation, something which is allowed using SSE-KMS versus not allowed using SSE-S3.
With SSE-S3, Phil as an S3 administrator could administer and access the data inside objects.
However, using SSE-KMS, we have the option to allow Phil to view data in objects or not, something which is controllable by granting permissions or not on specific KMS keys.
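The whole envelope encryption flow, including the role separation benefit, can be sketched in Python. All names here are hypothetical, XOR stands in for real encryption, and this is a conceptual model of the behaviour described above, not AWS code.

```python
# Envelope encryption sketch: KMS generates a data encryption key (DEK) and
# returns both plaintext and ciphertext versions; S3 keeps only the ciphertext
# DEK alongside the object. Reading later requires KMS permissions.
import os

KMS_KEY = os.urandom(32)  # the customer managed KMS key (never leaves "KMS")

def xor(key, data):
    # Toy cipher standing in for real encryption; XOR is its own inverse.
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

def kms_generate_data_key():
    dek = os.urandom(32)
    return dek, xor(KMS_KEY, dek)  # (plaintext DEK, ciphertext DEK)

def kms_decrypt(ciphertext_dek, caller_has_kms_access):
    if not caller_has_kms_access:
        raise PermissionError("AccessDenied: no permissions on the KMS key")
    return xor(KMS_KEY, ciphertext_dek)

# Upload: encrypt with the plaintext DEK, discard it, store the ciphertext DEK.
dek, encrypted_dek = kms_generate_data_key()
stored = {"object": xor(dek, b"battle plans"), "encrypted_dek": encrypted_dek}
del dek

# An S3 administrator WITH permissions on the KMS key can read the object...
dek = kms_decrypt(stored["encrypted_dek"], caller_has_kms_access=True)
assert xor(dek, stored["object"]) == b"battle plans"

# ...but one WITHOUT KMS permissions cannot, despite full S3 access: role separation.
try:
    kms_decrypt(stored["encrypted_dek"], caller_has_kms_access=False)
except PermissionError as err:
    print(err)
```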
So time for a quick summary before we finish this lesson, and it's really important that you understand these differences for any of the AWS exams.
With client-side encryption, you handle the key management and the encryption and decryption processes.
Use this if you need to control both of those and don't trust AWS and their regular audits.
This method uses more resources to manage keys as well as resources for actually performing the encryption and decryption processes at scale.
But it means AWS never see your objects in plain text form because you handle everything end to end.
This generally means you either encrypt all objects in advance or use one of the client-side encryption SDKs within your application.
Now please don't confuse client-side encryption with server-side encryption, specifically SSE-C.
Client-side encryption isn't really anything to do with S3, it's not a form of S3 encryption, it's different.
You can use client-side encryption and server-side encryption together, there's nothing preventing that.
So now let's step through server-side encryption, and remember this is now on by default, it's mandatory.
The only choice you have is which method of SSE to use.
With SSE-C you manage the encryption keys, you can use the same key for everything, but that isn't recommended.
Or you can use individual keys for every single object, the choice is yours.
S3 accepts your choice of key and an object and it handles the encryption and decryption processes on your behalf.
This means you need to trust S3 with the initial plain text object and trust it to discard and not store the encryption key.
But in exchange S3 takes over the computationally heavy encryption and decryption processes.
And also keep in mind that the data is encrypted in transit using HTTPS.
So nobody outside AWS will ever have exposure to the plain text data in any way.
SSE-S3 uses AES-256, I mention this because it's often the way exam questions test your knowledge.
If you see AES-256, think SSE-S3.
With SSE-S3, S3 handles the encryption keys and the encryption process.
It's the default and it works well for most cases, but you have no real control over keys, permissions or rotation.
And it also can't handle role separation, meaning S3 full admins can access the data within objects that they manage.
Finally we have SSE-KMS which uses KMS and KMS keys which the service provides.
You can control key rotation and permissions, it's similar in operation to SSE-S3, but it does allow role separation.
So use this if your business has fairly rigid groups of people and compartmentalised sets of security.
You can have S3 admins with no access to the data within objects.
Now for all AWS exams make sure you understand the difference between client side and server side encryption.
And then for server side encryption try and pitch scenarios where you would use each of the three types of server side encryption.
Now that's everything I wanted to cover in this lesson about object encryption, specifically server side encryption.
Go ahead and complete this lesson, but when you're ready I look forward to you joining me in the next.
Welcome back and in this demo lesson I just wanted to give you a bit of practical exposure to KMS.
And in this demo we're going to continue using the scenario that we've got the cat ruler trying to send encrypted battle plans to the robot general.
Now to get started just make sure that you're logged in to the iamadmin user of the general AWS account, so the management account of the organisation.
As always you'll need to have the Northern Virginia region selected and once you do go ahead and move across to the KMS console.
So click in the search box at the top, type KMS and then open that in a new tab.
And then click on create key because we'll be creating a KMS key.
Now KMS allows the creation of symmetric or asymmetric keys and just to keep this demonstration simple we're going to demonstrate using a symmetric key.
So make sure this option is selected and then, just to demonstrate some of these options, expand this, and this is where you can set the key material origin.
If you recall from the previous lesson I talked about how the physical backing key material could be generated by KMS or imported and this is where you select between those two options.
Now I'm not going to talk about the custom key store at this point in the course.
I'll talk about this in more detail later on in the course when I talk about cloud HSM which is a different product entirely.
For now just select KMS and this will make KMS generate this physical backing key material that we'll be using to do our cryptographic operations.
Now historically KMS was a single region service which means that keys created in the product could never leave the region that they were created in.
More recently KMS has extended this functionality allowing you to create multi region keys.
Now for the purpose of this demo lesson we're only going to be using a single region key which is the default.
So make sure that single region key is selected and then we can continue to go ahead and click on next.
Now in the previous lesson I mentioned how a key has a unique ID but also that we could create an alias which references the key.
And that's what we can do here, so I'm going to go ahead and create an alias and I'm going to call the alias catrobot, all one word.
So type in catrobot and click next.
Now I discussed earlier how a KMS key has a key policy and a key policy is a resource policy that applies to the key.
Now it's here where we can specify the key administrators for a particular key.
There is a difference between identities that can manage a key and identities that can use a key for cryptographic operations like encrypt or decrypt.
It's this point where we define who can manage this key.
So go ahead and just check the box next to iamadmin.
So that will make sure that our iamadmin user can administer this key.
Once you do that, scroll down; there's a box here that also allows administrators to delete this key, and that's the default, so we'll leave that as is and click next.
Now the previous step is where we defined who had admin permissions on the key.
This stage lets us define who can use the key for cryptographic operations so encrypt and decrypt.
To keep things simple we're going to define a key user, which adds the relevant entries to the key policy.
So just check the box next to iamadmin, which is the user that we logged in as, and then just scroll down.
You could also add other AWS accounts here so that they had permission to use this key, but for this demonstration we don't need to do that.
So just click on next and this is the key policy that this wizard has created.
So if I just scroll down, it assigns the account level trust at the top, so the account itself, the account root user, is allowed to perform any kms:* actions on this key.
So that is the part of the policy, this statement here, which means that this key will trust this account.
It's this statement that defines the key administrators, so that iamadmin inside this account can create, describe, enable, list and perform all the other admin style actions.
Scroll down further still, and it's this statement that allows iamadmin to perform encrypt, decrypt, re-encrypt, generate data key and describe key actions against this key.
So all the permissions that we define are inside the key policy.
So at this point go ahead and click on finish and that will create the key as well as the alias that we use to reference this key.
If we go into this key I'll just show you some of the options that are available.
We'll be able to obviously edit the key policy and we can define key rotation.
So by default key rotation for a customer managed key is switched off and we can enable it to rotate the key once every year.
For an AWS managed key that is by default turned on and you can't switch it off and it also performs a rotation approximately once every year.
Now just click on AWS managed keys.
As we go through the course and start turning on encryption for various different services, you'll notice how each service, the first time it uses encryption with KMS, creates an AWS managed key in this list.
Now that's everything we need to do on the AWS side.
Now we can start using this key to perform some cryptographic operations.
So let's do that.
Now at this point rather than using the local command line interface on your local machine we're going to be using Cloud Shell.
This allows us to use the same set of instructions regardless of your local operating system.
So to launch Cloud Shell click on this icon and this will take a few minutes, but it will put you at a shell that uses your currently logged in user for its permissions.
So any commands you run in the shell will be run as your currently logged in user.
So the first thing that we'll be doing is to create a plain text battle plan.
So this is the message that the cat ruler is going to be sending to the robot general.
To generate that file we'll use echo and then space and then a speechmark and then a small message and the message is going to be find all the doggos and then a comma distract them with the yums.
So find all the doggos, distract them with the yums, and then a speechmark to close that off, and then we'll redirect that to a file called battleplans.txt.
And then press enter.
Now the commands to interact with KMS from the command line are fairly long so what I'll do is paste it in and then I'll step through it line by line and explain exactly what it's doing.
So first we need to encrypt the plain text battle plans and we want the result to be a cipher text document something that we can pass to the robot general which can't be intercepted en route and can be decrypted at the other end.
So this is the command that we need to run and I just want to step through this line by line.
The top part should be fairly obvious so we're running the AWS command line tools, the KMS module and using the encrypt action.
So this specifies that we want to encrypt a piece of data.
This line specifies the alias that we want to use to encrypt this piece of data.
You can either specify the key ID which uniquely identifies a key or you can specify an alias using the alias forward slash and then the name of the alias.
In this case I've elected to do that, so this is using the alias that we created in the first part of this demo lesson.
This line is where we're providing the plain text to KMS and instead of typing the plain text on the command line we're telling it to consult this file so battleplans.txt.
Now the next line is telling the command line tools to output the result as text and it's going to be a text output with a number of different fields.
The next line, double hyphen query, is telling the command line tools to select one particular field and that's the field cipher text blob and it's this field that contains the cipher text output from the KMS command.
Now the output of any of these commands that interact with KMS is going to be a base64 encoded file so it's not going to be binary data, it's going to be base64 encoded.
What we want is for our output to be a binary encrypted file, and so we need to take the result of this encryption command and pass it to a utility called base64, which, using this command line option, will decode that base64 and place the result into a file called not_battleplans.enc, and this is going to be our ciphertext result.
Now I know that command is relatively complex; KMS is not the easiest part of AWS to use from the command line, but I did want to step you through it line by line so you know what each line achieves.
Ok so let's go ahead and run this command. To do that we need to click on paste, and then once that's pasted into Cloud Shell press enter to run the command, and the output, not_battleplans.enc, will be our encrypted ciphertext.
So if I run cat not_battleplans.enc we get binary encrypted data, so obviously anyone looking from the outside will just see scrambled data and won't understand what the message is.
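The base64 step the commands rely on is worth seeing in isolation. This is a minimal sketch of what the shell pipeline does with the CiphertextBlob field; the value below is a made-up placeholder, not real KMS output:

```python
import base64

# When you request text output from the CLI, the CiphertextBlob field
# arrives as base64-encoded text rather than raw binary.
# This placeholder stands in for that base64 string.
ciphertext_blob_b64 = base64.b64encode(b"\x00\x01example-binary-ciphertext").decode()

# Piping through `base64 --decode` in the shell performs exactly this step:
binary_ciphertext = base64.b64decode(ciphertext_blob_b64)

# This is what ends up in not_battleplans.enc: raw binary data.
print(len(binary_ciphertext), "bytes of binary ciphertext")
```

The same decode happens in reverse during decryption, where the Plaintext field is base64-decoded back into the original message.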
So now I'm going to clear the screen to make it a little bit easier to see and this is the encrypted cipher text file that we could transfer across to the robot general.
So now we need to assume in this scenario that we're now the robot general and we're looking to decrypt this file.
Ok so now I'm going to paste the next command for this lesson which is the decrypt command and I'll be stepping through line by line just explaining exactly what each line accomplishes.
So this is the command that you use to decrypt the cipher text and give us the original plain text battle plans.
So first this top line should be logical we're running the AWS command line tools with the KMS module and the decrypt command.
We're passing in some cipher text so we use the command line option double hyphen cipher text blob and instead of pasting this on the command line we're giving it this file so not_battleplans.enc.
We're again asking for the output to be in text.
This will output some text with a number of different fields; we're using the double hyphen query option to select the plaintext field, and again the output will be base64 encoded, so we're using the base64 utility with the double hyphen decode option to decode that back into its original form and store it in a file called decryptedplans.txt.
So let's go ahead and run this so click paste and then press enter to run this command.
This will decrypt the ciphertext and it will output decryptedplans.txt.
And if we cat that document we'll see the original message.
Find all the doggos, distract them with the yums and that's just been a really simple demonstration of using the KMS encrypt command and the KMS decrypt command.
A couple of things I wanted to draw your attention to throughout the process.
With the encrypt command we needed to pass in the key to use as well as the plain text and we got out the cipher text.
With the decrypt command we don't need to specify the key; we only give the ciphertext, and assuming we have permissions on the KMS key so that we can use it to perform decrypt operations, then we'll get the decrypted plain text, and that's what's happened here.
Now just to clean up from this lesson, if you go back to the AWS console make sure you're in US East 1, so Northern Virginia, and go back to the Key Management Service console, and we're just going to delete the KMS key that we created earlier in this lesson.
So click on customer managed keys, select the KMS key that we created earlier, in my case catrobot, then click on key actions and schedule key deletion.
You need to enter a waiting period of between 7 and 30 days; since we want this cleared up as fast as possible, enter 7, tick the box to confirm, and then schedule deletion.
And that'll put the key into a pending deletion state and after 7 days it'll be entirely removed.
And at that point we've cleared up all of the assets that we've used in this demo lesson so go ahead and complete the video and when you're ready join me in the next.
Welcome to this video where I'm going to be talking about the key management service known as KMS.
Now this product is used by many other services within AWS when they use encryption, so don't be surprised if you're watching this video in what seems like a pretty random place within the course.
With that being said let's jump in and get started.
Now KMS isn't all that complex as a product.
Once you understand it it's pretty simple, but because of how much it's used by other AWS products and services it's essential that you do understand it for all the AWS exams.
Now KMS is a regional and public service.
Every region is isolated when using KMS.
Think of it as a separate product.
Now KMS is capable of some multi-region features but I'll be covering those in a separate dedicated video.
It's a public service which means it occupies the AWS public zone and can be connected to from anywhere with access to this zone.
Like any other AWS service though you will need permissions to access it.
Now KMS as the name suggests manages keys.
Specifically it lets you create, store and manage cryptographic keys.
These are keys which can be used to convert plain text to ciphertext and vice versa.
Now KMS is capable of handling both symmetric and asymmetric keys.
And at this point you should understand what that means: where symmetric keys are used, and where public and private asymmetric keys are used.
Just know that KMS is capable of operating with all of these different key architectures.
Now KMS is also capable of performing cryptographic operations, which includes, but is not limited to, encryption and decryption operations.
And I'll be talking more about this later in this video.
Now one of the foundational things to understand about KMS is that cryptographic keys never leave the product.
KMS can create keys, keys can be imported, it manages keys, it can use these keys to perform operations but the keys themselves are locked inside the product.
Its primary function is to ensure the keys never leave and are held securely within the service.
Now KMS also provides a FIPS 140-2 compliant service.
This is a US security standard, and you should try to memorize this.
It's FIPS 140-2 level 2 to be specific.
Again the level 2 part matters.
It's often a key point of distinction between using KMS versus using something like cloud HSM which I'll be covering in detail elsewhere.
Now some of KMS's features have achieved level 3 compliance but overall it's level 2.
Again please do your best to remember this.
It will come in handy for most of the AWS exams.
Now before we continue, since this is an introduction video, unless I state otherwise, assume that I'm talking about symmetric keys when I mention keys within this video.
I'm going to be covering the advanced functionality of KMS in other videos, including asymmetric keys, but for this one I'm mainly focusing on its architecture and high level functions.
So just assume I'm talking about symmetric keys from now on unless I indicate otherwise.
Now the main type of key that KMS manages are known logically enough as KMS keys.
You might see these referred to as CMKs or Customer Master Keys but that naming scheme has been superseded so they're now called KMS keys.
These KMS keys are used by KMS within cryptographic operations.
You can use them, applications can use them and other AWS services can use them.
Now they're logical: think of them as a container for the actual physical key material, and this is the data that really makes up the key.
So a KMS key contains a key ID, a creation date, a key policy which is a resource policy, a description and a state of the key.
Every KMS key is backed by physical key material, it's this data which is held by KMS and it's this material which is actually used to encrypt and decrypt things that you give to KMS.
The physical key material can be generated by KMS or imported into KMS and this material contained inside a KMS key can be used to directly encrypt or decrypt data up to 4KB in size.
Now this might sound like a pretty serious limitation, but KMS keys are generally only used to work on small bits of data or to generate other keys, and I'll be covering this at a high level later in this video.
Let's look visually at how KMS works so far.
So this is KMS and this is Ashley.
Ashley's first interaction with KMS after picking a region is to create a KMS key.
A KMS key is created with physical backing material and this key is stored within KMS in an encrypted form.
Nothing in KMS is ever stored in plain text form persistently.
It might exist in memory in plain text form but on disk it's encrypted.
Now Ashley's next interaction with KMS might be to request that some data is encrypted.
To do this she makes an encrypt call to KMS specifying the key to use and providing some data to encrypt.
KMS accepts the data and assuming Ashley has permissions to use the key, it decrypts the KMS key then uses this key to encrypt the plain text data that Ashley supplied and then returns that data to Ashley.
Notice how KMS is performing the cryptographic operations.
Ashley is just providing data to KMS together with instructions and it's handling the operations internally.
Logically at some point in the future Ashley will want to decrypt this same data so she calls a decrypt operation and she includes the data she wants to decrypt along with this operation.
KMS doesn't need to be told which KMS key to use for the decrypt operation.
That information is encoded into the ciphertext of the data which Ashley wants to decrypt.
The permissions to decrypt are separate from the permissions to encrypt, and are also separate from permissions which allow the generation of keys, but assuming Ashley has the required permissions for a decrypt operation using this specific KMS key, KMS decrypts the key and uses this to decrypt the ciphertext provided by Ashley and returns this data back to Ashley in plain text form.
Now again I want to stress at no point during this entire operation do the KMS keys leave the KMS product.
At no point are the keys stored on the disk in plain text form and at each step Ashley needs permissions to perform the operations and each operation is different.
KMS is very granular with permissions.
You need individual permissions for various operations including encrypt and decrypt and you need permissions on given KMS keys in order to use those keys.
Ashley could have permissions to generate keys and to use keys to encrypt and decrypt or she could have just one of those permissions.
She might have permissions to encrypt data but not decrypt it, or she might have permissions to manage KMS, creating keys and setting permissions, but not permissions to use keys to encrypt or decrypt data, and this process is called role separation.
Now I mentioned at the start of this lesson that a KMS key can only operate cryptographically on data which is a maximum of 4KB in size.
Now that's true, so let's look at how KMS gets around this.
Data encryption keys, also known as DEKs, are another type of key which KMS can generate.
They're generated using a KMS key, using the generate data key operation.
This generates a data encryption key which can be used to encrypt and decrypt data which is more than 4kb in size.
Data encryption keys are linked to the KMS key which created them, so KMS can tell that a specific data encryption key was created using a specific KMS key.
But, and this is pretty much the most important thing about KMS and data encryption keys, KMS doesn't store the data encryption key in any way.
It provides it to you or the service using KMS and then it discards it.
The reason it discards it is that KMS doesn't actually do the encryption or decryption of data using data encryption keys.
You do or the service using KMS performs those operations.
So let's look at how this works.
The data encryption key is generated, KMS provides you with two versions of that data encryption key.
First, a plain text version of that key, something which can be used immediately to perform cryptographic operations.
And second, a ciphertext or encrypted version of that same data encryption key.
The data encryption key is encrypted using the KMS key that generated it.
And in future, this encrypted data encryption key can be given back to KMS for it to be decrypted.
Now the architecture is that you would generate a data encryption key immediately before you wanted to encrypt something.
You would encrypt the data using the plain text version of the data encryption key and then once finished with that process, discard the plain text version of that data encryption key.
That would leave you with the encrypted data and you would then store the encrypted data encryption key along with that encrypted data.
Now a few key things about this architecture.
KMS doesn't actually do the encryption or decryption on data larger than 4KB using data encryption keys.
You do or the service using KMS does.
KMS doesn't track the usage of data encryption keys.
That's also you or the service using KMS.
You could use the same data encryption key to encrypt 100 or a million files or you could request a new data encryption key for each of those million files.
How you decide to do this is based on your exact requirements and of course AWS services will make this choice based on their requirements.
By storing the encrypted data encryption key on disk with the encrypted data, you always have the correct data encryption key to use.
But both the DEK and the data are encrypted, so administration is easy and security is maintained.
When you need to decrypt that data, the process is simple.
You pass the encrypted data encryption key back to KMS and ask for it to decrypt it using the same KMS key used to generate it.
Then you use the decrypted data encryption key that KMS gives you back and decrypt the data with it and then you discard the decrypted data encryption key.
Services such as S3 when using KMS generate a data encryption key for every single object.
They encrypt the object and then discard the plain text version.
As we move through the course I'll be talking in detail about how those services integrate with KMS for encryption services.
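The generate, encrypt locally, discard, store, and later unwrap-and-decrypt flow described above can be sketched as a toy example. The XOR keystream cipher here is purely illustrative (KMS actually uses AES-256), and the local "wrap" step stands in for KMS encrypting the DEK under the KMS key, which in reality only happens inside the service:

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR data with a SHA-256-derived keystream.
    Illustrative only -- real KMS uses AES-256, not this construction."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# 1. "GenerateDataKey": receive a plaintext DEK plus a wrapped (encrypted) copy.
kms_key = os.urandom(32)                         # stands in for the KMS key material
dek_plain = os.urandom(32)                       # plaintext data encryption key
dek_wrapped = keystream_xor(kms_key, dek_plain)  # KMS would perform this wrap for you

# 2. Encrypt the data locally with the plaintext DEK, then discard that DEK.
data = b"find all the doggos, distract them with the yums"
ciphertext = keystream_xor(dek_plain, data)
stored = (dek_wrapped, ciphertext)               # wrapped DEK is stored with the data
del dek_plain                                    # the plaintext DEK is never persisted

# 3. Later: ask "KMS" to unwrap the DEK, then decrypt the data locally.
dek_recovered = keystream_xor(kms_key, stored[0])
plaintext = keystream_xor(dek_recovered, stored[1])
print(plaintext.decode())
```

Notice the division of labour the sketch preserves: only the wrap and unwrap of the DEK involve the KMS key, while the bulk encryption and decryption of the data happen outside KMS, using the plaintext DEK.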
Before we finish up with this lesson there are a few key concepts which I want to discuss.
The one thing which is really important to grasp with KMS is that by default KMS keys are stored within the KMS service in that specific region.
They never leave the region and they never leave the KMS service.
You cannot extract a KMS key.
Any interactions with a KMS key are done using the APIs available from KMS.
Now this is the default but KMS does support multi-region keys where keys are replicated to other AWS regions.
But I'll be covering that in a dedicated video if required for the course that you're studying.
In KMS as a product keys are either AWS owned or customer owned.
We're going to be dealing mainly with customer owned keys.
AWS owned keys are a collection of KMS keys that an AWS service owns and manages for use in multiple AWS accounts.
They operate in the background and you largely don't need to worry about them.
If applicable for the course that you're studying I'll have a separate video on this.
If not don't worry it's unimportant.
Now in dealing with customer owned keys there are two types.
AWS managed and customer managed and I'll be covering the specifics of these in a dedicated video.
AWS managed keys are created automatically by AWS when you use a service such as S3 which integrates with KMS.
Customer managed keys are created explicitly by the customer to use directly in an application or within an AWS service.
Customer managed keys are more configurable.
For example you can edit the key policy which means you could allow cross account access so that other AWS accounts can use your keys.
AWS managed keys can't really be customized in this way.
Both types of keys support rotation.
Rotation is where the physical backing material, the data used to actually perform cryptographic operations, is changed.
With AWS managed keys this can't be disabled.
It's set to rotate approximately once per year.
With customer managed keys rotation is optional.
It's enabled by default and happens approximately once every year.
A KMS key contains the current backing key, the physical key material, as well as all previous backing keys created by rotation.
It means that as a key is rotated, data encrypted with all versions can still be decrypted.
Now you can create aliases which are shortcuts to keys.
So you might have an alias called myapp1 which points at a specific KMS key.
That way KMS keys can be changed if needed.
But be aware the aliases are also per region.
You can create myapp1 in all regions, but in each region it will point at a different key.
Neither aliases nor keys are global by default.
Okay to finish up this KMS 101 lesson I want to talk at high level about permissions on KMS keys.
Permissions on keys are controlled in a few ways.
KMS is slightly different than other AWS services that you come across in terms of how keys are handled.
Many services will always trust the account that they're contained in.
Meaning if you grant access via an identity policy that access will be allowed unless there's an explicit deny.
KMS is different.
With KMS, this account trust is either explicitly added to the key policy, or it doesn't exist.
The starting point for KMS security is the key policy.
This key policy is a type of resource policy like a bucket policy only on a key.
Every KMS key has one, and for customer managed keys you can change it.
To reiterate this the reason the key policy is so important is that unlike other AWS services KMS has to explicitly be told that keys trust the AWS account that they're contained within.
And this is what a key policy might look like.
It means that the key will allow the account 111122223333 to manage it.
This trust isn't automatic so be careful when updating it.
You always need this type of key policy in place if you want to be able to grant access to a key using identity policies.
Without it, the key doesn't trust the AWS account, and this means that you would need to explicitly add any permissions on the key policy itself.
Generally KMS is managed using this combination of key policies trusting the account and then using identity policies to let IAM users interact with the key.
But in high security environments you might want to remove this account trust and insist on any key permissions being added inside the key policy.
And a typical IAM permissions policy for KMS might look something like this which gives the holder of the policy the rights to use this key to encrypt or decrypt data.
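As a sketch, an identity policy granting those encrypt and decrypt rights might look like the following; the account ID and key ID are placeholders, not real values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt"
      ],
      "Resource": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
    }
  ]
}
```

Remember this identity policy only works because the key policy trusts the account; without that trust, these permissions would need to be granted in the key policy itself.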
Inside KMS permissions are very granular and can be split based on function.
You can be granted rights to create keys and manage keys, but not permissions to perform cryptographic operations like encrypt or decrypt.
This way your key administrators aren't given rights to access the data encrypted by KMS, which is a common requirement of many higher security environments.
Now there's another way to interact with KMS using grants but I'll be covering this elsewhere in another video if needed.
So that's everything I wanted to cover in this KMS introduction video.
This video is going to form the foundation for others in this series; depending on the topic that you're studying there might be no more videos or many more videos.
Don't be worried either way.
Now at this point that's everything I wanted to talk about though about KMS at this introductory level.
Go ahead and complete the video and when you're ready I look forward to you joining me in the next.
Welcome back.
In this demo lesson, I just want to give you a really quick overview of the S3 performance improvement feature that we talked about in the previous lesson.
Now, I want to make this really brief because it's not something that I can really demonstrate effectively because I have a relatively good internet connection.
So I'm not going to be able to achieve the differences in performance that will effectively demonstrate this feature.
But we can see how to enable it and then use an AWS tool to see exactly what benefits we can expect.
So to enable it, we can move to the S3 console and just create a bucket.
Now, you won't be able to use bucket names with periods in them with the transfer acceleration feature.
So this would not be a valid bucket name to use.
It cannot have periods in the name.
So I'm going to go ahead and create a test bucket.
I'm going to call it testac1279, which is something I don't believe I've used before.
Now, you should go ahead and create a bucket yourself just to see where to enable this, but you can just follow along with what I'm doing.
This won't be required at any point further down the line in the course.
So if you just want to watch me in this demo, that's completely okay.
So I'll create this bucket.
I'll accept the rest of the defaults.
Then I'll select the bucket and go to properties and it's right down at the bottom where you enable transfer acceleration.
So this is an on/off feature.
So if I select it, I can enable transfer acceleration or disable it.
And when I enable it, it provides a new endpoint for this bucket to utilize the transfer acceleration feature.
So that's important.
You need to use this specific endpoint in order to get the benefit of accelerated transfers.
And this will resolve to an edge location that is highly performant wherever you're physically located in the world.
So it's important to understand that you need to use this endpoint.
So enable this on the bucket and click on save and that's all you need to do.
That enables the feature.
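The accelerated endpoint the console displays follows a predictable pattern: the bucket name in front of the s3-accelerate domain. A quick sketch, using the hypothetical bucket name from this demo:

```python
# Transfer Acceleration endpoints take the form
#   <bucket>.s3-accelerate.amazonaws.com
# which is also why bucket names containing periods can't be used:
# extra dots would break TLS certificate matching on this wildcard domain.
bucket = "testac1279"  # the hypothetical bucket from this demo
accelerated_endpoint = f"{bucket}.s3-accelerate.amazonaws.com"
print(accelerated_endpoint)
```

Requests sent to this endpoint resolve to a nearby edge location instead of going directly to the bucket's region.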
Now, as I mentioned at the start of this lesson, this is not the easiest feature to demonstrate, especially if you have a good internet connection.
This is much better as a demonstration if I was on a suboptimal connection, which I'm not right now.
But what you can do is click on the URL which is attached to this lesson, which opens a comparison tool.
So what this is going to do now is it's going to compare the direct upload speeds that I can achieve from my local machine to this specific S3 bucket.
And it's going to do so with and without transfer acceleration.
So it will be giving you a comparison of exactly what speed you can expect when uploading to a specific AWS region, such as US East 1, and then how that compares to uploading to that same region using accelerated transfer.
So we can see already this is demonstrating a 144% faster upload speed from my location to Northern Virginia by using accelerated transfer.
Now, I'm going to allow this to continue running while I continue talking, because what you'll see if you run this tool is a different set of results than what I'm seeing.
You'll see different benefits in each region from using accelerated transfer, depending on your home location.
So if you're located in San Francisco, for example, you probably won't see a great deal of difference between directly uploading and using accelerated transfer.
But for more distant regions, you'll see a much more pronounced improvement.
So if I just move down on this page and make these different regions a little bit easier to see, you'll note, for example, I achieve a much larger benefit in Oregon than I do in San Francisco.
And my result for Dublin, which is even further away from my current location, shows a yet higher benefit from using transfer acceleration.
So the less optimal the network route is between your location and the region being tested, the greater the benefit you'll achieve by using transfer acceleration.
Now, there are quite a lot of AWS regions, so I'm not going to let this test finish, but if you are interested in S3 performance and this feature specifically, I do recommend you run this tool from your own internet connection and allow the process to finish, because it will give you a really good indication of the performance you can expect to each AWS region, and how S3 transfer acceleration will improve that performance.
Now, that is everything I wanted to cover in this lesson.
I know it's been a brief demo lesson, and it isn't really a demo lesson where you're doing anything practically, but I did just want to supplement the previous lesson by giving you a visual example of how this feature can improve performance to S3.
And I do hope you'll try this tool from your internet connection so you can see the benefit it provides from your location.
With that being said, though, that is everything that I wanted to cover, so go ahead, complete this video, and when you're ready, I'll see you in the next.
learn.cantrill.io
Welcome back, and this time we're going to cover a few performance optimization aspects of S3.
If you recall from earlier in the course, this is the Animals For Life scenario.
We have a head office in Brisbane, remote offices which consume services from the Brisbane office, and remote workers using potentially slower or less reliable services to access and upload data to and from the head office.
So keep the scenario in mind as we step through some of the features that S3 offers to improve performance.
It's not always about performance.
It's often about performance and reliability combined.
And this is especially relevant when we're talking about a distributed organization such as Animals For Life.
So let's go ahead and review the features that S3 offers, which help us in this regard.
Now, understanding the performance characteristics of S3 is essential as a solutions architect.
We know from the Animals For Life scenario that remote workers need to upload large data sets and do so frequently.
And we know that they're often on unreliable internet connections.
Now, this is a concern because of the default way that S3 uploads occur.
By default, when you upload an object to S3, it's uploaded as a single blob of data in a single stream.
A file becomes an object, and it's uploaded using the put object API call and placed in a bucket.
And this all happens as a single stream.
Now, this method has its problems.
While it is simple, it means that if a stream fails, the whole upload fails, and the only recovery from it is a full restart of the entire upload.
If the upload fails at 4.5 GB of a 5 GB upload, that's 4.5 GB of data wasted and probably a significant amount of time.
Remember, the data sets are being uploaded by remote workers over slow and potentially unreliable internet links.
And this data is critical to the running of the organization.
Any delay can be costly and potentially risky to animal welfare.
When using this single put method, the speed and reliability of the upload will always be limited because of this single stream of data.
If you've ever downloaded anything online, it's often already using multiple streams behind the scenes.
There are many network-related reasons why even on a fast connection, one stream of data might not be optimal, especially if the transfer is occurring over long distances.
In this type of situation, single stream transfers can often provide much slower speeds than both ends of that transfer are capable of.
If I transfer you data with a single stream, it will often run much slower than my connection can do and your connection can do.
Remember, when transferring data between two points, you'll only ever achieve the speed of the slower of the two endpoints, and with a single stream transfer you often don't even achieve that.
Data transfer protocols such as BitTorrent have been developed in part to allow speedy distributed transfer of data.
And these have been designed to address this very concern.
Using data transfer with only a single stream is just a bad idea.
Now, there is a limit within AWS: if you utilize a single put upload, then you're limited to a maximum of 5 GB of data.
But I would never trust a single put upload with anywhere near that amount of data.
It's simply unreliable.
But there is a solution to this.
And that solution is multi-part upload.
Multi-part upload improves the speed and reliability of uploads to S3.
And it does this by breaking data up into individual parts.
So we start with the original blob of data that we want to upload to S3, and we break this blob up into individual parts.
Now, there is a minimum.
The minimum size for using multi-part upload is 100 MB.
So the minimum size for this original blob of data is 100 megabytes.
You can't use multi-part upload if you're uploading data smaller than this.
Now, my recommendation is that you start using this feature the second that you can.
Most AWS tooling will automatically use it as soon as it becomes available, which is at this 100 MB lower threshold.
There are almost no situations where a single put upload is worth it when you get above 100 MB.
The benefits of multi-part upload are just too extensive and valuable.
Now, an upload can be split into a maximum of 10,000 parts.
And each part can range in size between 5 MB and 5 GB.
The last part can be smaller than 5 MB, as it simply holds whatever data is left over.
Now, the reason why multi-part upload is so effective is that each individual part is treated as its own isolated upload.
So each individual part can fail in isolation and be restarted in isolation, rather than needing to restart the whole thing.
So this means that the risk of uploading large amounts of data to S3 is significantly reduced.
But not only that, it means that because we're uploading lots of different individual bits of data, it improves the transfer rate.
The transfer rate of the whole upload is the sum of all of the individual parts.
So you get much better transfer rates by splitting this original blob of data into smaller individual parts and then uploading them in parallel.
It means that if you do have any single stream limitations on your ISP or any network inefficiencies by uploading multiple different streams of data, then you more effectively use the internet bandwidth between you and the S3 endpoint.
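The limits above can be sketched as a small part-planning calculation. This is a toy illustration of the arithmetic, not the real S3 API; the 8 MiB default part size is an assumption for the example, and the figures match the lesson: parts of 5 MB to 5 GB, up to 10,000 parts, with tooling typically switching to multipart at around 100 MB.

```python
# Sizes in bytes; limits as described in the lesson.
MIB = 1024 * 1024
MIN_PART = 5 * MIB
MAX_PART = 5 * 1024 * MIB          # 5 GiB
MAX_PARTS = 10_000

def plan_parts(object_size: int, part_size: int = 8 * MIB):
    """Return (number_of_parts, last_part_size) for a multipart upload."""
    if not MIN_PART <= part_size <= MAX_PART:
        raise ValueError("part size must be between 5 MiB and 5 GiB")
    parts, last = divmod(object_size, part_size)
    if last:
        parts += 1          # final part holds the remainder (may be < 5 MiB)
    else:
        last = part_size
    if parts > MAX_PARTS:
        raise ValueError("too many parts - increase the part size")
    return parts, last

# A 100 MiB object split into 8 MiB parts: 13 parts, last one 4 MiB.
print(plan_parts(100 * MIB))   # → (13, 4194304)
```

Each of those parts would then be uploaded as its own stream, retried independently on failure, which is exactly where the speed and reliability benefit comes from.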
Now, next, I want to talk about a feature of S3 called Accelerated Transfer.
To understand Accelerated Transfer, it's first required to understand how global transfer works to S3 buckets.
Let's use an example.
Let's say that the Animals for Life Organization has a European campaign which is running from the London office.
For this campaign, there'll be data from staff in the field.
And let's say that we have three teams dedicated to this campaign, one in Australia, one in South America, and one on the West Coast of the US.
Now, the S3 bucket, which is being used by the campaign staff, has been created in the London region.
So this is how this architecture looks.
We've got three geographically spread teams who are going to be uploading data to an S3 bucket that's located within the UK.
Now, it might feel like when you upload data to S3, your data would go in a relatively straight line, the most efficient line to its destination.
Now, this is not how networking works.
How networking works is that it is possible for the data to take a relatively indirect path.
And the data can often slow down as it moves from hop to hop on the way to its destination.
In some cases, the data might not be routed the way you expect.
I've had data, for instance, routed from Australia to the UK which took the long way around the world.
It's often not as efficient as you expect.
Remember, S3 is a public service, and it's also regional.
In the case of the Australian team, their data would have to transit across the public internet all the way from Australia to the UK before it enters the AWS public zone to communicate with S3.
And we have no control over the public internet data path.
Routers and ISPs are picking this path based on what they think is best and potentially commercially viable.
And that doesn't always align with what offers the best performance.
So using the public internet for data transit is never an optimal way to get data from source to destination.
Luckily, as Solutions Architects, we have a solution to this, which is S3 transfer acceleration.
Transfer acceleration uses the network of AWS edge locations, which are located in lots of convenient locations globally.
An S3 bucket needs to be enabled for transfer acceleration.
The default is that it's switched off, and there are some restrictions for enabling it.
The bucket name cannot contain periods, and it needs to be DNS compatible in its naming.
So keep in mind those two restrictions.
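Those two restrictions can be sketched as a quick check. The regex below is an approximation of DNS-compatible S3 bucket naming (lowercase letters, digits, hyphens, 3 to 63 characters), not the authoritative validation, and the bucket names are made up.

```python
import re

def can_enable_acceleration(bucket: str) -> bool:
    """Rough check of the two restrictions: no periods, DNS-compatible name."""
    if "." in bucket:
        return False   # periods break the s3-accelerate hostname's TLS wildcard
    return re.fullmatch(r"[a-z0-9][a-z0-9-]{1,61}[a-z0-9]", bucket) is not None

print(can_enable_acceleration("campaign-uploads"))   # True
print(can_enable_acceleration("animals.for.life"))   # False - contains periods
```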
But assuming that's the case, once enabled, data being uploaded by our field workers, instead of going directly to the S3 bucket, immediately enters the closest, best performing AWS edge location.
Now this part does occur over the public internet, but geographically, it's really close, and it transits through fewer normal networks, so it performs really well.
At this point, the edge locations transit the data being uploaded over the AWS global network, a network which is directly under the control of AWS, and this tends to be a direct link between these edge locations and other areas of the AWS global network, in this case, the S3 bucket.
Remember, the internet is a global, multi-purpose network, so it has to have lots of connections to other networks, and many stops along the way, where traffic is routed from network to network, and this just slows performance down.
Think of the internet as the normal public transit network, when you need to transit from bus to train to bus to bus, to get to a far-flung destination.
The normal transit network, whilst it's not the highest performance, is incredibly flexible, because it allows you to get from almost anywhere to almost anywhere.
The internet is very much like that.
It's not designed primarily for speed.
It's designed for flexibility and resilience.
The AWS network, though, is purpose-built to link regions to other regions in the AWS network, and so this is much more like an express train, stopping at only the source and destination.
It's much faster and with lower consistent latency.
Now, the results of this, in this context, are more reliable and higher performing transfers between our field workers and the S3 bucket.
The improvements can vary, but the benefits achieved by using transfer acceleration improve the larger the distance between the upload location and the location of the S3 bucket.
So in this particular case, transferring data from Australia to a bucket located in Europe, you'll probably see some significant gains by using transfer acceleration.
The worse the initial connection, the better the benefit by using transfer acceleration.
Okay, so now it's time for a demonstration.
In the next lesson, I just want to take a few moments to show you an example of how this works.
I want to show you how to enable the feature on an S3 bucket, and then demonstrate some of the performance benefits that you can expect by using an AWS-provided tool.
So go ahead, finish this video, and when you're ready, you can join me in the demo lesson.
learn.cantrill.io
Welcome back and in this lesson I'm going to cover object versioning and MFA delete, two essential features of S3.
These are two things I can almost guarantee will feature on the exam, and almost every major project I've been involved in has needed solid knowledge of both.
So let's jump in and get started.
Object versioning is something which is controlled at a bucket level.
It starts off in a disabled state.
You can optionally enable versioning on a disabled bucket, but once enabled you cannot disable it again.
Just to be super clear, you can never switch bucket versioning back to disabled once it's been enabled.
What you can do though is suspend it and if desired a suspended bucket can be re-enabled.
It's really important for the exam to remember these state changes.
So make a point of noting them down, and when revising, try to repeat them until they stick.
So a bucket starts off with versioning disabled, versioning can then be enabled, an enabled bucket can be moved to suspended, and a suspended bucket can be moved back to enabled.
But the important one is that enabled bucket can never be switched back to disabled.
That is critical to understand for the exam.
So you can see many trick questions which will test your knowledge on that point.
Without versioning enabled on a bucket, each object is identified solely by the object key, its name, which is unique inside the bucket.
If you modify an object, the original version of that object is replaced.
Versioning lets you store multiple versions of an object within a bucket.
Any operations which would modify an object generate a new version of that object and leave the original one in place.
For example, let's say I have a bucket and inside the bucket is a picture of one of my cats, Winky.
So the object is called Winky.JPEG.
It's identified in the bucket by the key, essentially its name, and the key is unique.
If I modify the Winky.JPEG object or delete it, those changes impact this object.
Now there's an attribute of an object which I haven't introduced yet and that's the ID of the object.
When versioning on a bucket is disabled, the IDs of the objects in that bucket are set to null.
That's what versioning being off on a bucket means.
All of the objects have an ID of null.
Now if you upload or put a new object into a bucket with versioning enabled, then S3 allocates an ID to that object.
In this case, 111111.
If any modifications are made to this object, so let's say somebody accidentally overwrites the Winky.JPEG object with a dog picture, but still calls it Winky.JPEG.
S3 doesn't remove the original object.
It allocates a new ID to the newer version and it retains the old version.
The newest version of any object in a version-enabled bucket is known as the current version of that object.
So in this case, the object called Winky.JPEG with an ID of 2222222, which is actually a dog picture, that is the current version of this object.
Now if an object is accessed without explicitly indicating to S3 which version is required, then it's always the current version which will be returned.
But you've always got the ability of requesting an object from S3 and providing the ID of a specific version to get that particular version back rather than the current version.
So versions can be individually accessed by specifying the ID, and if you don't specify the ID, then it's assumed that you want to interact with the current version, the most recent version.
Now versioning also impacts deletions.
Let's say we've got these two different versions of Winky.JPEG stored in a version-enabled bucket.
If we indicate to S3 that we want to delete the object and we don't give any specific version ID, then what S3 will do is add a new special version of that object known as a delete marker.
Now the delete marker is essentially just a new version of that object, so S3 doesn't actually delete anything, but the delete marker makes the object look deleted.
In reality though, it's just hidden.
The delete marker is a special version of an object which hides all previous versions of that object.
But you can delete the delete marker which essentially undeletes the object, returning the current version to being active again, and all the previous versions of the object still exist, accessible using their unique version ID.
Now even with versioning enabled, you can actually fully delete a version of an object, and that actually really deletes it.
To do that, you just need to delete an object and specify the particular version ID that you want to remove.
And if you are deleting a particular version of an object and the version that you're deleting is the most recent version, so the current version, then the next most recent version of that object then becomes the current version.
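The behaviour described above can be modelled with a toy Python class. This only illustrates the semantics (puts create new versions, a plain delete adds a delete marker, deleting a specific version ID is permanent); it is not the real S3 API, and the version IDs and object names are made up.

```python
import itertools

_ids = (f"v{n}" for n in itertools.count(1))   # fake version-ID generator

class VersionedBucket:
    def __init__(self):
        self.versions = {}          # key -> list of (version_id, body)

    def put(self, key, body):
        vid = next(_ids)            # every write gets a fresh version ID
        self.versions.setdefault(key, []).append((vid, body))
        return vid

    def delete(self, key, version_id=None):
        if version_id is None:      # plain delete: add a delete marker on top
            return self.put(key, "DELETE_MARKER")
        # deleting a specific version is permanent
        self.versions[key] = [v for v in self.versions[key] if v[0] != version_id]

    def get(self, key):
        vid, body = self.versions[key][-1]   # current = most recent version
        if body == "DELETE_MARKER":
            raise KeyError(key)              # object appears deleted
        return body

b = VersionedBucket()
b.put("winky.jpeg", "winky-photo")
bad = b.put("winky.jpeg", "truffles-photo")   # accidental overwrite: new version
b.delete("winky.jpeg", version_id=bad)        # permanently remove the bad version
print(b.get("winky.jpeg"))                    # → winky-photo
```

Deleting the newest version promotes the next most recent version to current, which is exactly the recovery path shown in the demo lesson that follows.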
Now some really important points that you need to be aware about object versioning.
I've mentioned this at the start of the lesson, it cannot be switched off, it can only be suspended.
Now why that matters is that when versioning is enabled on a bucket, all the versions of that object stay in that bucket, and so you're consuming space for all of the different versions of an object.
If you have one single object that's 5 GB in size, and you have five versions of that object, then that's 5 times 5 GB of space that you're consuming for that one object and its multiple versions.
And logically, you'll be billed for all of those versions of all of those objects inside an S3 bucket, and the only way that you can zero those costs out is to delete the bucket and then re-upload all those objects to a bucket without versioning enabled.
That's why it's important to understand that you can't disable versioning.
You can only suspend it, and when you suspend it, it doesn't actually remove any of those old versions, so you're still billed for them.
Now there's one other relevant feature of S3 which does make it to the exam all the time, and that's known as MFA delete.
Now MFA delete is something that's enabled within the versioning configuration on a bucket.
And when you enable MFA delete, it means that MFA is required to change bucket versioning state.
So if you move the bucket from enabled to suspended or vice versa, you need MFA to be able to do that, and MFA is also required to delete any versions of an object.
So to fully delete any versions, you need this MFA token.
Now the way that this works is that when you're performing API calls in order to change a bucket to versioning state or delete a particular version of an object, you need to provide the serial number of your MFA token as well as the code that it generates.
You concatenate both of those together, and you pass that along with any API calls which delete versions or change the versioning state of a bucket.
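As a small sketch of that concatenation: the value is the MFA device's serial number, a space, and the current token code. The serial number below is a made-up example ARN, and you should check your SDK's documentation for the exact parameter name that accepts this string.

```python
def mfa_value(serial_number: str, token_code: str) -> str:
    """Build the 'serial number<space>token code' string used with MFA delete."""
    return f"{serial_number} {token_code}"

# Hypothetical MFA device ARN and token code, purely for illustration.
mfa = mfa_value("arn:aws:iam::123456789012:mfa/admin", "123456")
print(mfa)
```

This string would then accompany the version-delete or versioning-state API call, proving the caller holds the MFA device.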
Okay, so that's all of the theory for object versioning inside S3.
And at this point, that's everything I wanted to cover in this lesson.
So go ahead and complete the video, and when you're ready, I look forward to you joining me in the next.
learn.cantrill.io
Welcome back in this demo lesson you're going to gain some practical experience of working with the versioning feature of S3.
So to get started just make sure that you're logged in to the management account of the organization, so the general account, and then make sure that you've got the Northern Virginia region selected, so us-east-1.
Now there is a link attached to this lesson which you need to click on and then extract.
This is going to contain all of the files that you'll be using throughout this demo.
So go ahead and click on that link, extract it, and it should create a folder called S3_Versioning.
Once you've confirmed that you're logged in and have the right region selected, then go ahead and move to the S3 console.
So you can get there either using the recently visited services, or you can type S3 into the Find Services box and click to move to the S3 console.
Now to demonstrate versioning we're going to go ahead and create an S3 bucket, we're going to set it up for static website hosting, enable versioning, and then experiment with some objects and just observe how versioning changes the default behavior inside an S3 bucket.
So go ahead and click on Create bucket.
As long as the bucket name is unique, its specific name isn't important because we won't be using it with Route 53.
So just give the bucket a name and make sure that it's something unique.
So I'm going to use acbucket13337.
You should pick something different than me and different from something that any other student would use.
Once you've selected a unique bucket name, just scroll down and uncheck Block All Public Access.
We're going to be using this as a static website hosting bucket, so this is fine.
And we'll need to acknowledge that we understand the changes that we're making, so check this box, scroll down a little bit more, and then under bucket versioning we're going to click to enable versioning.
Keep scrolling down and at the bottom click on Create bucket.
Next, go inside the bucket, click on Properties, scroll all the way down to the bottom, and we need to enable static website hosting.
So click on Edit, check the box to enable static website hosting.
For hosting type, we'll set it to host a static website, and then for the index document, just type index.html, and then for the error document, type error.html.
Once you've set both of those, you can scroll down to the bottom and click on Save Changes.
Now as you learned in the previous demo lesson, just enabling static website hosting isn't enough to allow access, we need to apply a bucket policy.
So click on the permissions tab, scroll down, and under bucket policy click on Edit.
Now inside the link attached to this lesson, which you should have downloaded and extracted, there should be a file called bucket_policy.json, which is an example bucket policy.
So go ahead and open that file and copy the contents into your clipboard, move back to the console and paste it into the policy box, and we need to replace this example bucket placeholder with the ARN for this bucket.
So copy the bucket ARN into your clipboard by clicking this icon.
Because this ARN references objects in this bucket, and we know this because it's got forward slash star at the end, we need to replace only the first part of this placeholder ARN with the actual bucket ARN from the top.
So select from the A all the way up to the T, so not including the forward slash and the star, and then paste in the bucket ARN that you copied onto your clipboard.
Once you've done that, you can scroll down and then click on Save Changes.
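That manual edit can also be expressed in code. The sketch below substitutes a bucket ARN into a public-read policy template; the template mirrors a typical static-website policy, but the exact statement your course files contain may differ, so treat this as an assumption.

```python
import json

# Template with a placeholder resource ARN (statement fields are typical
# for public static-website read access, assumed here for illustration).
TEMPLATE = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::examplebucket/*",
    }],
}

def policy_for(bucket_name: str) -> str:
    policy = json.loads(json.dumps(TEMPLATE))   # cheap deep copy
    # Keep the trailing /* so the policy applies to objects in the bucket.
    policy["Statement"][0]["Resource"] = f"arn:aws:s3:::{bucket_name}/*"
    return json.dumps(policy, indent=2)

print(policy_for("acbucket13337"))
```

This is the programmatic equivalent of replacing everything before the `/*` with your bucket's real ARN, as described above.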
Next, click on the objects tab, and we're going to upload some of the files that you downloaded from the link attached to this lesson.
So click on Upload, and first we're going to add the files.
So click on Add Files, then you'll need to go to the location where you downloaded and extracted the file that's attached to this lesson.
And once you're there, go into the folder called S3_Versioning, and you'll see a folder called Website.
Open that folder, select index.html and click on Open, and then click on Add Folder, and select the IMG folder that's also in that same location.
So select that folder and then click on Upload.
So this is going to upload an index.html object, and it's going to upload a folder called IMG which contains winky.jpeg.
Once you've done that, scroll down to the bottom and just click on Upload.
Now once the upload's completed, you can go ahead and click on Close, and what you'll see in the Objects dialog inside the bucket is index.html and then a folder called IMG.
And as we know by now, S3 doesn't actually have folders it uses prefixes, but if we go inside there, you'll see a single object called winky.jpeg.
Now go back to the bucket, and what we're going to do is click on Properties, scroll down to the bottom, and then click on this icon to open our bucket in a new browser tab.
All being well, you should see AnimalsForLife.org, Animal of the Week, and a picture of my one-eyed cat called winky.
So this is using the same architecture as the previous demo lesson where you experienced static website hosting.
What we're going to do now though is experiment with versions.
So go back to the main S3 console, scroll to the top, and click on Objects.
So because we've got versioning enabled on this bucket, as I talked about in the previous theory lesson, it means that every time you upload an object to this S3 bucket, it's assigned a version ID.
And if you upload an object with the same name, then instead of overwriting that object, it just creates a new version of that object.
Now with versioning enabled and using the default settings, we don't see all the individual versions, but we can elect to see them by toggling this Show Versions toggle.
So go ahead and do that.
Now you'll see that every object inside the S3 bucket, you'll see a particular version ID, and this is a unique code which represents this particular version of this particular object.
So if we go inside the IMG folder, you'll see that we have the same for winky.jpeg.
Toggle Show Versions to Disable, and you'll see that that version ID disappears.
What I want you to do now is to click on the Upload button inside this IMG folder.
So click on Upload, and then click on Add Files.
Now inside this Lessons folder, so S3 versioning, at the top level you've got a number of folders.
You have Website, which is what you uploaded to this S3 bucket, and this Image folder contains winky.jpeg.
So this is a particular file, winky.jpeg, that contains the picture of Winky, my one-eyed cat.
Now if you expand version 1 and version 2, you might be able to tell that version 1 is the same one-eyed cat, and we can expand that and see that it is indeed Winky.
Inside version 2 we have an object with the same name, but if we expand this, this is not Winky, this is a picture of truffles.
So let's say that an administrator of this bucket makes a mistake and uploads this second version of winky.jpeg, which is not actually Winky, it's actually truffles the cat.
So let's do this: we select winky.jpeg from the version 2 folder, and we click on Open.
Once we've selected that for upload, we scroll all the way down to the bottom and click on Upload.
That might take a few seconds to complete the upload because these are relatively large image files, but once it's uploaded you can click on Close.
So now we're still inside this image folder, and if we refresh, all we can see is one object, winky.jpeg.
So it looks, with this default configuration of the user interface, like we've overwritten a previous object with this new object.
And if we go back to the tab which has the static website open and hit refresh, you'll see that this image has indeed been replaced by the truffles image.
So even though it's called winky.jpeg, this is clearly truffles.
Now if we go back to the S3 console, and now if we enable the versions toggle, now we can see that we've got two different versions of this same object.
We've got the original version at the bottom and a new version at the top.
And note how both of these have different version IDs.
Now what S3 does is it always picks the latest version whenever you use any operations which simply request that one object.
So if we just request the object like we're doing with the static website hosting, then it will always pick the current or the latest version of this object.
But we do still have access to the older versions because we have versioning enabled on this bucket.
Nothing is ever truly deleted as long as we're operating with objects.
So let's experiment with exactly what functionality this gives us.
Go ahead and toggle show versions.
Once you've done that, select the winky.jpeg object and then click delete.
You'll need to type or copy and paste delete into this delete objects box and then click on delete.
Before we do that, note what it says at the top.
Deleting the specified objects adds delete markers to them.
If you need to undo the delete action, you can delete the delete markers.
So let's explore what this means.
Go ahead and click on delete objects.
And once it's completed, click on close.
Now how this looks at the moment, we're still in the image folder.
And because we've got show version set to off, it looks like we deleted the object.
But this is not what's occurred because we've got versioning enabled.
What's actually occurred is this is added a new version of this object.
But instead of an actual new version of the object, it's simply added a delete marker as that new version.
So if we toggle show versions back on, now what we see are the previous versions of winky.jpeg.
So the original version at the bottom and the one that we replaced it with in the middle.
And then at the top we have this delete marker.
Now the delete marker is the thing which makes it look to be deleted in the console UI when we have show version set to off.
So this is how S3 handles deletions when versioning is enabled.
If you're interacting with an object and you delete that object, it doesn't actually delete the object.
It simply adds a delete marker as the most recent version of that object.
Now if we just select that delete marker and then click on delete, that has the effect of undeleting the object.
Now it's important to highlight that because we're dealing with object versions, anything that we do is permanent.
If you're operating with an object and you have versioning enabled on a bucket, if you overwrite it or delete it, all it's going to do is either add a new version or add a delete marker.
When you're operating with versions, everything is permanent.
So in this case we're going to be permanently deleting the delete marker.
So you need to confirm that by the typing or copying and pasting permanently delete into this box and click on delete objects.
What this is going to do is delete the delete marker.
So if we click on close, now we're left with these two versions of winky.jpeg, so we've deleted the delete marker.
If we toggle show versions to off, we can see that we now have our object back in the bucket.
If we go back to static website hosting and refresh, we can see though that it's still truffles.
So this is a mistake.
It's not actually winky in this particular image.
So what we can do is go back to the S3 console, we can enable show versions.
We know that the most recent version is actually truffles rather than winky.
So what we can do is select this incorrect version, so the most recent version and select delete.
Now again, we're working with an object version.
So this is permanent.
You need to make sure that this is what you intend.
In our case it is.
So you need to either type or copy and paste permanently delete into the box and click on delete objects.
Now this is going to delete the most recent version of this object.
What happens when you do that is it makes the next most recent version of that object the current or latest version.
So now this is the original version of winky.jpeg, the one that we first uploaded to this bucket.
So this is now the only version of this object.
If we go back to the static website hosting tab and hit refresh, this time it loads the correct version of this image.
So this is actually winky my one-eyed cat.
So this is how you can interact with versioning in an S3 bucket.
Whenever it's enabled, it means that whenever you upload an object to the same name instead of overwriting, it simply creates a new version.
Whenever you delete an object, it simply adds a delete marker.
When you're operating with objects, it's always creating new versions or adding delete markers.
But when you're working with particular versions rather than objects, any operations are permanent.
So you can actually delete specific versions of an object permanently and you can delete delete markers to undelete that object.
Now it's not possible to turn off versioning on a bucket.
Once it's enabled on that bucket, you don't have the ability to disable it.
You only have the ability to suspend it.
Now when you suspend it, it stops new versions being created, but it does nothing about the existing versions.
The only way to remove the additional costs for a version-enabled bucket is either to delete the bucket and then reload the objects to a new bucket, or go through the existing bucket and then manually purge any specific versions of objects which aren't required.
So you need to be careful when you're enabling versioning on a bucket because it can cause additional costs.
If you have a bucket where you're uploading objects over and over again, especially if they're large objects, then with versioning enabled you can incur significantly higher costs than with a bucket which doesn't have versioning enabled.
So that's something you need to keep in mind.
If you enable versioning, you need to manage those versions of those objects inside the bucket.
With that being said, let's tidy up.
So let's go back to the main S3 console, select the bucket, click on Empty, copy and paste or type "Permanently Delete" and click on Empty.
When it's finished, click on Exit, and with the bucket still selected, click on Delete.
Copy and paste or type the name of the bucket and confirm it with the delete bucket.
That returns the account into the same state as it was at the start of this demo lesson.
Now at this point, that's everything that I want you to do in this demo lesson.
You've gained some practical exposure with how to deal with object versions inside an S3 bucket.
At this point, go ahead and complete this video, and when you're ready, I look forward to you joining me in the next lesson.
learn.cantrill.io
Welcome back.
And in this demo lesson, you're going to get some experience using the S3 static website hosting feature, which I talked about in the previous lesson.
Now, to get started, just make sure that you're logged in to the management account of the organization, and that you're using the IAM Admin user.
So this just makes sure that you have admin permissions over the general or management account of the organization.
Also, make sure that you have the Northern Virginia region selected, which is US-EAST-1.
Normally with S3, when you're interacting with the product, you're doing so using the AWS console UI or the S3 APIs.
And in this demo lesson, you'll be enabling a feature which allows S3 to essentially operate as a web server.
It allows anybody with a web browser to interact with an S3 bucket, load an index page, and load pictures or other media that are contained within that bucket using standard HTTP.
So that's what we're going to do.
And to get started, we need to move across to the S3 console.
So either use S3 in recently visited services, or you can click on the services dropdown, type S3, and then click it in this list.
Now that we're at the S3 console, we're going to create an S3 bucket.
Now, if you chose to register a domain earlier in the course, like I did, so I registered animalsforlife.io, then we're going to connect this S3 bucket with the custom domain that we registered so we can access it using that domain.
If you chose not to use a domain, then don't worry, you can still do this demo.
What you need to do is to go ahead and click on Create a bucket.
Now for the bucket name, if you are not using a custom domain, then you can enter whatever you want in this bucket name as long as it's unique.
If you did register a custom domain and you want to use this bucket with that domain, then you need to enter a DNS formatted bucket name.
So in my case, I'm going to create a bucket which is called top10.
It's going to store the world's best cat pictures, the top 10 cat pictures in the world ever.
And it's going to be part of the animalsforlife.io domain.
And so at the end of this, I'm going to add a dot and then animalsforlife.io.
And if you've registered your own custom domain, then obviously you need to add your own domain at the end.
You can't use the same name as me.
Once you've entered that name, just scroll down and uncheck Block All Public Access; this is a safety feature of S3.
But because we're intentionally creating an S3 bucket to be used as a static website, we need to uncheck this box.
Now, unchecking this box means that you will be able to grant public access.
It doesn't mean that public access is granted automatically when you uncheck this box.
They're separate steps.
You will, though, need to acknowledge that you understand the risks of unticking that box.
So check this box just to confirm that you understand.
We'll be carefully configuring the security so you don't have to worry about any of those risks.
And once you've set that, we can leave everything else as default.
So just scroll all the way down to the bottom and click on Create Bucket.
So the bucket's been created, but right now, this only allows access using the S3 APIs or the console UI.
So we need to enable static website hosting.
Now, to do that, we're going to click on the bucket.
Once we have the bucket open, we're going to select the Properties tab.
On the Properties tab, scroll all the way down to the bottom.
And right at the very bottom, we've got static website hosting.
And you need to click on the Edit button next to that.
It's a simple yes or no choice at this point.
So check the box to enable static website hosting.
There are a number of different types of hosting.
You can either just host a static website, which is what we'll choose, or you can redirect requests for an object.
So this allows you to redirect to a different S3 bucket.
We'll be covering this later in the course.
For now, just leave this selected, so Host a static website.
Now, in order to use the static website hosting feature, you'll need to provide S3 with two different documents.
The index document is used as the home or default page for the static website hosting.
So if you don't specify a particular object when you're browsing to the bucket, for example winky.jpeg, and just browse to the bucket itself, then the index document is used.
And we're going to specify index.html.
So this means that an object called index.html will be loaded if we don't specify one.
Now, the error document is used whenever you have any errors.
So if you specify that you want to retrieve an object from the bucket, which doesn't exist, the error document is used.
And for the error document, we're going to call this error.html.
So these two values always need to be provided when you enable static website hosting.
So now we've provided those, we can scroll down and click on save changes.
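As a rough sketch, the rule just described behaves like this small hypothetical function. It's simplified: real S3 static hosting also handles details like trailing-slash folder indexes and redirects.

```python
# Simplified model of how S3 static website hosting picks an object to serve.
def resolve(path, objects, index_doc="index.html", error_doc="error.html"):
    key = path.lstrip("/") or index_doc   # no key requested -> serve the index document
    return key if key in objects else error_doc  # missing key -> serve the error document

objects = {"index.html", "error.html", "img/winky.jpeg"}
print(resolve("/", objects))                 # -> index.html
print(resolve("/img/winky.jpeg", objects))   # -> img/winky.jpeg
print(resolve("/wrongindex.html", objects))  # -> error.html
```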
Now that that feature's enabled, if we just scroll all the way down to the bottom, you'll see that we have a URL for this bucket.
So go ahead and copy that into your clipboard.
We're going to need this shortly.
So this is the URL that you'll use by default to browse to this bucket.
Now, next, what we need to do is to upload some objects to the bucket, which this static website hosting feature is going to use.
Now, to do that, scroll all the way to the top and just click on objects and then click on upload.
So this is the most recent UI version for S3.
And so you have the ability to add files or add folders.
Now, we're going to use both of these.
We're going to use the add files button to add the index.html and the error.html.
And we're going to use the add folder to add a folder of images.
So first, let's do the add files.
So click on add files.
Now, attached to this video is a link which downloads all of the assets that you'll need for this demo.
So go ahead and click on that link to download the zip file and then extract that zip file to your local machine.
And you'll need to move to the folder that you extracted from the zip file.
It should be called static_website_hosting.
So go to that folder.
And then again, there should be a folder in there called website_files.
So go ahead and click on there to go into that folder.
Now, there are three things inside this folder, index.html, error.html and img.
So we'll start by uploading both of these HTML documents.
So select index.html and error.html and then click on open.
And that will add both of these to this upload table.
Next, click on add folder and then select the img folder and click on upload.
So this has prepared all of these different objects ready to upload to this S3 bucket.
If we scroll down, we'll see that the destination for these uploads is our S3 bucket. Your name here will be different; as long as it's the same as the name you picked for the bucket, that's fine.
Go all the way to the bottom and then go ahead and click on upload.
And that will upload the index.html, the error.html and then the folder called img as well as the contents of that folder.
So at this point, that's all of the objects uploaded to the S3 bucket and we can go ahead and click on close.
So now let's try browsing to this bucket using static website hosting.
So go ahead and click on properties, scroll all the way down to the bottom and here we've got the URL for this S3 bucket.
So go ahead and copy this into your clipboard, open a new tab and then open this URL or click on this symbol to open it in a new tab.
What you'll see is a 403 forbidden error and this is an access denied.
You're getting this error because you don't have any permissions to access the objects within this S3 bucket.
Remember, S3 is private by default and just because we've enabled static website hosting doesn't mean that we have any permissions to access the objects within this S3 bucket.
We're accessing this bucket as an anonymous or unauthenticated user.
So we have no method of providing any credentials to S3 when we're accessing objects via static website hosting.
So we need to give permissions to any unauthenticated or anonymous users to access the objects within this bucket.
So that's the next thing we need to do.
We need to grant permissions to be able to read these objects to any unauthenticated user.
So how do we do that?
The method we're going to use is a bucket policy.
So that's what I'm gonna demonstrate in order to grant access to these objects.
Now to add a bucket policy, we need to select the permissions tab.
So click on permissions and then below block public access, there's a box to specify a bucket policy.
So click on edit and we need to add a bucket policy.
Now also in the folder that you extracted from this lesson's zip file is a file called bucket_policy.json, and this is a generic bucket policy.
So this bucket policy has an effect of allow and it applies to any principal, because we have this star wildcard. And because the effect is allow, it grants any principal the ability to use the s3:GetObject action, which allows anyone to read an object inside an S3 bucket, and it applies to this resource.
So this is a generic template, we need to update it, but go ahead and copy it into your clipboard, go back to the S3 console and paste it into this box.
Now we need to replace this generic ARN, so this example bucket ARN.
So what I want you to do is to copy this bucket ARN at the top of the screen.
So copy this into your clipboard and we need to replace part of this template ARN with what we've just copied.
Now an important point to highlight is that this ARN has forward slash star on the end because this ARN refers to any objects within this S3 bucket.
So we need to select only the part before the forward slash.
So starting at the A and ending at the end of examplebucket, and then just paste in the ARN of our bucket that we copied into our clipboard.
What you should end up with is this full ARN with the name of the bucket that you created and then forward slash star.
And once you've got that, go ahead and click on save changes.
This applies a bucket policy which allows any principal, so even unauthenticated principals, the ability to get any of the objects inside this bucket.
So this means that any principal will be able to read objects inside this bucket.
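The policy we just applied can also be generated programmatically. Here's a minimal sketch that builds the same public-read document; the bucket name is the example from this demo, so substitute your own.

```python
import json

# Build the public-read bucket policy used in the demo.
# "top10.animalsforlife.io" is the example bucket name -- substitute yours.
def public_read_policy(bucket_name):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",                             # any principal, incl. anonymous
            "Action": "s3:GetObject",                     # read objects only
            "Resource": f"arn:aws:s3:::{bucket_name}/*",  # every object in the bucket
        }],
    }

policy = public_read_policy("top10.animalsforlife.io")
print(json.dumps(policy, indent=2))
```

Note the `/*` on the resource ARN: it makes the statement apply to the objects in the bucket, not the bucket itself.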
At this point, assuming everything's okay, if you've still got the tab open to the bucket, then go back to that tab and hit refresh.
And what you should see is the top 10 animals in the world.
So position number one, we've got Merlin.
At position number two, we've got Merlin again.
Position number three, another Merlin.
Four, still Merlin.
And then Merlin again at number five.
At number six, we've got Boris.
So the token non Merlin cat.
Number seven, Samson, another token non Merlin cat.
And then number eight, we've got different cat one.
He looks quite a lot like Merlin.
Number nine, different cat two, again, kind of looks like Merlin.
And then number 10, we've got the family.
And then you might not have guessed this, but this entire top 10 contest was judged by, you guessed it, Merlin.
So what you're loading here is the index.html document inside the bucket.
So we haven't specified an object to load.
And because of that, it's using the index document that we specified on the bucket.
We can load the same object by typing specifically index.html on the end, and that will load in the same object.
Now, if we specify an object which doesn't exist, so let's say we used wrong index.html, then instead of the index document, now it's going to load the error document.
So this is the error document that you specified, which is loading error.html.
So this is just an example of how you can configure an S3 bucket to act as a standard static website.
So what it's doing is loading in the index.html object inside the bucket.
And that index.html is loading in images, which are also stored in the bucket.
So if I right click and copy the image location and open this in a new tab, this is essentially just loading this image from the same S3 bucket.
So it's loading it from this folder called img, and it's called Merlin.jpeg.
It's just an object loading from within the bucket.
Now if I go back to the S3 console and just move across to the properties tab and then scroll down, so far in this lesson, you've been accessing this bucket using the bucket website endpoint.
So this is an endpoint that's derived from the name of the bucket.
Now your URL will be different because you will have called your bucket name something else.
Now if you chose to register a custom domain name at the start of this course, you can customize this further.
As long as you call the bucket the same as the DNS name that you want to use, you can actually use Route 53 to assign a custom DNS name for this bucket.
So this part of the demo you'll only be able to do if you've registered a domain within Route 53.
If you haven't, you can skip to the end of this demo where we're going to tidy up.
But if you want to customize this using Route 53, then you can click on the services dropdown and type Route 53 and then click to move to the Route 53 console.
Once you're there, you can click on hosted zones and you should have a hosted zone that matches the domain that you registered at the start of the course.
Go inside that and click on create record.
Now we're going to be creating a simple routing record.
So make sure that's selected and then click on next.
And we're going to define a simple record.
Now I'm going to type the first part of the name of the bucket.
So I used top10.animalsforlife.io as my bucket name.
So I'm going to put top10 in this box.
Now, because we want to point this at our S3 bucket, we need to choose an endpoint in this dropdown.
So click in this dropdown and then scroll down and we're going to pick alias to S3 website endpoint.
So select that.
Next, you need to choose the region and you should have created the S3 bucket in the US East 1 region because this is the default for everything that we do in the course.
So go ahead and type us-east-1 and then select US East (N. Virginia), and you should be able to click in Enter S3 endpoint and select your bucket name.
Now, if you don't see your bucket here, then either you've picked the wrong region or you've not used the same name in this part of the record name as you picked for your bucket.
So make sure this entire name, so this component plus the domain that you use matches the name that you've selected for the bucket.
Assuming it does, you should be able to pick your bucket in this dropdown.
Once you've selected it, go ahead and click on define simple record.
And once that's populated in the box, click on create records.
Now, once this record's created, you might have to wait a few moments, but you should find that you can then open this bucket using this full DNS name.
So there we go.
It opens up the same bucket.
So we've used Route 53 and we've integrated it using an alias to our S3 endpoint.
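For reference, the record the console just created can be sketched as the equivalent Route 53 change batch. The hosted zone ID below is the fixed, AWS-published zone ID for the s3-website-us-east-1 endpoint; treat it as an assumption and verify it against current AWS documentation before using it.

```python
# Sketch of the change batch behind "alias to S3 website endpoint" in the console.
def s3_alias_change(fqdn, website_endpoint, endpoint_zone_id):
    return {
        "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": fqdn,
                "Type": "A",                      # alias records are type A (or AAAA)
                "AliasTarget": {
                    "HostedZoneId": endpoint_zone_id,   # zone of the *endpoint*, not your zone
                    "DNSName": website_endpoint,
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    }

batch = s3_alias_change(
    "top10.animalsforlife.io",              # must match the bucket name exactly
    "s3-website-us-east-1.amazonaws.com",
    "Z3AQBSTGFYJSTF",                       # assumed: published zone ID for us-east-1
)
print(batch["Changes"][0]["ResourceRecordSet"]["Name"])
```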
Now, again, you can only do this if you create a bucket with the same name as the fully qualified domain name that we just configured.
So this is an example of a fully qualified domain name.
Now, this is the host component of DNS and this is the domain component.
So together they make up a fully qualified domain name and for this to work, you need to create an S3 bucket with the same bucket name as this fully qualified domain name.
And that's what I did at the start of this lesson, which is why it works for me.
And as long as you've done the same, as long as you've registered a custom domain, as long as you've called the bucket the same as what you're creating within Route 53, then you should be able to reference that bucket and then access it using this custom URL.
At this point, we're going to tidy up.
So go back to the Route 53 console and select this record that you've created and then click on delete.
You'll need to confirm it by clicking delete again.
Then we need to go back to the S3 console, select the bucket that you've created, click on empty, and you'll need to either type or copy and paste, permanently delete into this box, and then click on empty.
It'll take a few minutes to empty the bucket.
Once it's completed, click on exit.
And with the bucket still selected, click on delete to delete the bucket.
And you'll need to confirm that by either typing or copy and pasting the name of the bucket and then click delete bucket.
Now, at this point, that's everything that you need to do in this lesson.
It's just an opportunity to experience the theory that you learned in the previous lesson.
Now, there's a lot more that you can do with static website hosting and I'll be going into many more complex examples later on in the course.
But for now, this is everything that you need to do.
So go ahead and complete this video.
And when you're ready, I'll look forward to you joining me in the next lesson.
learn.cantrill.io
Welcome back.
In this lesson, I want to talk about a feature of S3 which I use all the time, personally and when I'm doing consulting work for clients. That is S3 static website hosting.
Until this point, we've been accessing S3 via the normal method which is using the AWS APIs. You might not have realized that, but that's how the AWS CLI tools and the console UI work behind the scenes. For instance, to access any objects within S3, we're using the S3 APIs, and assuming we're authenticated and authorized, we use the get object API call to access those resources.
Now, accessing S3 using APIs is useful in certain situations because it's secure and flexible. But using static website hosting can make S3 infinitely more useful because it allows access via standard HTTP, like individuals using a web browser. So, you can use it to host almost anything, for example, a simple blog.
Using static website hosting is pretty simple. You enable it, and in doing so, you have to set an index document and an error document. When you're using a website, if you access a particular page, say, for example, cats_are_amazing.html, then you will get access specifically to that page. If you don't specify a page, for example, netflix.com, you get what's called an index page, which is a default page returned to you when you aren't asking for anything specific. This is the entry point to most websites.
So, when enabling static website hosting on an S3 bucket, we have to point the index document at a specific object in the S3 bucket. The error document is the same, but it's used when something goes wrong. So, if you access a file which isn't there or there is another type of server-side error, that's when the error document is shown.
Now, both of these need to be HTML documents because the static website hosting feature delivers HTML files. When you enable this feature on a bucket, AWS creates a static website hosting endpoint, and this is a specific address that the bucket can be accessed from using HTTP. The exact name of this endpoint is influenced by the bucket name that you choose and the region that the bucket is in. You don't get to select this name; it's automatically generated by those two things.
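As a sketch, that derivation looks like this. I'm assuming the dash-separated endpoint form used by us-east-1; some regions use `s3-website.<region>` with a dot instead, so check the console for your region's exact endpoint.

```python
# Derive the static website hosting endpoint from bucket name and region.
# Assumption: the dash-separated form (e.g. us-east-1); some regions differ.
def website_endpoint(bucket, region):
    return f"http://{bucket}.s3-website-{region}.amazonaws.com"

print(website_endpoint("top10.animalsforlife.io", "us-east-1"))
# -> http://top10.animalsforlife.io.s3-website-us-east-1.amazonaws.com
```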
Now, you can use your own custom domain for a bucket, but if you do want to do that, then your bucket name matters. You can only use a custom domain with a bucket if the name of the bucket matches the domain. So, if I wanted to have a website called top10.animalsforlife.org, then my bucket name would need to be called top10.animalsforlife.org, and that's why I mentioned at the start of the course to get into the habit of reserving your website names by creating S3 buckets using those names.
Static website hosting is great for things like hosting static websites such as blogs, but it's also good for other things. Let's take a look at two common examples. There are two specific scenarios which are perfect for S3: Offloading and out-of-band pages.
With offloading, let's say you have a website which has a top 10 leaderboard of all of the best animals, and it runs on a compute service; let's assume this is EC2 for now. The compute service does a few things: it delivers a dynamic HTML page and it delivers static media, in this example, images. That dynamic HTML page might need access to a database, so that's not suitable for static S3 hosting, but the static media is just sitting there waiting to be delivered, and in most cases it probably makes up over 95% of the data volume that the compute service is delivering, and likely almost all of the storage space.
Compute services tend to be relatively expensive, so we can offload a lot of this to S3. What we can do is take all of the images, so all of the media that the compute service hosts, and move that media to an S3 bucket which has static website hosting enabled. Then, when the compute service generates the HTML file and delivers it to the customer's browser, this HTML file points at the media that's hosted on the S3 bucket, so the media is retrieved from S3, not the compute service. S3 is likely to be much cheaper for the storage and delivery of any media versus a compute service. S3 is custom-designed for the storage of large data at scale, and so generally, whenever you've got an architecture such as this, you should always consider offloading any large data to S3.
Now, the other benefit that I wanted to specifically highlight is out-of-band pages. Now, out-of-band is an old telecommunications term. In an IT context, it generally means a method of accessing something that is outside of the main way. So, for example, you might use out-of-band server management, and this lets you connect to a management card that's in a server using the cellular network. That way, if the server is having networking issues with the normal access methods of the normal network, then you can still access it.
In the context of this example, an out-of-band page might be an error or status notification page for top10.animalsforlife.org, the favorite animals page for Animals for Life. If that service was hosted on a compute service such as EC2, and we wanted a maintenance page to show during scheduled or unscheduled maintenance periods, it wouldn't make much sense to host this on the same server, because if the server is being worked on, then it's inherently offline. Additionally, putting it on a different EC2 instance is also risky, because if EC2 itself has issues, it might not let us show a status page.
So what we do is host out-of-band pages on another service. If the server was offline for maintenance or it was experiencing stability or performance issues, then we could change our DNS and point customers at a backup static website hosted on S3, and this could provide a status message or maybe the details for our business's support team.
Now, the pricing structure for S3, once you understand it, is very simple, but it's formed of a number of major components. First, we've got the cost to store data on S3, and this is generally expressed as a per-gigabyte-month fee. So to store a gigabyte of data on S3 for one month, there's a certain cost, and if you store data for less than one month, then you only pay that component, and if you store less than one gig, you only pay that component, so it's a per-gig-month charge.
Now, there's also a data transfer fee, so for every gigabyte of data that you transfer in and out of S3, there's a cost. Now, to transfer data into S3 is always free, so you're never charged for transferring data into S3. To transfer data out of S3, there is a per-gigabyte charge. Now, for storage and data transfer, they are both incredibly cheap. It's sort of the cheapest storage that's available, especially if you're storing large amounts of data, but there is also a third component that you do need to be aware of, and this is especially important when you're using static website hosting, and this is that you're charged a certain amount for requesting data. So, every time you perform an operation, every time you get, every time you list, every time you put, that's classed as an operation, and different operations in S3 have different costs per 1,000 operations.
Now, the reason I mention this is if you're using static website hosting, you're generally not going to store a lot of data. You're also generally not going to transfer a lot of data, because what's stored in the S3 bucket is likely to be very small. But if you have a large customer base, and if this out-of-band website or this offloading bucket is actually being used heavily by your system, then you could be generating a lot of requests, and so you need to be aware of the request charges for S3.
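As a back-of-envelope sketch, those three cost components combine like this. The rates below are assumed placeholder figures for illustration only; check current AWS pricing for real numbers.

```python
# Rough S3 monthly cost model: storage + egress + requests.
# All rates are ASSUMED placeholders -- consult current AWS pricing.
def s3_monthly_cost(storage_gb, egress_gb, get_requests, put_requests,
                    storage_rate=0.023,   # $/GB-month (assumed)
                    egress_rate=0.09,     # $/GB transferred out (assumed; ingress is free)
                    get_rate=0.0004,      # $ per 1,000 GET requests (assumed)
                    put_rate=0.005):      # $ per 1,000 PUT requests (assumed)
    return (storage_gb * storage_rate
            + egress_gb * egress_rate
            + get_requests / 1000 * get_rate
            + put_requests / 1000 * put_rate)

# A tiny static site: 1 GB stored, 10 GB served, 500k GETs, 1k PUTs.
print(round(s3_monthly_cost(1, 10, 500_000, 1_000), 4))  # -> 1.128
```

Notice that with a small site, requests and egress dominate the bill, not storage, which is the point made above.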
Now, in terms of what's provided in the free tier, you're given 5 GB of monthly storage inside S3. You're allowed 20,000 GET requests and 2,000 PUT requests, so that will cover us for the demo lesson that we're going to do inside this course and probably most of the other activities that we'll do throughout the course, but if you're going to use S3 for any real-world usage, then you will be billed for that usage.
Now, I run a personal blog, cantrill.io, and that runs from S3 using the static website hosting feature, and because I post certification articles, it does get some fairly heavy use. Now, in the entire time that I've run my personal blog, I think the most that I've ever been charged for S3 usage is 17 cents in one month. So when I talk about being charged for S3, I'm going to mention it whenever we go beyond the free tier, but keep in mind that often I'm talking about really tiny amounts of money relative to the value that you're getting. So, you can store a lot in S3 and use it to deliver a lot of data and often be charged a really tiny amount of money, often something that isn't noticeable on the bill of a production AWS account.
Okay, that's enough on the theory of static website hosting, so now it's time for a demo, and in this demo, we're going to be using S3 to create a simple static website. Now, I think this demo is going to be a useful one because it brings together a few of the theory concepts that I've been talking about over the last few lessons. So, go ahead, mark this video as complete, and when you're ready, you can join me in the next demo lesson.
learn.cantrill.io
Welcome back and in this lesson I want to start talking about S3 security in more detail. Starting with bucket policies which are a type of AWS resource policy. So by now you know the drill, let's jump in and get started.
Now before we start I want to repeat one thing and you have heard me say this before, but I'm going to say it again over and over. S3 is private by default. Everything that we can do to control S3 permissions is based on this starting point. The only identity which has any initial access to an S3 bucket is the account root user of the account which owns that bucket, so the account which created it. Anything else, so any other permissions have to be explicitly granted. And there are a few ways that this can be done.
The first way is using an S3 bucket policy. And an S3 bucket policy is a type of resource policy. A resource policy is just like an identity policy, but as the name suggests, they're attached to resources instead of identities, in this case an S3 bucket. Resource policies provide a resource perspective on permissions. The difference between resource policies and identity policies is all about this perspective. With identity policies you're controlling what that identity can access. With resource policies you're controlling who can access that resource. So it's from an inverse perspective. One is identities and one is resources.
Now identity policies have one pretty significant limitation. You can only attach identity policies to identities in your own account. And so identity policies can only control security inside your account. With identity policies you have no way of giving an identity in another account access to an S3 bucket. That would require an action inside that other account. Resource policies allow this. They can allow access from the same account or different accounts because the policy is attached to the resource and it can reference any other identities inside that policy. So by attaching the policy to the resource and then having flexibility to be able to reference any other identity, whether they're in the same account or different accounts, resource policies therefore provide a great way of controlling access for a particular resource, no matter what the source of that access is.
Now think about that for a minute because that's a major benefit of resource policies, the ability to grant other accounts access to resources inside your account. They also have another benefit, resource policies can allow or deny anonymous principals. Identity policies by design have to be attached to a valid identity in AWS. You can't have one attached to nothing. Resource policies can be used to open a bucket to the world by referencing all principals, even those not authenticated by AWS. So that's anonymous principals. So bucket policies can be used to grant anonymous access.
So two of the very common uses for bucket policies are to grant access to other AWS accounts and anonymous access to a bucket. Let's take a look at a simple visual example of a bucket policy because I think it will help you understand how everything fits together. There's a demo lesson coming up soon where you'll implement one as part of the mini project. So you will get some experience soon enough of how to use bucket policies.
Let's say that we have an AWS account and inside this account is a bucket called Secret Cat Project. Now I can't say what's inside this bucket because it's a secret, but I'm sure that you can guess. Now attached to this bucket is a bucket policy. Resource policies have one major difference from identity policies, and that's the presence of an explicit principal component. The principal part of a resource policy defines which principals are affected by the policy. The policy is attached to a bucket in this case, but we need a way to say who is impacted by the configuration of that policy. Because a bucket policy can contain multiple statements, there might be one statement which affects your account and one which affects another account, as well as one which affects a specific user. The principal part of a policy, or more specifically the principal part of a statement in a policy, defines who that statement applies to: which identities, which principals.
Now in an identity policy this generally isn't there because it's implied that the identity which the policy is applied to is the principal. That's logical right? Your identity policy by definition applies to you so you are the principal. So a good way of identifying if a policy is a resource policy or an identity policy is the presence of this principal component. If it's there it's probably a resource policy. In this case the principal is a wild card, a star, which means any principal. So this policy applies to anyone accessing the S3 bucket.
So let's interpret this policy. First, the effect is allow and the principal is star, so any principal. This statement allows any principal to perform the action s3:GetObject on any object inside the Secret Cat Project S3 bucket. So in effect, it allows anyone to read any object inside this bucket. This applies equally to identities in the same AWS account as the bucket, to identities in other AWS accounts or partner accounts, and, crucially, to anonymous principals, so principals who haven't authenticated to AWS. Bucket policies should be your default thought when it comes to granting anonymous access to objects in buckets, and they're one way of granting external accounts that same access. They can also be used to set the default permissions on a bucket. If you want to grant everyone access to Boris's picture, for example, and then grant certain identities extra rights or even deny certain rights, you can do that. Bucket policies are really flexible and can do many other things.
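As a concrete sketch, the policy just described would look something like this (the bucket name secretcatproject is assumed here purely for illustration):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::secretcatproject/*"
        }
    ]
}
```

Notice the Resource ends in /* so the statement applies to the objects inside the bucket, and the Principal is the star wildcard, so it applies to any principal, including anonymous ones.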
So let's quickly look at a couple of common examples. Bucket policies can be used to control who can access objects, even allowing conditions which block specific IP addresses. In this example, the bucket policy denies access to any objects in the secret cat project bucket unless your IP address is 1.3.3.7. The condition block means the statement only applies if the condition is true. So if your source IP address is not 1.3.3.7, then the statement applies and access is denied. If your IP address is 1.3.3.7, the condition is not met, because it's a NotIpAddress condition, and so this deny statement does not apply and you get whatever other access is applicable.
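A sketch of that IP-based deny statement, using the NotIpAddress condition operator and the aws:SourceIp condition key (bucket name again assumed for illustration):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnlessTrustedIP",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::secretcatproject/*",
            "Condition": {
                "NotIpAddress": {"aws:SourceIp": "1.3.3.7/32"}
            }
        }
    ]
}
```

The /32 suffix restricts the exception to that single address; a wider CIDR range such as 1.3.3.0/24 would exempt a whole network.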
Now, bucket policies can be much more complex. In this example, one specific prefix in the bucket, remember this is what a folder really is inside a bucket, so one specific prefix called Boris is protected with MFA. It means that accesses to the Boris folder in the bucket are denied if the identity that you're using does not use MFA. The second statement allows read access to objects in the whole bucket. Because an explicit deny overrides an allow, the top statement applies to just that specific prefix in the bucket, so just Boris. Now, I won't labour on about bucket policies because we'll be using them a fair bit throughout the course, but they can range from simple to complex. I will include a link in the lesson description with some additional examples that you can take a look through if you're interested.
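A sketch of how those two statements might be combined; the bucket name and the boris prefix are assumed for illustration, and the MFA check uses the aws:MultiFactorAuthPresent condition key:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyBorisUnlessMFA",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::secretcatproject/boris/*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            }
        },
        {
            "Sid": "AllowWholeBucketRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::secretcatproject/*"
        }
    ]
}
```

BoolIfExists is used rather than a plain Bool so that requests where the MFA key is absent entirely, such as anonymous requests, are also denied for the protected prefix.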
In summary though, a resource policy is associated with a resource. A bucket policy, which is a type of resource policy, is logically associated with a bucket, which is a type of resource. Now, there can only be one bucket policy on a bucket, but it can have multiple statements. If an identity inside one AWS account is accessing a bucket, also in that same account, then the effective access is a combination of all of the applicable identity policies plus the resource policy, so the bucket policy. For any anonymous access, so access by an anonymous principal, then only the bucket policy applies, because logically, if it's an anonymous principal, it's not authenticated and so no identity policies apply.
Now, if an identity in an external AWS account attempts to access a bucket in your account, your bucket policy applies as well as anything that's in their identity policies. So there's a two-step process if you're doing cross-account access. The identity in their account needs to be able to access S3 in general and your bucket, and then your bucket policy needs to allow access from that identity, so from that external account.
Now, there is another form of S3 security. It's used less often these days, but I wanted to cover it anyway. Access control lists, or ACLs, are a way to apply security to objects or buckets. An ACL is a sub-resource of that object or of that bucket. Remember in the S3 introduction lesson earlier in the course, I talked about sub-resources; well, this is one of those sub-resources. Now, I almost didn't want to talk about ACLs because they are legacy. AWS don't even recommend their use and prefer that you use bucket policies or identity policies. But as a bare minimum, I want you to be aware of their existence.
Now, part of the reason that they aren't used all that often, and that bucket policies have replaced much of what they do, is that they're actually inflexible and only allow very simple permissions. They can't have conditions like bucket policies, so you're restricted to a few very broad permissions. Let me show you what I mean. This is an example of what permissions can be controlled using an ACL. Now, apologies for the wall of text, but I think it's useful to visualize it all at once. There are five permissions which can be granted in an ACL: read, write, readACP, writeACP and full control. That's it. So it's already significantly less flexible than an identity or a resource policy. What these five things do depends on whether they're applied to a bucket or an object. Read permissions on a bucket, for example, allow you to list all objects in that bucket, whereas write permissions on a bucket allow the grantee, which is the principal being granted those permissions, to overwrite and delete any object in that bucket. Read permissions on an object allow the grantee just to read that specific object and its metadata.
Now, with ACLs you either configure an ACL on the bucket, or you configure the ACL on an object. But you don't have the flexibility of being able to have a single ACL that affects a group of objects. You can't do that. That's one of the reasons that a bucket policy is significantly more flexible. It is honestly so much less flexible than a bucket policy to the extent where I won't waste your time with it anymore. It's legacy, and I suspect at some point it won't be used anymore. If there are any specific places in the course which do require knowledge of ACLs, I'll mention it. Otherwise, it's best to almost ignore the fact that they exist.
Now, before we finish up, one final feature of S3 permissions, and that's the block public access settings. In the overall lifetime of the S3 product, this was actually added fairly recently, and it was added in response to lots of public PR disasters where buckets were being configured incorrectly and being set so that they were open to the world. This resulted in a lot of data leaks, and the root cause was a mixture of genuine mistakes or administrators who didn't fully understand the S3 permissions model.
So consider this example: an S3 bucket with a resource policy granting public access. Before block public access was introduced, if you had public access configured, the public could access the bucket; whatever the resource policy granted applied without further restriction, because public access was simply public access. Block public access added a further level of security, another boundary, and on this boundary sit the block public access settings, which apply no matter what the bucket policy says, but which apply only to public access, not to any defined AWS identities. So these settings only affect anonymous principals, somebody who isn't an authenticated AWS identity, attempting to access a bucket using that public access configuration.
Now these settings can be set when you create the bucket and adjusted afterwards. They're pretty simple to understand. You can choose the top option which blocks any public access to the bucket, no matter what the resource policy says. It's a full override, a failsafe. Or you can choose the second option which allows any public access granted by any existing ACLs when you enable the setting but it blocks any new ones. The third option blocks any public access granted by ACLs no matter if it was enabled before or after the block public access settings were enabled. The fourth setting allows any existing public access granted by bucket policies or access point policies so anything enabled at the time when you enable this specific block public access setting, they're allowed to continue but it blocks any new ones. The fifth option blocks both existing and new bucket policies from granting any public access.
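Under the covers, these console options map onto the four settings of the S3 PublicAccessBlock configuration; enabling all four is equivalent to the top "block all public access" option:

```json
{
    "BlockPublicAcls": true,
    "IgnorePublicAcls": true,
    "BlockPublicPolicy": true,
    "RestrictPublicBuckets": true
}
```

Roughly: BlockPublicAcls rejects new public ACLs, IgnorePublicAcls ignores any existing public ACLs, BlockPublicPolicy rejects new bucket policies which grant public access, and RestrictPublicBuckets restricts access to buckets that already have public policies.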
Now they're simple enough and they function as a final failsafe. If you're ever in a situation where you've granted some public access and it doesn't work, these are probably the settings which are causing that inconsistency. And don't worry, I'll show you where these are accessed in the demo lesson.
Now before we finish up, just one final thing I want to cover, and this is an exam powerup. These are just some key points on how to remember all of the theory that I've discussed in this lesson. When I first started in AWS, I found it hard to know from instinct when to use identity policies versus resource policies versus ACLs. Choosing between resource policies and identity policies is, much of the time, a preference thing. Do you want to control permissions from the perspective of a bucket, or do you want to grant or deny access from the perspective of the identities accessing a bucket? Are you looking to configure one user accessing 10 different buckets, or 100 users accessing the same bucket? It's often a personal choice, a choice on what makes sense for your situation and business. So there's often no right answer, but there are some situations where one makes sense over the other.
If you're granting or denying permissions on lots of different resources across an AWS account, then you need to use identity policies because not every service supports resource policies. And besides, you would need a resource policy for each service so that doesn't make sense if you're controlling lots of different resources. If you have a preference for managing permissions all in one place, that single place needs to be IAM, so identity policies would make sense. IAM is the only single place in AWS you can control permissions for everything. You can sometimes use resource policies but you can use IAM policies all the time. If you're only working with permissions within the same account so no external access, then identity policies within IAM are fine because with IAM you can only manage permissions for identities that you control in your account. So there are a wide range of situations where IAM makes sense and that's why most permissions control is done within IAM. But there are some situations which are different. You can use bucket policies or resource policies in general if you're managing permissions on a specific product. So in this case S3. If you want to grant a single permission to everybody accessing one resource or everybody in one account, then it's much more efficient to use resource policies to control that base level permission. If you want to directly allow anonymous identities or external identities from other AWS accounts to access a resource, then you should use resource policies.
Now finally, and I know this might seem like I'm anti-access control list, which is true, but so are AWS: never use ACLs unless you really need to, and even then, consider if you can use something else. At this point in time, if you are using an ACL, you have to be pretty certain that you can't use anything else, because they're legacy, they're inflexible, and AWS are actively recommending against their use. So keep that in mind.
Okay, well that's all of the theory that I wanted to cover in this lesson. I know it's been a lot, but we do have to cover this detailed level of security because it's needed in the exam. And you'll be using it constantly throughout the rest of this section and the wider course. At this point, though, go ahead and complete this video. And when you're ready, you can join me in the next where I'm going to be talking about another exciting feature of S3.
Welcome to this lesson, where I'm going to very briefly talk about a special type of IAM role, and that's service-linked roles.
Now, luckily there isn't a great deal of difference between service-linked roles and IAM roles. They're just used in a very specific set of situations. So let's jump in and get started.
So simply put, a service-linked role is an IAM role linked to a specific AWS service. They provide a set of permissions which is predefined by a service. These permissions allow a single AWS service to interact with other AWS services on your behalf.
Now, service-linked roles might be created by the service itself, or the service might allow you to create the role during the setup process of that service. Service-linked roles might also get created within IAM.
The key difference between service-linked roles and normal roles is that you can't delete a service-linked role until it's no longer required. This means it must no longer be used within that AWS service. So that's the one key difference.
In terms of permissions needed to create a service-linked role, here's an example of a policy that allows you to create a service-linked role.
You'll notice a few key elements in this policy. The top statement is an allow statement. The action is iam:CreateServiceLinkedRole. For the resource, it has SERVICE-NAME.amazonaws.com.
The important thing here is not to try to guess this, as different services express this in different ways. The formatting can differ, and it's case-sensitive. I've included a link with an overview of these details attached to this lesson.
When creating this type of policy to allow someone to create service-linked roles, you have to be careful to ensure you do not guess this element of a statement.
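For reference, the documented shape of such a statement looks like this, with SERVICE-NAME and the role-name prefix left as placeholders precisely because they differ per service and shouldn't be guessed:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:CreateServiceLinkedRole",
            "Resource": "arn:aws:iam::*:role/aws-service-role/SERVICE-NAME.amazonaws.com/SERVICE-LINKED-ROLE-NAME-PREFIX*",
            "Condition": {
                "StringLike": {"iam:AWSServiceName": "SERVICE-NAME.amazonaws.com"}
            }
        }
    ]
}
```

Both the Resource ARN and the iam:AWSServiceName condition value embed the service's own name, which is why the exact, case-sensitive string from the AWS documentation must be used.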
Another important consideration with service-linked roles is role separation. When I talk about role separation, I'm not using it in a technical sense, but in a job role sense.
Role separation is where you might give one group of people the ability to create roles and another group the ability to use them. For instance, we might want to give Bob, one of our users, the ability to use a service-linked role with an AWS service.
This uses the architecture of taking a pre-created role and assigning it to a service. If you want to give Bob the ability to use a preexisting role with a service, but not create or edit that role, you would need to provide Bob with PassRole permissions. This allows Bob to pass an existing role into an AWS service. It's an example of role separation, meaning Bob could configure a service with a role that has already been created by a member of the security team. Bob would just need ListRoles and PassRole permissions on that specific role.
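A sketch of an identity policy granting Bob that capability; the account ID and role name are hypothetical. Note that iam:ListRoles does not support resource-level permissions, so it uses a wildcard resource, while iam:PassRole is scoped to the one pre-created role:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:ListRoles",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::123456789012:role/pre-created-service-role"
        }
    ]
}
```

Scoping PassRole tightly like this is what enforces the separation: Bob can hand this one role to a service, but can't pass arbitrary, more privileged roles.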
This is similar to when you use a pre-created role, for example, with a CloudFormation stack. By default, when creating a CloudFormation stack, CloudFormation uses the permissions of your identity to interact with AWS. This means you need permissions not only to create a stack but also to create the resources that the stack creates. However, you can give, for example, a user like Bob the ability to pass a role into CloudFormation. That role could have permissions that exceed those which Bob directly has. So a role that Bob uses could have the ability to create AWS resources that Bob does not. Bob might have access to create a stack and pass in a role, but the role provides CloudFormation with the permissions needed to interact with AWS.
PassRole is a method inside AWS that allows you to implement role separation, and it's something you can also use with service-linked roles. This is something I wanted to reiterate to emphasize that passing a role is a very important AWS security architecture.
That is everything I wanted to cover in this very brief lesson. It's really just an extension of what you've already learned about IAM roles, and it's something you'll use in demo lessons elsewhere in the course.
For now, I just want you to be aware of how service-linked roles differ from normal roles and how the PassRole architecture works. With that being said, that's everything I wanted to cover in this video.
So go ahead and complete the video, and when you're ready, I look forward to you joining me in the next.
Welcome back, and in this video, I want to talk about AWS Control Tower. This is a product which is becoming required knowledge if you need to use AWS in the real world. And because of this, it's starting to feature more and more in all of the AWS exams. I want this to be a lesson applicable to all of the AWS study paths, so think of this as a foundational lesson. And if required, for the course that you're studying, I might be going into additional detail. We do have a lot to cover, so let's jump in and get started.
At a high level, Control Tower has a simple but wide-ranging job, and that's to allow the quick and easy setup of multi-account environments. You might be asking, "Doesn't AWS Organizations already do that?" Well, kind of. Control Tower actually orchestrates other AWS services to provide the functionality that it does, and one of those services is AWS Organizations. But it goes beyond that. Control Tower uses Organizations, IAM Identity Center, which is the product formerly known as AWS SSO. It also uses CloudFormation, AWS Config, and much more. You can think of Control Tower as another evolution of AWS Organizations adding significantly more features, intelligence, and automation.
There are a few different parts of Control Tower which you need to understand, and it's worth really focusing on understanding the distinction now because we're going to be building on this later. First, we've got the Landing Zone, and simply put, this is the multi-account environment part of Control Tower. This is what most people will be interacting with when they think of Control Tower. Think of this like AWS Organizations only with superpowers. It provides, via other AWS services, single sign-on and ID Federation so you can use a single login across all of your AWS accounts, and even share this with your existing corporate identity store. And this is provided using the IAM Identity Center, again, the service formerly known as AWS SSO. It also provides centralized logging and auditing, and this uses a combination of CloudWatch, CloudTrail, AWS Config, and SNS. Everything else in the Control Tower product surrounds this Landing Zone, and I'll show you how this looks later in this lesson.
Control Tower also provides guardrails, again, more detail on this is coming up soon. But these are designed to either detect or mandate rules and standards across all AWS accounts within the Landing Zone. You also have the Account Factory, which provides really cool automation for account creation, and adds features to standardize the creation of those accounts. This goes well beyond what AWS Organizations can do on its own, and I'll show you how this works over the rest of this lesson. And if applicable, for the path that you're studying, there will be a demo coming up elsewhere in the course. Finally, there's a dashboard which offers a single-page oversight of the entire organization. At a high level, that's what you get with Control Tower.
Now, things always make more sense visually, so let's step through this high-level architecture, and I hope this will add a little bit more context. We start with Control Tower itself, which, like AWS Organizations, is something you create from within an AWS account, and this account becomes the management account of the Landing Zone. At this topmost level, within the management account, we have Control Tower itself, which orchestrates everything. We have AWS Organizations, and as you've already experienced, this provides the multi-account structure, so organizational units and service control policies. And then we have single sign-on provided by the IAM Identity Center, which historically was known as AWS SSO. This allows for, as the name suggests, single sign-on, which means we can use the same set of internal or federated identities to access everything in the Landing Zone that we have permissions to. This works in much the same way as AWS SSO worked, but it's all set up and orchestrated by Control Tower.
When Control Tower is first set up, it generally creates two organizational units: the foundational organizational unit, which by default is called Security, and a custom organizational unit, which by default is named Sandbox. Inside the foundational or Security organizational unit, Control Tower creates two AWS accounts, the Audit account and the Log Archive account. The Log Archive account is for users who need access to all logging information for all of your enrolled accounts within the Landing Zone. Examples of things stored within this account are AWS Config and CloudTrail logs; they're kept in this account so that they're isolated. You have to explicitly grant access to this account, and it offers a secure, read-only archive account for logging.
The Audit account is for your users who need access to the audit information made available by Control Tower. You can also use this account as a location for any third-party tools to perform auditing of your environment. It's in this account that you might use SNS for notifications of changes to governance and security policies, and CloudWatch for monitoring Landing Zone wide metrics. It's at this point where Control Tower becomes really awesome because we have the concept of an Account Factory. Think of this as a team of robots who are creating, modifying, or deleting AWS accounts as your business needs them. And this can be interacted with both from the Control Tower console or via the Service Catalog.
Within the custom organizational unit, Account Factory will create AWS accounts in a fully automated way as many of them as you need. The configuration of these accounts is handled by Account Factory. So, from an account and networking perspective, you have baseline or cookie-cutter configurations applied, and this ensures a consistent configuration across all AWS accounts within your Landing Zone. Control Tower utilizes CloudFormation under the covers to implement much of this automation, so expect to see stacks created by the product within your environment. And Control Tower uses both AWS Config and Service Control Policies to implement account guardrails. And these detect drifts away from governance standards, or prevent those drifts from occurring in the first place.
At a high level, this is how Control Tower looks. Now the product can scale from simple to super complex. This is a product which you need to use in order to really understand. And depending on the course that you're studying, you might have the opportunity to get some hands-on later in the course. If not, don't worry, that means that you only need this high-level understanding for the exam.
Let's move on and look at the various parts of Control Tower in a little bit more detail, starting with the main points of the Landing Zone. It's a feature designed to allow anyone to implement a well-architected, multi-account environment, and it has the concept of a home region, which is the region that you initially deploy the product into, for example, us-east-1. You can explicitly allow or deny the usage of other AWS regions, but the home region, the one that you deploy into, is always available. The Landing Zone is built using AWS Organizations, AWS Config, CloudFormation, and much more. Essentially, Control Tower is a product which brings the features of lots of different AWS products together and orchestrates them.
I've mentioned that there's a concept of the foundational OU, by default called the Security OU, and within this, Log Archive and Audit AWS accounts. And these are used mainly for security and auditing purposes. You've also got the Sandbox OU which is generally used for testing and less rigid security situations. You can create other organizational units and accounts, and for a real-world deployment of Control Tower, you're generally going to have lots of different organizational units. Potentially, even nested ones to implement a structure which works for your organization.
Landing Zone utilizes the IAM Identity Center, again, formerly known as AWS SSO, to provide SSO or single sign-on services across multiple AWS accounts within the Landing Zone, and it's also capable of ID Federation. And ID Federation simply means that you can use your existing identity stores to access all of these different AWS accounts. The Landing Zone provides monitoring and notifications using CloudWatch and SNS, and you can also allow end users to provision new AWS accounts within the Landing Zone using Service Catalog.
This is the Landing Zone at a high level. Let's next talk about guardrails. Guardrails are essentially rules for multi-account governance. Guardrails come in three different types: mandatory, strongly recommended, or elective. Mandatory ones are always applied. Strongly recommended are obviously strongly recommended by AWS. And elective ones can be used to implement fairly niche requirements, and these are completely optional.
Guardrails themselves function in two different ways. We have preventative, and these stop you doing things within your AWS accounts in your Landing Zone, and these are implemented using Service Control policies, which are part of the AWS Organizations product. These guardrails are either enforced or not enabled, so you can either enforce them or not. And if they're enforced, it simply means that any actions defined by that guardrail are prevented from occurring within any of your AWS accounts. An example of this might be to allow or deny usage of AWS regions, or to disallow bucket policy changes within accounts inside your Landing Zone.
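Since preventative guardrails are implemented as Service Control Policies, a common region-restriction guardrail looks something like the sketch below. The global services exempted via NotAction are illustrative, not exhaustive, and the allowed region list is an assumption:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideAllowedRegions",
            "Effect": "Deny",
            "NotAction": [
                "iam:*",
                "organizations:*",
                "route53:*",
                "support:*"
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "us-west-2"]}
            }
        }
    ]
}
```

Because SCPs are a boundary rather than a grant, this denies any non-exempt action requested outside the listed regions, regardless of what identity policies in member accounts allow.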
The second functional type of guardrail is detective, and you can think of this as a compliance check. This uses AWS Config rules and allows you to check that the configuration of a given thing within an AWS account matches what you define as best practice. These type of guardrails are either clear, in violation, or not enabled. And an example of this would be a detective guardrail to check whether CloudTrail is enabled within an AWS account, or whether any EC2 instances have public IPv4 addresses associated with those instances. The important distinction to understand here is that preventative guardrails will stop things occurring, and detective guardrails will only identify those things. So, guardrails are a really important security and governance construct within the Control Tower product.
Lastly, I want to talk about the Account Factory itself. This is essentially a feature which allows automated account provisioning, and this can be done by either cloud administrators or end users with appropriate permissions. And this automated provisioning includes the application of guardrails, so any guardrails which are defined can be automatically applied to these automatically provisioned AWS accounts.
Because these accounts can be provisioned by end users, think of these as members of your organization, then either these members of your organization or anyone that you define can be given admin permissions on an AWS account which is automatically provisioned. This allows you to have a truly self-service, automatic process for provisioning AWS accounts so you can allow any member of your organization within tightly controlled parameters to be able to provision accounts for any purpose which you define as okay. And that person will be given admin rights over that AWS account. These can be long-running accounts or short-term accounts. These accounts are also configured with standard account and network configuration. If you have any organizational policies for how networking or any account settings are configured, these automatically provisioned accounts will come with this configuration. And this includes things like the IP addressing used by VPCs within the accounts, which could be automatically configured to avoid things like addressing overlap. And this is really important when you're provisioning accounts at scale.
The Account Factory allows accounts to be closed or repurposed, and this whole process can be tightly integrated with a business's SDLC or software development life cycle. So, as well as doing this from the console UI, the Control Tower product and Account Factory can be integrated using APIs into any SDLC processes that you have within your organization. If you need accounts to be provisioned as part of a certain stage of application development, or you want accounts to be provisioned as part of maybe client demos or software testing, then you can do this using the Account Factory feature.
At this point, that is everything I wanted to cover at this high level about Control Tower. If you need practical experience of Control Tower for the course that you are studying, there will be a demo lesson coming up elsewhere in the course, which gives you that practical experience. Don't be concerned if this is the only lesson that there is, or if there's this lesson plus additional deep-dive theory. I'll make sure, for whatever course you're studying, you have enough exposure to Control Tower.
With that being said, though, that is the end of this high-level video. So go ahead and complete the video, and when you're ready, I'll look forward to you joining me in the next.
Welcome back and welcome to this CloudTrail demo where we're going to set up an organizational trail and configure it to log data for all accounts in our organization to S3 and CloudWatch logs.
The first step is that you'll need to be logged into the IAM admin user of the management account of the organization. As a reminder, this is the general account. To set up an organizational trail, you always need to be logged into the management account. To set up individual trails, you can do that locally inside each of your accounts, but it's always more efficient to use an organizational trail.
Now, before we start the demonstration, I want to talk briefly about CloudTrail pricing. I'll make sure this link is in the lesson description, but essentially there is a fairly simple pricing structure to CloudTrail that you need to be aware of.
The 90-day history that's enabled by default in every AWS account is free. You don't get charged for that; it comes free by default with every AWS account. Next, you have the ability to get one copy of management events free in every region in each AWS account. This means creating one trail that's configured for management events in each region in each AWS account, and that comes for free. If you create any additional trails, so you get any additional copies of management events, they are charged at two dollars per 100,000 events. That won't apply to us in this demonstration, but you need to be aware of that if you're using this in production.
Logging data events always comes at a charge, so we're not going to enable data events for this demo lesson. But if you do enable them, they come at a charge of 10 cents per 100,000 events, irrespective of how many trails you have. This charge applies from the first data event you log.
What we'll be doing in this demo lesson is setting up an organizational trail which will create a trail in every region in every account inside the organization. But because we get one for free in every region in every account, we won't incur any charges for the CloudTrail side of things. We will be charged for any S3 storage that we use. However, S3 also comes with a free tier allocation for storage, which I don't expect us to breach.
With that being said, let's get started and implement this solution. To do that, we need to be logged in to the console UI again in the management account of the organization. Then we need to move to the CloudTrail console. If you've been here recently, it will be in the Recently Visited Services. If not, just type CloudTrail in the Find Services box and then open the CloudTrail console.
Once you're at the console, you might see a screen like this. If you do, then you can just click on the hamburger menu on the left and then go ahead and click on trails. Now, depending on when you're doing this demo, if you see any warnings about a new or old console version, make sure that you select the new version so your console looks like what's on screen now.
Once you're here, we need to create a trail, so go ahead and click on create trail. To create a trail, you're going to be asked for a few important pieces of information, the first of which is the trail name. For trail name, we're going to use "animals4life.org," so just go ahead and enter that. By default, with this new UI version, when you create a trail, it's going to create it in all AWS regions in your account. If you're logged into the management account of the organization, as we are, you also have the ability to enable it for all regions in all accounts of your organization. We're going to do that because this allows us to have one single logging location for all CloudTrail logs in all regions in all of our accounts, so go ahead and check this box.
By default, CloudTrail stores all of its logs in an S3 bucket. When you're creating a trail, you can either create a new S3 bucket or use an existing one. We're going to create a brand new bucket for this trail. Bucket names within S3 need to be globally unique across all regions and all AWS accounts, and they must be lowercase. We're going to name this bucket "cloudtrail", then a hyphen, then "animals-for-life", another hyphen, and then a random number. You'll need to pick something different from me and from every other student doing this demo. If you get an error about the bucket name being in use, just change this random number.
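Since the random number is what keeps your bucket name unique, here's a small sketch of generating a candidate name with a random suffix and sanity-checking it against the S3 naming rules (lowercase letters, digits, and hyphens; 3 to 63 characters). The function name and prefix are just for illustration.

```python
import random
import re

def cloudtrail_bucket_name(prefix="cloudtrail-animals-for-life"):
    """Generate a candidate S3 bucket name with a random numeric
    suffix. Bucket names must be 3-63 characters, lowercase letters,
    digits, and hyphens, and globally unique across all of AWS."""
    name = f"{prefix}-{random.randint(100000, 999999)}"
    # Basic check against the S3 bucket naming rules.
    assert re.fullmatch(r"[a-z0-9][a-z0-9-]{1,61}[a-z0-9]", name), name
    return name

print(cloudtrail_bucket_name())
```

Even with a random suffix, a collision with another account's bucket is possible, which is why the console error handling described above still matters: if the name is taken, generate another.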
You're also able to specify if you want the log files stored in the S3 bucket to be encrypted. This is done using SSE-KMS encryption. This is something that we'll be covering elsewhere in the course, and for production usage, you would definitely want to use it. For this demonstration, to keep things simple, we're not going to encrypt the log files, so go ahead and untick this box.
Under additional options, you're able to select log file validation, which adds an extra layer of security. This means that if any of the log files are tampered with, you have the ability to determine that. This is a really useful feature if you're performing any account-level audits. In most production situations, I do enable this, but you can also elect to have an SNS notification delivery. So, every time log files are delivered into this S3 bucket, you can have a notification. This is useful for production usage or if you need to integrate this with any non-AWS systems, but for this demonstration, we'll leave this one unchecked.
You also have the ability, as well as storing these log files into S3, to store them in CloudWatch logs. This gives you extra functionality because it allows you to perform searches, look at the logs from a historical context inside the CloudWatch logs user interface, as well as define event-driven processes. You can configure CloudWatch logs to scan these CloudTrail logs and, in the event that any particular piece of text occurs in the logs (e.g., any API call, any actions by a user), you can generate an event that can invoke, for example, a Lambda function or spawn some other event-driven processing. Don't worry if you don't understand exactly what this means at this point; I'll be talking about all of this functionality in detail elsewhere in the course. For this demonstration, we are going to enable CloudTrail to put these logs into CloudWatch logs as well, so check this box. You can choose a log group name within CloudWatch logs for these CloudTrail logs. If you want to customize this, you can, but we're going to leave it as the default.
As with everything inside AWS, if a service is acting on our behalf, we need to give it the permissions to interact with other AWS services, and CloudTrail is no exception. We need to give CloudTrail the ability to interact with CloudWatch logs, and we do that using an IAM role. Don’t worry, we’ll be talking about IAM roles in detail elsewhere in the course. For this demonstration, just go ahead and select "new" because we're going to create a new IAM role that will give CloudTrail the ability to enter data into CloudWatch logs.
Now we need to provide a role name, so go ahead and enter "CloudTrail_role_for_CloudWatch_logs" and then an underscore and then "animals_for_life." The name doesn’t really matter, but in production settings, you'll want to make sure that you're able to determine what these roles are for, so we’ll use a standard naming format. If you expand the policy document, you'll be able to see the exact policy document or IAM policy document that will be used to give this role the permissions to interact with CloudWatch logs. Don’t worry if you don’t fully understand policy documents at this point; we’ll be using them throughout the course, and over time you'll become much more comfortable with exactly how they're used. At a high level, this policy document will be attached to this role, and this is what will give CloudTrail the ability to interact with CloudWatch logs.
At this point, just scroll down; that's everything that we need to do, so go ahead and click on "next." Now, you'll need to select what type of events you want this trail to log. You’ve got three different choices. The default is to log only management events, so this logs any events against the account or AWS resources (e.g., starting or stopping an EC2 instance, creating or deleting an EBS volume). You've also got data events, which give you the ability to log any actions against things inside resources. Currently, CloudTrail supports a wide range of services for data event logging. For this demonstration, we won't be setting this up with data events initially because I’ll be covering this elsewhere in the course. So, go back to the top and uncheck data events.
You also have the ability to log insight events, which can identify any unusual activity, errors, or user behavior on your account. This is especially useful from a security perspective. For this demonstration, we won’t be logging any insight events; we’re just going to log management events. For management events, you can further filter down to read or write or both and optionally exclude KMS or RDS data API events. For this demo lesson, we’re just going to leave it as default, so make sure that read and write are checked. Once you've done that, go ahead and click on "next." On this screen, just review everything. If it all looks good, click on "create trail."
Now, if you get an error saying the S3 bucket already exists, you'll just need to choose a new bucket name. Click on "edit" at the top, change the bucket name to something that's globally unique, and then follow that process through again and create the trail.
After a few moments, the trail will be created. It should show US East (N. Virginia) as the home region, and even though you didn't get the option to select this (it's enabled by default), it is a multi-region trail. Finally, it is an organizational trail, which means that this trail is now logging any CloudTrail events from all regions in all accounts in this AWS organization.
Now, this isn't real-time, and when you first enable it, it can take some time for anything to start to appear in either S3 or CloudWatch logs. At this stage, I recommend that you pause the video and wait for 10 to 15 minutes before continuing, because the initial delivery of that first set of log files through to S3 can take some time. So pause the video, wait 10 to 15 minutes, and then you can resume.
Next, right-click the link under the S3 bucket and open that in a new tab. Go to that tab, and you should start to see a folder structure being created inside the S3 bucket. Let's move down through this folder structure, starting with CloudTrail. Go to US East 1 and continue down through this folder structure.
In my case, I have quite a few of these log files that have been delivered already. I'm going to pick one of them, the most recent, and just click on Open. Depending on the browser that you're using, you might have to download and then uncompress this file. Because I'm using Firefox, it can natively open the GZ compressed file and then automatically open the JSON log file inside it.
So this is an example of a CloudTrail event. We're able to see the user identity that actually generates this event. In this case, it's me, I am admin. We can see the account ID that this event is for. We can see the event source, the event name, the region, the source IP address, the user agent (in this case, the console), and all of the relevant information for this particular interaction with the AWS APIs are logged inside this CloudTrail event.
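To show what pulling those fields out of a log file looks like, here's a minimal sketch. The field names (userIdentity, eventName, awsRegion, and so on) are the ones visible in the event above, but the values here are made up for illustration.

```python
import json

# A trimmed record in the shape of a CloudTrail event; the field
# names match the real log format, the values are invented.
raw = json.dumps({"Records": [{
    "eventVersion": "1.08",
    "userIdentity": {"type": "IAMUser", "userName": "iamadmin",
                     "accountId": "111111111111"},
    "eventSource": "s3.amazonaws.com",
    "eventName": "CreateBucket",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "203.0.113.10",
    "userAgent": "console.amazonaws.com"
}]})

def summarize(record):
    """Pull out who did what, and where: the same fields we just
    looked at in the console-delivered log file."""
    return (record["userIdentity"]["userName"],
            record["eventName"],
            record["awsRegion"])

for rec in json.loads(raw)["Records"]:
    print(summarize(rec))
```

One practical note: the files CloudTrail delivers to S3 are gzip-compressed, so in a script you'd open them with something like `gzip.open(path, "rt")` before handing the contents to `json.load`.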
Don’t worry if this doesn’t make a lot of sense at this point. You’ll get plenty of opportunities to interact with this type of logging event as you go through the various theory and practical lessons within the course. For now, I just want to highlight exactly what to expect with CloudTrail logs.
Since we’ve enabled all of this logging information to also go into CloudWatch logs, we can take a look at that as well. So back at the CloudTrail console, if we click on Services and then type CloudWatch, wait for it to pop up, locate Logs underneath CloudWatch, and then open that in a new tab.
Inside CloudWatch, on the left-hand menu, look for Logs, and then Log Groups, and open that. You might need to give this a short while to populate, but once it does, you should see a log group for the CloudTrail that you’ve just created. Go ahead and open that log group.
Inside it, you’ll see a number of log streams. These log streams will start with your unique organizational code, which will be different for you. Then there will be the account number of the account that it represents. Again, these will be different for you. And then there’ll be the region name. Because I’m only interacting with the Northern Virginia region, currently, the only ones that I see are for US East 1.
In this particular account that I’m in, the general account of the organization, if I look at the ARN (Amazon Resource Name) at the top or after US East 1 here, this number is my account number. This is the account number of my general account. So if I look at the log streams, you’ll be able to see that this account (the general account) matches this particular log stream. You’ll be able to do the same thing in your account. If you look for this account ID and then match it with one of the log streams, you'll be able to pull the logs for the general AWS account.
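If you're doing this account-to-stream matching with many accounts, it can be handy to split the stream name programmatically. This sketch assumes the underscore-separated form implied by the console output described above (organization ID, then account ID, then the region); the example values are hypothetical, and the exact format may differ in your account.

```python
def parse_stream_name(stream):
    """Split an organizational-trail log stream name into its parts,
    assuming the form <org-id>_<account-id>_CloudTrail_<region>.
    The stream name below uses made-up example values."""
    org_id, account_id, _, region = stream.split("_")
    return {"org": org_id, "account": account_id, "region": region}

print(parse_stream_name("o-a1b2c3d4e5_111111111111_CloudTrail_us-east-1"))
```

With this, matching a log stream to an account is just a dictionary lookup on the parsed account ID rather than eyeballing long numbers in the console.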
If I go inside this particular log stream, as CloudTrail logs any activity in this account, all of that information will be populated into CloudWatch logs. And that’s what I can see here. If I expand one of these log entries, we’ll see the same formatted CloudTrail event that I just showed you in my text editor. So the only difference when using CloudWatch logs is that the CloudTrail events also get entered into a log stream in a log group within CloudWatch logs. The format looks very similar.
Returning to the CloudTrail console, one last thing I want to highlight: if you expand the menu on the left, whether you enable a particular trail or not, you’ve always got access to the event history. The event history stores a log of all CloudTrail events for the last 90 days for this particular account, even if you don’t have a specific trail enabled. This is standard functionality. What a trail allows you to do is customize exactly what happens to that data. This area of the console, the event history, is always useful if you want to search for a particular event, maybe check who’s logged onto the account recently, or look at exactly what the IAM admin user has been doing within this particular AWS account.
The reason why we created a trail is to persistently store that data in S3 as well as put it into CloudWatch logs, which gives us that extra functionality. With that being said, that’s everything I wanted to cover in this demo lesson.
One thing you need to be aware of is that S3, as a service, provides a free tier allocation in every new AWS account, so you can store a certain amount of data in S3 free of charge. The problem with CloudTrail, and especially organizational trails, is that they generate quite a large number of requests, and the S3 free tier limits not just storage space but also the number of requests per month.
If you leave this CloudTrail enabled for the duration of your studies, for the entire month, it is possible that this will go slightly over the free tier allocation for requests within the S3 service. You might see warnings that you’re approaching a billable threshold, and you might even get a couple of cents of bill per month if you leave this enabled all the time. To avoid that, if you just go to Trails, open up the trail that you’ve created, and then click on Stop Logging. You’ll need to confirm that by clicking on Stop Logging, and at that point, no logging will occur into the S3 bucket or into CloudWatch logs, and you won’t experience those charges.
For any production usage, the low cost of this service means that you would normally leave it enabled in all situations. But to keep costs within the free tier for this course, you can, if required, just go ahead and stop the logging. If you don’t mind a few cents per month of S3 charges for CloudTrail, then by all means, go ahead and leave it enabled.
With that being said, that’s everything I wanted to cover in this demo lesson. So go ahead, complete the lesson, and when you're ready, I look forward to you joining me in the next.
-
-
learn.cantrill.io
-
Welcome to this lesson, where I'm going to be introducing CloudTrail.
CloudTrail is a product that logs API actions which affect AWS accounts. If you stop an instance, that's logged. If you change a security group, that's logged too. If you create or delete an S3 bucket, that's logged by CloudTrail. Almost everything that can be done to an AWS account is logged by this product.
Now, I want to quickly start with the CloudTrail basics. The product logs API calls or account activities, and every one of those logged activities is called a CloudTrail event. A CloudTrail event is a record of an activity in an AWS account. This activity can be an action taken by a user, a role, or a service.
CloudTrail by default stores the last 90 days of CloudTrail events in the CloudTrail event history. This is an area of CloudTrail which is enabled by default in AWS accounts. It's available at no cost and provides 90 days of history on an AWS account.
If you want to customize CloudTrail in any way beyond this 90-day event history, you need to create a trail. We'll be looking at the architecture of a trail in a few moments' time.
CloudTrail events can be one of three different types: management events, data events, and insight events. If applicable to the course you are studying, I'll be talking about insight events in a separate video. For now, we're going to focus on management events and data events.
Management events provide information about management operations performed on resources in your AWS account. These are also known as control plane operations. Think of things like creating an EC2 instance, terminating an EC2 instance, creating a VPC. These are all control plane operations.
Data events contain information about resource operations performed on or in a resource. Examples of this might be objects being uploaded to S3 or objects being accessed from S3, or when a Lambda function is invoked. By default, CloudTrail only logs management events because data events are often much higher volume. Imagine if every access to an S3 object was logged; it could add up pretty quickly.
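As a small illustration of the management-versus-data split, here's a sketch that separates a batch of event records using the managementEvent flag that appears on CloudTrail records (the sample records and their values are made up; treat the exact field as an assumption to verify against your own logs).

```python
def split_events(records):
    """Separate management (control plane) events from data events
    using the managementEvent flag on each CloudTrail record."""
    management = [r for r in records if r.get("managementEvent", True)]
    data = [r for r in records if not r.get("managementEvent", True)]
    return management, data

sample = [
    {"eventName": "RunInstances", "managementEvent": True},  # control plane
    {"eventName": "GetObject", "managementEvent": False},    # S3 data event
    {"eventName": "Invoke", "managementEvent": False},       # Lambda data event
]
mgmt, data = split_events(sample)
print(len(mgmt), len(data))  # 1 2
```

Notice the volume difference even in this tiny sample: one control plane action versus two data accesses. At real scale that ratio is far more extreme, which is exactly why data events are off by default and billed separately.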
A CloudTrail trail is the unit of configuration within the CloudTrail product. It's a way you provide configuration to CloudTrail on how to operate. A trail logs events for the AWS region that it's created in. That's critical to understand. CloudTrail is a regional service.
When you create a trail, it can be configured to operate in one of two ways: as a one-region trail or as an all-regions trail. A single-region trail is only ever in the region that it's created in, and it only logs events for that region. An all-regions trail, on the other hand, can be thought of as a collection of trails in every AWS region, but it's managed as one logical trail. It also has the additional benefit that if AWS adds any new regions, the all-regions trail is automatically updated.
The next thing to understand is global service events. This is a specific configuration item on a trail which determines whether it only logs events for the region that it's in or whether it also logs global services events. Most services log events in the region where the event occurred. For example, if you create an EC2 instance in ap-southeast-2, it's logged to that region. A trail would need to be either a one-region trail in that region or an all-regions trail to capture that event.
A very small number of services log events globally to one region. For example, global services such as IAM, STS, or CloudFront are very globally-focused services and always log their events to US East 1, which is Northern Virginia. These types of events are called global service events, and a trail needs to have this enabled in order to log these events. This feature is normally enabled by default if you create a trail inside the user interface.
AWS services are largely split up into regional services and global services. When these different types of services log to CloudTrail, they either log in the region that the event is generated in or they log to US East 1 if they are global services. So, when you're diagnosing problems or architecting solutions, if the logs you are trying to reach are generated by global services like IAM, STS, or CloudFront, these will be classified as global service events and that will need to be enabled on a trail.
Otherwise, a trail will only log events for the isolated region that it’s created in. When you create a trail, it is one of two types: one-region or all-regions. A one-region trail is always isolated to that one region, and you would need to create one-region trails in every region if you wanted to do it manually. Alternatively, you could create an all-regions trail, which encompasses all of the regions in AWS and is automatically updated as AWS adds new regions.
Once you’ve created a trail, management events and data events are all captured by the trail based on whether it's isolated to a region or set to all regions. For an all-region trail, it captures management events and, if enabled, data events. Data events are not generally enabled by default and must be explicitly set when creating a trail. This trail will then listen to everything that's occurring in the account.
Remember that the CloudTrail event history is limited to 90 days. However, when you create a trail, you can be much more flexible. A trail by default can store the events in a definable S3 bucket, and the logs generated and stored in an S3 bucket can be stored there indefinitely. You are only charged for the storage used in S3. These logs are stored as a set of compressed JSON log files, which consume minimal space. Being JSON formatted, they can be read by any tooling capable of reading standard format files, which is a great feature of CloudTrail.
Another option is that CloudTrail can be integrated with CloudWatch Logs, allowing data to be stored in that product. CloudTrail can take all the logging data it generates and, in addition to putting it into S3, it can also put it into CloudWatch Logs. Once it's in CloudWatch Logs, you can use that product to search through it or use a metric filter to take advantage of the data stored there. This makes it much more powerful and gives you access to many more features if you use CloudWatch Logs versus S3.
One of the more recent additions to the CloudTrail product is the ability to create an organizational trail. If you create this trail from the management account of an organization, it can store all the information for all the accounts inside that organization. This provides a single management point for all API and account events across every account in the organization, which is super powerful and makes managing multi-account environments much easier.
So, we need to talk through some important elements of CloudTrail point by point. CloudTrail is enabled by default on AWS accounts, but it’s only the 90-day event history that’s enabled by default. You don’t get any storage in S3 unless you configure a trail. Trails are how you can take the data that CloudTrail’s got access to and store it in better places, such as S3 and CloudWatch Logs.
The default for trails is to store management events only, which includes management plane events like creating an instance, stopping an instance, terminating an instance, creating or deleting S3 buckets, and logins to the console. Anything interacting with AWS products and services from a management perspective is logged by default in CloudTrail. Data events need to be specifically enabled and come at an extra cost. I’ll discuss this in more detail in the demo lesson, as you need to be aware of the pricing of CloudTrail. Much of the service is free, but there are certain elements that do carry a cost, especially if you use it in production.
Most AWS services log data to the same region that the service is in. There are a few specific services, such as IAM, STS, and CloudFront, which are classified as true global services and log their data as global service events to US East 1. A trail needs to be enabled to capture that data.
That's critical and might come up as an exam question. Something you will also definitely find coming up as an exam-style question is whether to use CloudTrail for real-time logging. This is one of the limitations of the product: it is not real-time. CloudTrail typically delivers log files within 15 minutes of the account activity occurring and generally publishes log files multiple times per hour. This means you can't rely on CloudTrail to provide a complete and exhaustive list of events right up to the point you're looking. Sometimes it takes a few minutes for the data to arrive in S3 or CloudWatch Logs. Keep this in mind if you face any exam questions about real-time logging: CloudTrail is not the product.
Okay, so that's the end of the theory in this lesson. It's time for a demo. In the next lesson, we’ll be setting up an organizational trail within our AWS account structure. We’ll configure it to capture all the data for all our member accounts and our management account, storing this data in an S3 bucket and CloudWatch Logs within the management account. I can’t wait to get started. It’s a fun one and will prove very useful for both the exam and real-world usage.
So go ahead, complete this video, and when you're ready, you can join me in the demo lesson.
-
-
learn.cantrill.io
-
Welcome to this lesson, where I'm going to introduce the theory and architecture of CloudWatch Logs.
I've already covered the metrics side of CloudWatch earlier in the course, and I'm covering the logs part now because you'll be using it when we cover CloudTrail. In the CloudTrail demo, we'll be setting up CloudTrail and using CloudWatch Logs as a destination for those logs. So, you'll need to understand it, and we'll be covering the architecture in this lesson. Let's jump in and get started.
CloudWatch Logs is a public service. The endpoint to which applications connect is hosted in the AWS public zone. This means you can use the product within AWS VPCs, from on-premises environments, and even other cloud platforms, assuming that you have network connectivity as well as AWS permissions.
The CloudWatch Logs product allows you to store, monitor, and access logging data. Logging data, at a very basic level, consists of a piece of data and a timestamp. The timestamp generally includes the year, month, day, hour, minute, second, and timezone. There can be more fields, but at a minimum it's generally a timestamp and some data.
CloudWatch Logs has built-in integrations with many AWS services, including EC2, VPC Flow Logs, Lambda, CloudTrail, Route 53, and many more. Any services that integrate with CloudWatch Logs can store data directly inside the product. Security for this is generally provided by using IAM roles or service roles.
For anything outside AWS, such as logging custom application or OS logs on EC2, you can use the unified CloudWatch agent. I’ve mentioned this before and will be demoing it later in the EC2 section of the course. This is how anything outside of AWS products and services can log data into CloudWatch Logs. So, it’s either AWS service integrations or the unified CloudWatch agent. There is a third way, using development kits for AWS to implement logging into CloudWatch Logs directly into your application, but that tends to be covered in developer and DevOps AWS courses. For now, just remember either AWS service integrations or the unified CloudWatch agent.
CloudWatch Logs are also capable of taking logging data and generating a metric from it, known as a metric filter. Imagine a situation where you have a Linux instance, and one of the operating system log files logs any failed connection attempts via SSH. If this logging information was injected into CloudWatch Logs, a metric filter can scan those logs constantly. Anytime it sees a mention of the failed SSH connection, it can increment a metric within CloudWatch. You can then have alarms based on that metric, and I’ll be demoing that very thing later in the course.
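The failed-SSH example above boils down to pattern matching plus a counter, and that's easy to sketch locally. This is a simulation of what a metric filter does, not the CloudWatch API; the log lines are invented sshd-style samples.

```python
def failed_ssh_count(log_lines, pattern="Failed password"):
    """Count log events containing a pattern, the way a CloudWatch
    Logs metric filter increments a metric once per matching event."""
    return sum(1 for line in log_lines if pattern in line)

lines = [
    "Jan 10 10:01:02 host sshd[123]: Failed password for root from 198.51.100.7",
    "Jan 10 10:01:05 host sshd[123]: Accepted password for ubuntu",
    "Jan 10 10:02:11 host sshd[124]: Failed password for admin from 198.51.100.9",
]
print(failed_ssh_count(lines))  # 2
```

In the real product, that count becomes a data point on a CloudWatch metric, and an alarm on the metric is what turns "two failed logins" into a notification or an automated response.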
Let’s look at the architecture visually because I'll be showing you how this works in practice in the CloudTrail demo, which will be coming up later in the section. Architecturally, CloudWatch Logs looks like this: It’s a regional service. So, for this example, let’s assume we’re talking about us-east-1.
The starting point is our logging sources, which can include AWS products and services, mobile or server-based applications, external compute services (virtual or physical servers), databases, or even external APIs. These sources inject data into CloudWatch Logs as log events.
Log events consist of a timestamp and a message block. CloudWatch Logs treats this message as a raw block of data. It can be anything you want, but there are ways the data can be interpreted, with fields and columns defined. Log events are stored inside log streams, which are essentially a sequence of log events from the same source.
For example, if you had a log file stored on multiple EC2 instances that you wanted to inject into CloudWatch Logs, each log stream would represent the log file for one instance. So, you’d have one log stream for instance one and one log stream for instance two. Each log stream is an ordered set of log events for a specific source.
We also have log groups, which are containers for multiple log streams of the same type of logging. Continuing the example, we would have one log group containing everything for that log file. Inside this log group would be different log streams, each representing one source. Each log stream is a collection of log events. Every time an item was added to the log file on a single EC2 instance, there would be one log event inside one log stream for that instance.
A log group also stores configuration settings, such as retention settings and permissions. When we define these settings on a log group, they apply to all log streams within that log group. It’s also where metric filters are defined. These filters constantly review any log events for any log streams in that log group, looking for certain patterns, such as an application error code or a failed SSH login. When detected, these metric filters increment a metric, and metrics can have associated alarms. These alarms can notify administrators or integrate with AWS or external systems to take action.
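The hierarchy just described (a log group containing log streams, each stream an ordered set of log events, with settings like retention held at the group level) can be modelled in a few lines. This is purely an illustration of the relationships, not how CloudWatch Logs is implemented; the instance IDs and messages are made up.

```python
from dataclasses import dataclass, field

@dataclass
class LogEvent:
    timestamp: int   # milliseconds since epoch
    message: str     # raw message block; CloudWatch treats it as opaque

@dataclass
class LogStream:
    name: str                                  # one source, e.g. one instance
    events: list = field(default_factory=list)  # ordered log events

@dataclass
class LogGroup:
    name: str
    retention_days: int  # group-level setting, applies to all streams
    streams: dict = field(default_factory=dict)

    def put(self, stream_name, event):
        """Append an event to the stream for its source, creating
        the stream on first use."""
        stream = self.streams.setdefault(stream_name, LogStream(stream_name))
        stream.events.append(event)

# One log group for one type of logging; one stream per source instance.
group = LogGroup("/var/log/secure", retention_days=30)
group.put("i-0abc", LogEvent(1700000000000, "Failed password for root"))
group.put("i-0def", LogEvent(1700000000500, "Accepted password for ubuntu"))
```

Two instances shipping the same log file end up as two streams in one group, which is exactly the EC2 example from a moment ago.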
CloudWatch Logs is a powerful product. This is the high-level architecture, but don’t worry—you’ll get plenty of exposure to it throughout the course because many AWS products integrate with CloudWatch Logs and use it to store their logging data. We’ll be coming back to this product time and again as we progress through the course. CloudTrail uses CloudWatch Logs, Lambda uses CloudWatch Logs, and VPC Flow Logs use CloudWatch Logs. There are many examples of AWS products where we’ll be integrating them with CloudWatch Logs.
I just wanted to introduce it at this early stage of the course. That’s everything I wanted to cover in this theory lesson. Thanks for watching. Go ahead, complete this video, and when you’re ready, join me in the next.
-
-
learn.cantrill.io
-
Welcome back, and in this demo lesson, I want to give you some experience working with Service Control Policies (SCPs).
At this point, you've created the AWS account structure which you'll be using for the remainder of the course. You've set up an AWS organization, with the general account that created it becoming the management account. Additionally, you've invited the production AWS account into the organization and created the development account within it.
In this demo lesson, I want to show you how you can use SCPs to restrict what identities within an AWS account can do. This is a feature of AWS Organizations.
Before we dive in, let's tidy up the AWS organization. Make sure you're logged into the general account, the management account of the organization, and then navigate to the organization's console. You can either type that into the 'Find Services' box or select it from 'Recently Used Services.'
As discussed in previous lessons, AWS Organizations allows you to organize accounts with a hierarchical structure. Currently, there's only the root container of the organization. To create a hierarchical structure, we need to add some organizational units. We will create a development organizational unit and a production organizational unit.
Select the root container at the top of the organizational structure. Click on "Actions" and then "Create New." For the production organizational unit, name it 'prod.' Scroll down and click on "Create Organizational Unit." Next, do the same for the development unit: select the root container, click on "Actions," and then "Create New." Under 'Name,' type 'dev,' scroll down, and click on "Create Organizational Unit."
Now, we need to move our AWS accounts into these relevant organizational units. Currently, the Development, Production, and General accounts are all contained in the root container, which is the topmost point of our hierarchical structure.
To move the accounts, select the Production AWS account, click on "Actions," and then "Move." In the dialogue that appears, select the Production Organizational Unit and click "Move." Repeat this process for the Development AWS account: select the Development AWS account, click "Actions," then "Move," and select the 'dev' OU before clicking "Move."
Now, we've successfully moved the two AWS accounts into their respective organizational units. If you select each organizational unit in turn, you can see that 'prod' contains the production AWS account, and 'dev' contains the development AWS account. This simple hierarchical structure is now in place.
To prepare for the demo part of this lesson where we look at SCPs, move back to the AWS console. Click on AWS, then the account dropdown, and switch roles into the production AWS account by selecting 'Prod' from 'Role History.'
Once you're in the production account, create an S3 bucket. Type S3 into the 'Find Services' box or find it in 'Recently Used Services' and navigate to the S3 console. Click on "Create Bucket." For the bucket name, call it 'catpics' followed by a random number, because S3 bucket names must be globally unique (and lowercase). I'll use 1, lots of 3s, and then 7. Ensure you select the us-east-1 region for the bucket. Scroll down and click "Create Bucket."
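As an aside, because bucket names must be globally unique, a random numeric suffix like the one used above is a common trick. This is an illustrative sketch only; the prefix and suffix length are assumptions, not anything the lesson mandates:

```python
import random
import string

def unique_bucket_name(prefix: str = "catpics") -> str:
    """Generate a likely-unique, rule-compliant S3 bucket name.

    S3 bucket names must be lowercase, 3-63 characters long, and
    unique across every AWS account globally, hence the random suffix.
    """
    suffix = "".join(random.choices(string.digits, k=12))
    return f"{prefix}-{suffix}"

name = unique_bucket_name()
print(name)  # e.g. catpics-133333333337
```

Adding enough random digits makes a collision with an existing bucket anywhere in AWS very unlikely, which is all the console demo needs.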
After creating the bucket, go inside it and upload some files. Click on "Add Files," then download the cat picture linked to this lesson to your local machine. Upload this cat picture to the S3 bucket by selecting it and clicking "Open," then "Upload" to complete the process.
Once the upload finishes, you can view the picture of Samson. Click on it to see Samson looking pretty sleepy. This demonstrates that you can currently access the Samson.jpg object while operating within the production AWS account.
The key point here is that you’ve assumed an IAM role. By switching roles into the production account, you’ve assumed the role called "OrganizationAccountAccessRole," which has the AdministratorAccess managed policy attached.
Now, we’ll demonstrate how this can be restricted using SCPs. Move back to the main AWS console. Click on the account dropdown and switch back to the general AWS account. Navigate to AWS Organizations, then Policies. Currently, most options are disabled, including service control policies, tag policies, AI services opt-out policies, and backup policies.
Click on Service Control Policies and then "Enable" to activate this functionality. This action adds the "Full AWS Access" policy to the entire organization, which imposes no restrictions, so all AWS accounts maintain full access to all AWS services.
To create our own service control policy, download the file named DenyS3.json linked to this lesson and open it in a code editor. This SCP contains two statements. The first statement is an allow statement with an effect of allow, action as star (wildcard), and resource as star (wildcard). This replicates the full AWS access SCP applied by default. The second statement is a deny statement that denies any S3 actions on any AWS resource. This explicit deny overrides the explicit allow for S3 actions, resulting in access to all AWS services except S3.
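Based on the description above, the two statements in DenyS3.json would look something like the following sketch. The file linked to the lesson is the authoritative version; this just reproduces the described structure:

```python
import json

# Sketch of the DenyS3 SCP as described: a full allow statement,
# plus an explicit deny covering every S3 action.
deny_s3_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Replicates the default FullAWSAccess SCP: allow everything.
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*",
        },
        {
            # Explicit deny for all S3 actions on all resources.
            # Explicit deny overrides the allow above, so S3 access
            # can never be granted to identities in the account.
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": "*",
        },
    ],
}

print(json.dumps(deny_s3_scp, indent=2))
```

The net effect is access to all AWS services except S3, exactly as the lesson describes.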
Copy the content of the DenyS3.json file into your clipboard. Move back to the AWS console, go to the policy section, and select Service Control Policies. Click "Create Policy," delete the existing JSON in the policy box, and paste the copied content. Name this policy "Allow all except S3" and create it.
Now, go to AWS Accounts on the left menu, select the prod OU, and click on the Policies tab. Attach the new policy "Allow all except S3" by clicking "Attach" in the applied policies box. We will also detach the directly attached FullAWSAccess policy: check the box next to FullAWSAccess, click "Detach," and confirm by clicking "Detach Policy."
Now, the only service control policy directly attached to production is "Allow all except S3," which allows access to all AWS products and services except S3.
To verify, go back to the main AWS console and switch roles into the production AWS account. Go to the S3 console and you should receive a permissions error, indicating that you don't have access to list buckets. This is because the SCP attached to the production account explicitly denies S3 access. Access to other services remains unaffected, so you can still interact with EC2.
If we switch back to the general account, reattach the full AWS access policy, and detach "Allow all except S3," the production account will regain access to S3. By following the same process, you’ll be able to access the S3 bucket and view the object once again.
This illustrates how SCPs can be used to restrict access for identities within an AWS account, in this case, the production AWS account.
To clean up, delete the bucket. Select the catpics bucket, click "Empty," type "permanently delete," and select "Empty." Once that's done, you can delete the bucket by selecting it, clicking "Delete," confirming the bucket name, and then clicking "Delete Bucket."
You’ve now demonstrated full control over S3, evidenced by successfully deleting the bucket. This concludes the demo lesson. You’ve created and applied an SCP that restricts S3 access, observed its effects, and cleaned up. We’ll discuss more about boundaries and restrictions in future lessons. For now, complete this video, and I'll look forward to seeing you in the next lesson.
learn.cantrill.io
Welcome back, and in this lesson, I'll be talking about service control policies, or SCPs. SCPs are a feature of AWS Organizations which can be used to restrict AWS accounts. They're an essential feature to understand if you are involved in the design and implementation of larger AWS platforms. We've got a lot to cover, so let's jump in and get started.
At this point, this is what our AWS account setup looks like. We've created an organization for Animals4life, and inside it, we have the general account, which from now on I'll be referring to as the management account, and then two member accounts, so production, which we'll call prod, and development, which we'll be calling dev. All of these AWS accounts are within the root container of the organization. That's to say they aren't inside any organizational units. In the next demo lesson, we're going to be adding organizational units, one for production and one for development, and we'll be putting the member accounts inside their respective organizational units.
Now, let's talk about service control policies. The concept of a service control policy is simple enough. It's a policy document, a JSON document, and these service control policies can be attached to the organization as a whole by attaching them to the root container, or they can be attached to one or more organizational units. Lastly, they can even be attached to individual AWS accounts. Service control policies inherit down the organization tree. This means if they're attached to the organization as a whole, so the root container of the organization, then they affect all of the accounts inside the organization. If they're attached to an organizational unit, then they impact all accounts directly inside that organizational unit, as well as all accounts within OUs inside that organizational unit. If you have nested organizational units, then by attaching them to one OU, they affect that OU and everything below it. If you attach service control policies to one or more accounts, then they just directly affect those accounts that they're attached to.
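The inheritance behaviour described above can be modelled with a small sketch. This is purely illustrative (not an AWS API), and the tree and account names are hypothetical; it just shows that an SCP attached at any container reaches every account at or below that point:

```python
# Hypothetical organization tree: a root container holding one
# account directly, plus two OUs each holding one account.
org_tree = {
    "root": {
        "accounts": ["general"],
        "ous": {
            "prod": {"accounts": ["production"], "ous": {}},
            "dev": {"accounts": ["development"], "ous": {}},
        },
    }
}

def accounts_in_scope(node):
    """All accounts in this container and every nested OU below it."""
    accounts = list(node["accounts"])
    for child in node["ous"].values():
        accounts += accounts_in_scope(child)
    return accounts

# Attached to the root container, an SCP is in scope for every account.
print(accounts_in_scope(org_tree["root"]))
# Attached to the prod OU, only accounts at or below that OU.
print(accounts_in_scope(org_tree["root"]["ous"]["prod"]))
```

Note that being in scope is not the whole story: the management account, even when in scope, is never actually affected by SCPs.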
Now, I mentioned in an earlier lesson that the management account of an organization is special. One of the reasons it's special is that even if the management account has service control policies attached, either directly via an organizational unit, or on the root container of the organization itself, the management account is never affected by service control policies. This can be both beneficial and it can be a limitation, but as a minimum, you need to be aware of it as a security practice. Because the management account can't be restricted using service control policies, I generally avoid using the management account for any AWS resources. It's the only AWS account within AWS Organizations which can't be restricted using service control policies. As a takeaway, just remember that the management account is special and it's unaffected by any service control policies, which are attached to that account either directly or indirectly.
Now, service control policies are account permissions boundaries. What I mean by that is they limit what the AWS account can do, including the Account Root User within that account. I talked earlier in the course about how you can't restrict an Account Root User. And that is true. You can't directly restrict what the Account Root User of an AWS account can do. The Account Root User always has full permissions over that entire AWS account, but with a service control policy, you're actually restricting what the account itself can do, specifically any identities within that account. So you're indirectly restricting the Account Root User because you're reducing the allowed permissions on the account; you're also reducing what the effective permissions on the Account Root User are. This is a really fine detail to understand. You can never restrict the Account Root User. It will always have 100% access to the account, but if you restrict the account, then in effect, you're also restricting the Account Root User.
Now, you might apply a service control policy to prevent any usage of that account outside a known region, for example, us-east-1. You might also apply a service control policy which only allows a certain size of EC2 instance to be used within the account. Service control policies are a really powerful feature for any larger, more complex AWS deployments. The critical thing to understand about service control policies is they don't grant any permissions. Service control policies are just a boundary. They define the limit of what is and isn't allowed within the account, but they don't grant permissions. You still need to give identities within that AWS account permissions to AWS resources, but any SCPs will limit the permissions that can be assigned to individual identities.
You can use service control policies in two ways. You can block by default and allow only certain services, which is an allow list, or you can allow by default and block access to certain services, which is a deny list. The default is a deny list. When you enable SCPs on your organization, AWS applies a default policy called FullAWSAccess to the organization and all OUs within it. This means that in the default implementation, service control policies have no effect, since nothing is restricted. As a reminder, service control policies don't grant permissions, but when SCPs are enabled there is an implicit default deny, just like with IAM policies; if there were no initial allow, everything would be denied. So the default is this full access policy, which essentially means no restrictions. It has the effect of making SCPs a deny list architecture, so you need to add any restrictions that you want to apply to AWS accounts within the organization. For example, you could add another policy, such as one called DenyS3, which adds a deny for the entire set of S3 API operations, effectively denying S3. Remember that SCPs don't actually grant any access rights; they establish which permissions can be granted in an account. The same priority rules apply as with IAM policies: explicit deny, then explicit allow, then implicit deny. Anything explicitly allowed in an SCP is a service which can have access granted to identities within that account; if there's an explicit deny within an SCP, that service cannot be granted, because explicit deny always wins. And in the absence of either, if we didn't have this FullAWSAccess policy in place, there would be an implicit deny, which blocks access to everything.
The benefit of using deny lists is that your foundation is a wildcard allow, so all actions on all resources. As AWS extends the range of products and services available inside the platform, this baseline full-access allow automatically expands to cover them, so it's fairly low admin overhead; you simply add an explicit deny for any services you want to block. In certain situations, you might need to be more conscious about usage in your accounts, and that's where you'd use allow lists. Implementing allow lists is a two-part architecture. First, you remove the FullAWSAccess policy, which means only the implicit default deny is in place and active. Then you add any services which you want to allow into a new policy, in this case S3 and EC2. So in this architecture, we wouldn't have FullAWSAccess; we would be explicitly allowing S3 and EC2 access. No matter what permissions identities in this account are granted, they would only ever be allowed to access S3 and EC2. This is more secure, because you have to explicitly state which services users in those accounts can be allowed to access, but it's much easier to make a mistake and block access to services you didn't intend to. It's also much more admin overhead, because you have to add services as your business requirements dictate; you can't simply have access to everything and deny the services you don't want. With this type of architecture, you have to explicitly add each and every service which you want identities within the account to be able to access. Generally, I would suggest using a deny list architecture because, simply put, it's much lower admin overhead.
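To make the allow list architecture concrete, here is a sketch of the kind of SCP it would use: FullAWSAccess detached, and a single statement allowing only S3 and EC2. The exact actions are an assumption consistent with the example in the text:

```python
import json

# Allow-list SCP sketch: only S3 and EC2 can ever be granted to
# identities in accounts this policy applies to. Every other
# service falls through to the implicit default deny.
allow_list_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:*", "ec2:*"],
            "Resource": "*",
        }
    ],
}

print(json.dumps(allow_list_scp, indent=2))
```

Contrast this with the deny list approach, where the wildcard allow stays in place and only specific denies are added.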
Before we go into a demo, I want to visually show you how SCPs affect permissions. This is visually how SCPs impact permissions within an AWS account. In the left orange circle, this represents the different services that have been granted access to identities in an account using identity policies. On the right in red, this represents which services an SCP allows access to. So the SCP states that the three services in the middle and the service on the right are allowed access as far as the SCP is concerned, and the identity policies which were applied to identities within the account, so the orange circle on the left, grant access to four different services: the three in the middle and the one on the left.
Only permissions which are allowed within identity policies in the account and are allowed by a service control policy are actually active. On the right, this access permission has no effect because while it's allowed within an SCP, an SCP doesn't grant access to anything; it just controls what can and can't be allowed by identity policies within that account. Because no identity policy allows access to this resource, it has no effect. On the left, this particular access permission is allowed within an identity policy, but it's not effectively allowed because it's not allowed within an SCP. So only things which appear in both the identity policy and an SCP are actually allowed. In this case, this particular access permission on the left has no effect because it's not within a service control policy, so it's denied.
At an associate level, this is what you need to know for the exam. It's just simply understanding that your effective permissions for identities within an account are the overlap between any identity policies and any applicable SCPs. This is going to make more sense if you experience it with a demo, so this is what we're going to do next. Now that you've set up the AWS organization for the Animals4life business, it's time to put some of this into action. So I'm going to finish this lesson here and then in the next lesson, which is a demo, we're going to continue with the practical part of implementing SCPs. So go ahead and complete this video, and when you're ready, I'll look forward to you joining me in the next.
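The overlap rule can be reduced to a one-line set intersection. The service names here are hypothetical, chosen to mirror the Venn diagram described above (three services in the overlap, one on each side):

```python
# Effective permissions are the intersection of what identity
# policies grant and what the applicable SCP allows.
identity_policy_allows = {"s3", "ec2", "lambda", "dynamodb"}  # left circle
scp_allows = {"s3", "ec2", "lambda", "rds"}                   # right circle

effective = identity_policy_allows & scp_allows

# dynamodb is granted but not allowed by the SCP: denied.
# rds is allowed by the SCP but never granted: no effect.
print(sorted(effective))  # ['ec2', 'lambda', 's3']
```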
learn.cantrill.io
Welcome back! In this demo lesson, you're going to create the AWS account structure which you'll use for the remainder of the course. At this point, you need to log in to the general AWS account. I’m currently logged in as the IAM admin user of my general AWS account, with the Northern Virginia region selected.
You’ll need either two different web browsers or a single web browser like Firefox that supports different sessions because we’ll be logged into multiple AWS accounts at once. The first task is to create the AWS organization. Since I'm logged in to a standard AWS account that isn’t part of an AWS organization, it’s neither a management account nor a member account. We need to move to the AWS Organizations part of the console and create the organization.
To start, go to "Find Services," type "Organizations," and click to move to the AWS Organizations console. Once there, click "Create Organization." This will begin the process of creating the AWS organization and convert the standard account into the management account of the organization. Click on "Create Organization" to complete the process. Now, the general account is the management account of the AWS organization.
You might see a message indicating that a verification email has been sent to the email address associated with the general AWS account. Click the link in that email to verify the address and continue using AWS Organizations. If you see this notification, verify the email before proceeding. If not, you can continue.
Now, open a new web browser or a browser session like Firefox and log in to the production AWS account. Ensure this is a separate session; if unsure, use a different browser to maintain logins to both the management and production accounts. I’ll log in to the IAM admin user of the production AWS account.
With the production AWS account logged in via a separate browser session, copy the account ID for the production AWS account from the account dropdown. Then, return to the browser session with the general account, which is now the management account of the organization. We’ll invite the production AWS account into this organization.
Click on "Add Account," then "Invite Account." Enter either the email address used while signing up or the account ID of the production account. I’ll enter the account ID. If you’re inviting an account you administer, no notes are needed. However, if the account is administered by someone else, you may include a message. After entering the email or account ID, scroll down and click "Send Invitation."
Depending on your AWS account, you might receive an error message about too many accounts within the organization. If so, log a support request to increase the number of allowed accounts. If no error message appears, the invite process has begun.
Next, accept the invite from the production AWS account. Switch to the browser session where you're logged into the production AWS account, move to the Organizations console, and click "Invitations" on the middle left. You should see the invitation sent to the production AWS account. Click "Accept" to complete the process of joining the organization. Now, the production account is a member of the AWS organization.
To verify, return to the general account tab and refresh. You should now see two AWS accounts: the general and the production accounts. Next, I’ll demonstrate how to role switch into the production AWS account, now a member of the organization.
When adding an account to an organization, you can either invite an existing account or create a new one within the organization. If creating a new account, a role is automatically created for role switching. If inviting an existing account, you need to manually add this role.
To do this, switch to the browser or session where you're logged into the production AWS account. Click on the services search box, type IAM, and move to the IAM console to create IAM roles. Click on "Create Role," select "Another AWS Account," and enter the account ID of the general AWS account, which is now the management account.
Copy the account ID of the general AWS account into the account ID box, then click "Next." Attach the "AdministratorAccess" policy to this role. On the next screen, name the role "OrganizationAccountAccessRole" with uppercase O, A, A, and R, and note that "Organization" uses the U.S. spelling with a Z. Click "Create Role."
In the role details, select "Trust Relationships" to verify that the role trusts the account ID of your general AWS account, which allows identities within the general account to assume this role.
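The trust relationship you're verifying is a standard IAM trust policy; a sketch of its expected shape is below. The account ID 111111111111 is a placeholder for your general (management) account's ID:

```python
import json

# Sketch of the trust policy on OrganizationAccountAccessRole.
# Trusting the account root ARN means any identity in the general
# account with sts:AssumeRole permissions can assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

The permissions side (what the role can do once assumed) comes from the attached AdministratorAccess managed policy, which is separate from this trust policy.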
Next, switch back to the general AWS account. Copy the account ID for the production AWS account because we will switch into it using role switch. In the AWS console, click on the account dropdown and select "Switch Roles." Paste the production account ID into the account ID box, and enter the role name "OrganizationAccountAccessRole" with uppercase O, A, A, and R.
For the display name, use "Prod" for production, and pick red as the color for easy identification. Click "Switch Role" to switch into the production AWS account. You’ll see the red color and "Prod" display name indicating a successful switch.
To switch back to the general account, click on "Switch Back." In the role history section, you can see shortcuts for switching roles. Click "Prod" to switch back to the production AWS account using temporary credentials granted by the assumed role.
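Under the hood, the console's role switch performs an sts:AssumeRole call against a role ARN built from the target account ID and the role name, which is why both values are needed in the switch role dialogue. A minimal sketch of that ARN construction (the account ID is a placeholder, not a real account):

```python
def switch_role_arn(account_id: str, role_name: str) -> str:
    """Build the IAM role ARN the console assumes during role switch."""
    return f"arn:aws:iam::{account_id}:role/{role_name}"

arn = switch_role_arn("222222222222", "OrganizationAccountAccessRole")
print(arn)  # arn:aws:iam::222222222222:role/OrganizationAccountAccessRole
```

The temporary credentials returned by that call are what you operate with while the console shows the "Prod" display name.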
Now, let’s create the development AWS account within our organization. Close the browser window or tab with the production AWS account as it’s no longer needed. Return to the AWS Organizations console, click "Add Account," and then "Create Account." Name the account "Development," following the same naming structure used for general and production accounts.
Provide a unique email address for the development AWS account. Use the same email structure you’ve used for previous accounts, such as "Adrian+TrainingAWSDevelopment" for consistency.
In the box for the role name, use "OrganizationAccountAccessRole" with uppercase O, A, A, and R, and the U.S. spelling. Click "Create" to create the development account. If you encounter an error about too many accounts, you might need to request an increase in the account limit.
The development account will be created within the organization, and this may take a few minutes. Refresh to see the new development account with its own account ID. Copy this account ID for the switch role dialogue.
Click on the account dropdown, select "Switch Roles," and enter the new development account ID. For the role name, use "OrganizationAccountAccessRole" and for the display name, use "Dev" for development with yellow as the color for distinction. Click "Switch Role" to switch into the development AWS account.
In the AWS console, you’ll see the new development account. You can switch directly between the general, production, and development accounts using role switch shortcuts. AWS automatically created the "OrganizationAccountAccessRole" in the development account.
In summary, you now have three AWS accounts: the general AWS account (management account), the production AWS account, and the development AWS account. This completes the account structure for the course. Complete this video, and I'll look forward to seeing you in the next lesson.
learn.cantrill.io
Welcome to this lesson, where I'll be introducing AWS Organizations. AWS Organizations is a product that allows larger businesses to manage multiple AWS accounts in a cost-effective way with little to no management overhead.
Organizations is a product that has evolved significantly over the past few years, and it's worthwhile to step through that evolution to understand all of its different features. We’ve got a lot to cover, so let's jump in and get started.
Without AWS Organizations, many large businesses would face the challenge of managing numerous AWS accounts. In the example onscreen, there are four accounts, but I've worked with some larger enterprises with hundreds of accounts and have heard of even more. Without AWS Organizations, each of these accounts would have its own pool of IAM users as well as separate payment methods. Beyond 5 to 10 accounts, this setup becomes unwieldy very quickly.
AWS Organizations is a simple product to understand. You start with a single AWS account, which I'll refer to as a standard AWS account from now on. A standard AWS account is an AWS account that is not part of an organization. Using this standard AWS account, you create an AWS Organization.
It’s important to understand that the organization isn't created within this account; you're simply using the account to create the organization. This standard AWS account that you use to create the organization then becomes the Management Account for the organization. The Management Account used to be called the Master Account. If you hear either of these terms—Management Account or Master Account—just know that they mean the same thing.
This is a key point to understand with regards to AWS Organizations because the Management Account is special for two reasons, which I’ll explain in this lesson. For now, I’ll add a crown to this account to indicate that it’s the Management Account and to help you distinguish it from other AWS accounts.
Using this Management Account, you can invite other existing standard AWS accounts into the organization. Since these are existing accounts, they need to approve the invites to join the organization. Once they do, those Standard Accounts will become part of the AWS Organization.
When standard AWS accounts join an AWS Organization, they change from being Standard Accounts to being Member Accounts of that organization. Organizations have one and only one Management or Master Account and then zero or more Member Accounts.
You can create a structure of AWS accounts within an organization, which is useful if you have many accounts and need to group them by business units, functions, or even the development stage of an application. The structure within AWS Organizations is hierarchical, forming an inverted tree.
At the top of this tree is the root container of the organization. This is just a container for AWS accounts at the top of the organizational structure. Don’t confuse this with the Account Root User, which is the admin user of an AWS account. The organizational root is just a container within an AWS Organization, which can contain AWS accounts, including Member Accounts or the Management Account.
As well as containing accounts, the organizational root can also contain other containers, known as organizational units (OUs). These organizational units can contain AWS accounts, Member Accounts, or the Management Account, or they can contain other organizational units, allowing you to build a complex nested AWS account structure within Organizations.
Again, please don’t confuse the organizational root with the AWS Account Root User. The AWS Account Root User is specific to each AWS account and provides full permissions over that account. The root of an AWS Organization is simply a container for AWS accounts and organizational units and is the top level of the hierarchical structure within AWS Organizations.
One important feature of AWS Organizations is consolidated billing. With the example onscreen now, there are four AWS accounts, each with its own billing information. Once these accounts are added to an AWS Organization, the individual billing methods for the Member Accounts are removed. Instead, the Member Accounts pass their billing through to the Management Account of the organization.
In the context of consolidated billing, you might see the term Payer Account. The Payer Account is the AWS account that contains the payment method for the organization. So, if you see Master Account, Management Account, or Payer Account, know that within AWS Organizations, they all refer to the same thing: the account used to create the organization and the account that contains the payment method for all accounts within the AWS Organization.
Using consolidated billing within an AWS Organization means you receive a single monthly bill contained within the Management Account. This bill covers the Management Account and all Member Accounts of the organization. One bill contains all the billable usage for all accounts within the AWS Organization, removing a significant amount of financial admin overhead for larger businesses. This alone would be worth creating an organization for most larger enterprises.
But it gets better. With AWS, certain services become cheaper the more you use them, and for certain services, you can pay in advance for cheaper rates. When using Organizations, these benefits are pooled, allowing the organization to benefit as a whole from the spending of each AWS account within it.
AWS Organizations also features a service called Service Control Policies (SCPs), which allows you to restrict what AWS accounts within the organization can do. These are important, and I’ll cover them in their own dedicated lesson, which is coming up soon. I wanted to mention them now as a feature of AWS Organizations.
Before we go through a demo where we'll create an AWS Organization and set up the final account structure for this course, I want to cover two other concepts. You can invite existing accounts into an organization, but you can also create new accounts directly within it. All you need is a valid, unique email address for the new account, and AWS will handle the rest. Creating accounts directly within the organization avoids the invite process required for existing accounts.
Using an AWS Organization changes what is best practice in terms of user logins and permissions. With Organizations, you don’t need to have IAM Users inside every single AWS account. Instead, IAM roles can be used to allow IAM Users to access other AWS accounts. We’ll implement this in the following demo lesson. Best practice is to have a single account for logging in, which I’ve shown in this diagram as the Management Account of the organization. Larger enterprises might keep the Management Account clean and have a separate account dedicated to handling logins.
Both approaches are fine, but be aware that the architectural pattern is to have a single AWS account that contains all identities for logging in. Larger enterprises might also have their own existing identity system and may use Identity Federation to access this single identity account. You can either use internal AWS identities with IAM or configure AWS to allow Identity Federation so that your on-premises identities can access this designated login account.
From there, we can use this account with these identities and utilize a feature called role switching. Role switching allows users to switch roles from this account into other Member Accounts of the organization. This process assumes roles in these other AWS accounts. It can be done from the console UI, hiding much of the technical complexity, but it’s important to understand how it works. Essentially, you either log in directly to this login account using IAM identities or use Identity Federation to gain access to it, and then role switch into other accounts within the organization.
I’ll discuss this in-depth as we progress through the course. The next lesson is a demo where you’ll implement this yourself and create the final AWS account structure for the remainder of the course.
Okay, so at this point, it's time for a demo. As I mentioned, you'll be creating the account structure you'll use for the rest of the course. At the start, I demoed creating AWS accounts, including a general AWS account and a production AWS account. In the next lesson, I’ll walk you through creating an AWS Organization using this general account, which will become the Management Account for the AWS Organization. Then, you'll invite the existing production account into the organization, making it a Member Account. Finally, you'll create a new account within the organization, which will be the Development Account.
I’m excited for this, and it’s going to be both fun and useful for the exam. So, go ahead and finish this video, and when you're ready, I look forward to you joining me in the next lesson, which will be a demo.
-
Welcome back.
In this lesson, I want to continue immediately from the last one by discussing when and where you might use IAM roles. By talking through some good scenarios for using roles, I want to make sure that you're comfortable with selecting these types of situations where you would choose to use an IAM role and where you wouldn't, because that's essential for real-world AWS usage and for answering exam questions correctly.
So let's get started.
One of the most common uses of roles within the same AWS account is for AWS services themselves. AWS services operate on your behalf and need access rights to perform certain actions. An example of this is AWS Lambda. Now, I know I haven't covered Lambda yet, but it's a function as a service product. What this means is that you give Lambda some code and create a Lambda function. This function, when it runs, might do things like start and stop EC2 instances, perform backups, or run real-time data processing. What it does exactly isn't all that relevant for this lesson. The key thing, though, is that a Lambda function, as with most AWS things, has no permissions by default. A Lambda function is not an AWS identity. It's a component of a service, and so it needs some way of getting permissions to do things when it runs. Running a Lambda function is known as a function invocation or a function execution using Lambda terminology.
Anything that's not an AWS identity, such as an application or a script running on a piece of compute hardware somewhere, would normally need to be given permissions on AWS using access keys. Rather than hard-coding access keys into your Lambda function, there's a better way. To provide these permissions, we can create an IAM role known as a Lambda execution role. This execution role has a trust policy which trusts the Lambda service, meaning Lambda is allowed to assume the role whenever a function is executed. The role also has a permissions policy which grants access to AWS products and services.
When the function runs, it uses the sts:AssumeRole operation, and the Security Token Service (STS) generates temporary security credentials. These temporary credentials are used by the runtime environment in which the Lambda function runs to access AWS resources, based on the role's permissions policy. The code runs in a runtime environment, and it's the runtime environment that assumes the role: it receives the temporary security credentials, and then the whole environment which the code is running inside can use those credentials to access AWS resources.
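To make the trust policy concrete, here's the shape of a Lambda execution role's trust policy expressed as a Python dict, with a toy check of which service principal it allows. This is a sketch of the idea only; in reality AWS evaluates the policy, not your code.

```python
# Hedged sketch: the shape of a Lambda execution role's trust policy, plus a
# toy check of who is allowed to assume the role. Real evaluation is done by
# AWS, not by your code.

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

def can_assume(service: str, policy: dict) -> bool:
    """Toy check: is this service principal allowed by the trust policy?"""
    for stmt in policy["Statement"]:
        if (stmt["Effect"] == "Allow"
                and stmt.get("Principal", {}).get("Service") == service
                and stmt["Action"] == "sts:AssumeRole"):
            return True
    return False

print(can_assume("lambda.amazonaws.com", trust_policy))  # True
print(can_assume("ec2.amazonaws.com", trust_policy))     # False: not trusted
```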
So why would you use a role for this? What makes this scenario perfect for using a role? Well, if we didn't use a role, you would need to hard-code permissions into the Lambda function by explicitly providing access keys for that function to use. Where possible, you should avoid doing that because, A, it's a security risk, and B, it causes problems if you ever need to change or rotate those access keys. It's always better for AWS products and services, where possible, to use a role, because when a role is assumed, it provides a temporary set of credentials with enough time to complete a task, and then these are discarded.
For a given Lambda function, you might have zero copies running at once, one copy, 50 copies, a hundred copies, or even more. Because you can't determine this number, because it's unknown, remember my rule from the previous lesson: if you don't know the number of principals, or if it's multiple or an uncertain number, that suggests a role might be the ideal identity to use. In this case, using a role and letting Lambda obtain temporary credentials is the ideal way of providing it with permissions. When an AWS service does something on your behalf, a role is always the preferred option, because you don't need to provide any static credentials.
Okay, so let's move on to the next scenario.
Another situation where roles are useful is emergency or out-of-the-usual situations. Here’s a familiar scenario that you might find in a workplace. This is Wayne, and Wayne works in a business's service desk team. This team is given read-only access to a customer's AWS account so that they can keep an eye on performance. The idea is that anything more risky than this read-only level of access is handled by a more senior technical team. We don't want to give Wayne's team long-term permissions to do anything more destructive than this read-only access, but there are always going to be situations which occur when we least want them, normally 3:00 a.m. on a Sunday morning, when a customer might call with an urgent issue where they need Wayne's help to maybe stop or start an instance, or maybe even terminate an EC2 instance and recreate it.
So 99% of the time, Wayne and his team are happy with this read-only access, but there are situations when he needs more. This is what's called a break-glass situation, named after the real-world practice of keeping a key for something behind glass. It might be a key for a room that a certain team doesn't normally have access to, or maybe a safe or a filing cabinet. Whatever it is, the glass provides a barrier, meaning that when people break it, they really mean to break it. It's a confirmation step, so if you break a piece of glass to get a key, there's an intention behind it. Anyone can break the glass and retrieve the key, but having the glass means the action only happens when it's really needed. At other times, whatever the key is for remains locked. And you can also tell when it's been used and when it hasn't.
A role can perform the same thing inside an AWS account. Wayne can assume an emergency role when absolutely required. When he does, he'll gain additional permissions based on the role's permissions policy. For a short time, Wayne will, in effect, become the role. This access will be logged and Wayne will know to only use the role under exceptional circumstances. Wayne’s normal permissions can remain at read-only, which protects him and the customer, but he can obtain more if required when it’s really needed. So that’s another situation where a role might be a great solution.
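One way a break-glass role is often hardened (my assumption here, not something specific to Wayne's scenario) is to require MFA in the role's trust policy. The account ID and user name below are made up; `aws:MultiFactorAuthPresent` is a real IAM condition key. The evaluation function is a toy model of what AWS does.

```python
# Hedged sketch: a break-glass role's trust policy might require MFA before the
# role can be assumed. Account ID and user name are placeholders; the
# aws:MultiFactorAuthPresent condition key is a real IAM condition key.

break_glass_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/wayne"},
        "Action": "sts:AssumeRole",
        "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
    }],
}

def assume_allowed(principal_arn: str, mfa_present: bool, policy: dict) -> bool:
    """Toy evaluation: the principal must match and the MFA condition must hold."""
    stmt = policy["Statement"][0]
    principal_ok = stmt["Principal"]["AWS"] == principal_arn
    condition = stmt.get("Condition", {}).get("Bool", {})
    mfa_ok = condition.get("aws:MultiFactorAuthPresent") != "true" or mfa_present
    return stmt["Effect"] == "Allow" and principal_ok and mfa_ok

wayne = "arn:aws:iam::111122223333:user/wayne"
print(assume_allowed(wayne, True, break_glass_trust_policy))   # True: MFA present
print(assume_allowed(wayne, False, break_glass_trust_policy))  # False: no MFA
```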
Another scenario when roles come in handy is when you're adding AWS into an existing corporate environment. You might have an existing physical network and an existing provider of identities, known as an identity provider, that your staff use to log into various systems. For the sake of this example, let's just say that it's Microsoft Active Directory. In this scenario, you might want to offer your staff single sign-on, known as SSO, allowing them to use their existing logins to access AWS. Or you might have upwards of 5,000 user accounts. Remember, there's the 5,000 IAM user limit. So for a corporation with more than 5,000 staff, you can't offer each of them an IAM user. That is beyond the capabilities of IAM.
Roles are often used when you want to reuse your existing identities for use within AWS. Why? Because external accounts can’t be used directly. You can’t access an S3 bucket directly using an Active Directory account. Remember this fact. External accounts or external identities cannot be used directly to access AWS resources. You can’t directly use Facebook, Twitter, or Google identities to interact with AWS. There is a separate process which allows you to use these external identities, which I’ll be talking about later in the course.
Architecturally, what happens is you allow an IAM role inside your AWS account to be assumed by one of the external identities, which is in Active Directory in this case. When the role is assumed, temporary credentials are generated and these are used to access the resources. There are ways that this is hidden behind the console UI so that it appears seamless, but that's what happens behind the scenes. I'll be covering this in much more detail later in the course when I talk about identity federation, but I wanted to introduce it here because it is one of the major use cases for IAM roles.
Now, the reason roles are so important when an existing identity provider such as Active Directory is involved is that, remember, there is this 5,000 IAM user limit per account. So if your business has more than 5,000 user accounts, you can't simply create an IAM user for each of them, even if you wanted to. 5,000 is a hard limit; it can't be changed. And even if you could create more than 5,000 IAM users, would you actually want to manage that many extra identities? Using a role in this way, so giving permissions to an external identity provider and allowing external identities to assume the role, is called ID Federation. It means you have a small number of roles to manage, and external identities can use these roles to access your AWS resources.
Another common situation where you might use roles is if you're designing the architecture for a popular mobile application. Maybe it's a ride-sharing application which has millions of users. The application needs to store and retrieve data from a database product in AWS, such as DynamoDB. Now, I've already explained two very important but related concepts on the previous screen. Firstly, that when you interact with AWS resources, you need to use an AWS identity. And then secondly, that there’s this 5,000 IAM user limit per account. So designing an application with this many users which needs access to AWS resources, if you could only use IAM users or identities in AWS, it would be a problem because of this 5,000 user limit. It’s a hard limit and it can’t be raised.
Now, this is a problem which can be fixed with a process called Web Identity Federation, which uses IAM roles. Most mobile applications that you’ve used, you might have noticed they allow you to sign in using a web identity. This might be Twitter, Facebook, Google, and potentially many others. If we utilize this architecture for our web application, we can trust these identities and allow these identities to assume an IAM role. This is based on that role’s trust policy. So they can assume that role, gain access to temporary security credentials, and use those credentials to access AWS resources, such as DynamoDB. This is a form of Web Identity Federation, and I'll be covering it in much more detail later in the course.
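At the API level, Web Identity Federation uses the sts:AssumeRoleWithWebIdentity operation. Here's a hedged sketch of the parameters such a call might take; the role ARN, token value, and session name are all placeholders, and no AWS call is made.

```python
# Hedged sketch: the parameters a mobile app might pass to
# sts:AssumeRoleWithWebIdentity after a user signs in with a web identity
# provider (Google, Facebook, etc.). Role ARN and token are placeholders.

def build_web_identity_request(role_arn: str, id_token: str,
                               session_name: str) -> dict:
    """Parameters for an AssumeRoleWithWebIdentity call (no AWS call made here)."""
    return {
        "RoleArn": role_arn,
        "RoleSessionName": session_name,
        "WebIdentityToken": id_token,   # token issued by the identity provider
        "DurationSeconds": 3600,        # temporary credentials, 1 hour here
    }

# With boto3 this would be:
#   boto3.client("sts").assume_role_with_web_identity(**params)
params = build_web_identity_request(
    "arn:aws:iam::111122223333:role/RideShareAppRole",  # made-up role ARN
    "<id-token-from-provider>",
    "user-42",
)
print(sorted(params))
```

Note that this operation is called with no AWS credentials at all, which is exactly why nothing needs to be embedded in the app.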
The use of roles in this situation has many advantages. First, there are no AWS credentials stored in the application, which makes it a much more preferred option from a security point of view. If an application is exploited for whatever reason, there’s no chance of credentials being leaked, and it uses an IAM role which you can directly control from your AWS account. Secondly, it makes use of existing accounts that your customers probably already have, so they don't need yet another account to access your service. And lastly, it can scale to hundreds of millions of users and beyond. It means you don’t need to worry about the 5,000 user IAM limit. This is really important for the exam. There are very often questions on how you can architect solutions which will work for mobile applications. Using ID Federation, so using IAM roles, is how you can accomplish that. And again, I'll be providing much more information on ID Federation later in the course.
Now, one scenario I want to cover before we finish up this lesson is cross-account access. In an upcoming lesson, I’ll be introducing AWS Organizations and you will get to see this type of usage in practice. It’s actually how we work in a multi-account environment. Picture the scenario that's on screen now: two AWS accounts, yours and a partner account. Let’s say your partner organization offers an application which processes scientific data and they want you to store any data inside an S3 bucket that’s in their account. Your account has thousands of identities, and the partner IT team doesn’t want to create IAM users in their account for all of your staff. In this situation, the best approach is to use a role in the partner account. Your users can assume that role, get temporary security credentials, and use those to upload objects. Because the IAM role in the partner account is an identity in that account, using that role means that any objects that you upload to that bucket are owned by the partner account. So it’s a very simple way of handling permissions when operating between accounts.
Roles can be used cross-account to give access to individual resources like S3 in the onscreen example, or you can use roles to give access to a whole account. You’ll see this in the upcoming AWS Organization demo lesson. In that lesson, we’re going to configure it so a role in all of the different AWS accounts that we’ll be using for this course can be assumed from the general account. It means you won’t need to log in to all of these different AWS accounts. It makes multi-account management really simple.
I hope by this point you start to get a feel for when roles are used. Even if you’re a little vague, you will learn more as you go through the course. For now, just a basic understanding is enough. Roles are difficult to understand at first, so you’re doing well if you’re anything but confused at this point. I promise you, as we go through the course and you get more experience, it will become second nature.
So at this point, that’s everything I wanted to cover. Thanks for watching. Go ahead and complete this video, and when you're ready, join me in the next lesson.
-
Welcome back.
Over the next two lessons, I'll be covering a topic which is usually one of the most difficult identity-related topics in AWS to understand, and that's IAM roles. In this lesson, I'll step through how roles work, their architecture, and how you technically use a role. In the following lesson, I'll compare roles to IAM users and go into a little bit more detail on when you generally use a role, so some good scenarios which fit using an IAM role. My recommendation is that you watch both these lessons back to back in order to fully understand IAM roles.
So let's get started.
A role is one type of identity which exists inside an AWS account. The other type, which we've already covered, are IAM users. Remember the term "principal" that I introduced in the previous few lessons? This is a physical person, application, device, or process which wants to authenticate with AWS. We defined authentication as proving to AWS that you are who you say you are. If you authenticate, and if you are authorized, you can then access one or more resources.
I also previously mentioned that an IAM user is generally designed for situations where a single principal uses that IAM user. I’ve talked about the way that I decide if something should use an IAM user: if I can imagine a single thing—one person or one application—who uses an identity, then generally under most circumstances, I'd select to use an IAM user.
IAM roles are also identities, but they're used very differently from IAM users. A role is generally best suited to be used by an unknown number of principals, or by multiple principals, not just one. This might be multiple AWS users inside the same AWS account, or it could be humans, applications, or services inside or outside of your AWS account who make use of that role. If you can't identify the number of principals which use an identity, then it could be a candidate for an IAM role. Or if you have more than 5,000 principals, because of the number limit for IAM users, it could also be a candidate for an IAM role.
Roles are also generally used on a temporary basis. Something becomes that role for a short period of time and then stops. The role isn't something that represents you; a role represents a level of access inside an AWS account. It's a thing that can be used, short term, by other identities. These identities assume the role for a short time, they become that role, they use the permissions that the role has, and then they stop being that role. It's not like an IAM user, where you log in and it's a long-term representation of you. With a role, you essentially borrow the permissions for a short period of time.
I want to make a point of stressing that distinction. If you're an external identity—like a mobile application, maybe—and you assume a role inside my AWS account, then you become that role and you gain access to any access rights that that role has for a short time. You essentially become an identity in my account for a short period of time.
Now, this is the point where most people get a bit confused, and I was no different when I first learned about roles. What's the difference between logging in as a user and assuming a role? In both cases, you get the access rights that that identity has.
Before we get to the end of this pair of lessons, so this one and the next, I think it's gonna make a little bit more sense, and definitely, as you go through the course and get some practical exposure to roles, I know it's gonna become second nature.
IAM users can have identity permissions policies attached to them, either inline JSON or via attached managed policies. We know now that these control what permissions the identity gets inside AWS. So whether these policies are inline or managed, they're properly referred to as permissions policies—policies which grant, so allow or deny, permissions to whatever they’re associated with.
IAM roles have two types of policies which can be attached: the trust policy and the permissions policy. The trust policy controls which identities can assume that role. With the onscreen example, identity A is allowed to assume the role because identity A is allowed in the trust policy. Identity B is denied because that identity is not specified as being allowed to assume the role in the trust policy.
The trust policy can reference different things. It can reference identities in the same account, so other IAM users, other roles, and even AWS services such as EC2. A trust policy can also reference identities in other AWS accounts. As you'll learn later in the course, it can even allow anonymous usage of that role and other types of identities, such as Facebook, Twitter, and Google.
If a role gets assumed by something which is allowed to assume it, then AWS generates temporary security credentials and these are made available to the identity which assumed the role. Temporary credentials are very much like access keys, which I covered earlier in the course, but instead of being long-term, they're time-limited. They only work for a certain period of time before they expire. Once they expire, the identity will need to renew them by reassuming the role, and at that point, new credentials are generated and given to the identity again which assumed that role.
These temporary credentials will be able to access whatever AWS resources are specified within the permissions policy. Every time the temporary credentials are used, the access is checked against this permissions policy. If you change the permissions policy, the permissions of those temporary credentials also change.
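The time-limited nature of these credentials can be modelled very simply. This is a sketch of the idea only; real credentials come back from STS with an `Expiration` timestamp, and the one-hour session length below is just an illustrative assumption.

```python
# Hedged sketch: modelling the time-limited nature of temporary credentials.
# Real credentials come from STS with an Expiration timestamp; here we just
# simulate the "expired, so reassume the role" check. The 1-hour session
# length is an illustrative assumption.

from datetime import datetime, timedelta, timezone

def credentials_valid(expiration: datetime, now: datetime) -> bool:
    """Temporary credentials only work until their expiration time."""
    return now < expiration

issued_at = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
expiration = issued_at + timedelta(hours=1)

print(credentials_valid(expiration, issued_at + timedelta(minutes=30)))  # True
print(credentials_valid(expiration, issued_at + timedelta(hours=2)))     # False: reassume the role
```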
Roles are real identities and, just like IAM users, roles can be referenced within resource policies. So if a role can access an S3 bucket because a resource policy allows it or because the role permissions policy allows it, then anything which successfully assumes the role can also access that resource.
You’ll get a chance to use roles later in this section when we talk about AWS Organizations. We’re going to take all the AWS accounts that we’ve created so far and join them into a single organization, which is AWS’s multi-account management product. Roles are used within AWS Organizations to allow us to log in to one account in the organization and access different accounts without having to log in again. They become really useful when managing a large number of accounts.
When you assume a role, temporary credentials are generated by an AWS service called STS, the Security Token Service. sts:AssumeRole is the operation that's used to assume the role and get the credentials.
In this lesson, I focused on the technical aspect of roles—mainly how they work. I’ve talked about the trust policy, the permissions policy, and how, when you assume a role, you get temporary security credentials. In the next lesson, I want to step through some example scenarios of where roles are used, and I hope by the end of that, you’re gonna be clearer on when you should and shouldn’t use roles.
So go ahead, finish up this video, and when you’re ready, you can join me in the next lesson.
-
Welcome back and welcome to this demo of the functionality provided by IAM Groups.
What we're going to do in this demo is use the same architecture that we had in the IAM users demo, the Sally user and those two S3 buckets, but we're going to migrate the permissions from the Sally user itself to a group that Sally is a member of.
Before we get started, just make sure that you are logged in as the IAM admin user of the general AWS account. As always, you’ll need to have the Northern Virginia region selected.
Attached to this video is a demo files link that will download all of the files you’re going to use throughout the demo. To save some time, go ahead and click on that link and start the file downloading. Once it’s finished, go ahead and extract it; it will create a folder containing all of the files you’ll need as you move through the demo.
You should have deleted all of the infrastructure that you used in the previous demo lesson. So at this point, we need to go ahead and recreate it. To do that, attached to this lesson is a one-click deployment link. So go ahead and click that link. Everything is pre-populated, so you need to make sure that you put in a suitable password that doesn’t breach any password policy on your account. I’ve included a suitable default password with some substitutions, so that should be okay for all common password policies.
Scroll down to the bottom, click on the capabilities checkbox, and then create the stack. That’ll take a few moments to create, so I’m going to pause the video and resume it once that stack creation has completed.
Okay, so that’s created now. Click on Services and open the S3 console in a new tab. This can be a normal tab. Go to the Cat Pics bucket, click Upload, add file, locate the demo files folder that you downloaded and extracted earlier. Inside that folder should be a folder called Cat Pics. Go in there and then select merlin.jpg. Click on Open and Upload. Wait for that to finish.
Once it’s finished, go back to the console, go to Animal Pics, click Upload again, add files. This time, inside the Animal Pics folder, upload thaw.jpg. Click Upload. Once that’s done, go back to CloudFormation, click on Resources, and click on the Sally user. Inside the Sally user, click on Add Permissions, Attach Policies Directly, select the "Allow all S3 except cats" policy, click on Next, and then Add Permissions.
So that brings us to the point where we were in the IAM users demo lesson. That’s the infrastructure set back up in exactly the same way as we left the IAM users demo. Now we can click on Dashboard. You’ll need to copy the IAM signing users link for the general account. Copy that into your clipboard.
You’re going to need a separate browser, ideally, a fully separate browser. Alternatively, you can use a private browsing tab in your current browser, but it’s just easier to understand probably for you at this point in your learning if you have a separate browser window. I’m going to use an isolated tab because it’s easier for me to show you.
You’ll need to paste in this IAM URL because now we’re going to sign into this account using the Sally user. Go back to CloudFormation, click on Outputs, and you’ll need the Sally username. Copy that into your clipboard. Go back to this separate browser window and paste that in. Then, back to CloudFormation, go to the Parameters tab and get the password for the Sally user. Enter the password that you chose for Sally when you created the stack.
Then move across to the S3 console and just verify that the Sally user has access to both of these buckets. The easiest way of doing that is to open both of these animal pictures. We’ll start with Thor. Thor’s a big doggo, so it might take some time for him to load in. There we go, he’s loaded in. And the Cat Pics bucket. We get access denied because remember, Sally doesn’t have access to the Cat Pics bucket. That’s as intended.
Now we’ll go back to our other browser window—the one where we logged into the general account as the IAM admin user. This is where we’re going to make the modifications to the permissions. We’re going to change the permissions over to using a group rather than directly on the Sally user.
Click on the Resources tab first and select Sally to move across to the Sally user. Note how Sally currently has this managed policy directly attached to her user. Step one is to remove that. So remove this managed policy from Sally. Detach it. This now means that Sally has no permissions on S3. If we go back to the separate browser window where we’ve got Sally logged in and then hit refresh, we see she doesn’t have any permissions now on S3.
Now back to the other browser, back to the one where we logged in as IAM admin, click on User Groups. We’re going to create a Developers group. Click on Create New Group and call it Developers. That’s the group name. Then, down at the bottom here, this is where we can attach a managed policy to this group. We’re going to attach the same managed policy that Sally had previously directly on her user—Allow all S3 except cats.
Type "allow" into the filter box and press Enter. Then check the box to select this managed policy. We could also directly at this stage add users to this group, but we’re not going to do that. We’re going to do that as a separate process. So click on 'Create Group'.
So that’s the Developers group created. Notice how there are not that many steps to create a group, simply because it doesn’t offer that much in the way of functionality. Open up the group. The only options you see here are 'User Membership' and any attached permissions. Now, as with a user, you can attach inline policies or managed policies, and we’ve got the managed policy.
What we’re going to do next is click on Users and then Add Users to Group. We’re going to select the Sally IAM user and click on Add User. Now our IAM user Sally is a member of the Developers group, and the Developers group has this attached managed policy that allows them to access everything on S3 except the Cat Pics bucket.
Now if I move back to my other browser window where I've got the Sally user logged in and then refresh, now that the Sally user has been added to that group, we've got permissions again over S3. If I try to access the Cat Pics bucket, I won't be able to, because the managed policy that the Developers group has doesn't include access for this. But if I open the Animal Pics bucket and open Thor again, Thor's a big doggo, so it'll take a couple of seconds, it will load in that picture absolutely fine.
So there we go, there’s Thor. That’s pretty much everything I wanted to demonstrate in this lesson. It’s been a nice, quick demo lesson. All we’ve done is create a new group called Developers, added Sally to this Developers group, removed the managed policy giving access to S3 from Sally directly, and added it to the Developers group that she’s now a member of. Note that no matter whether the policy is attached to Sally directly or attached to a group that Sally is a member of, she still gets those permissions.
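The console steps from this demo map onto a handful of IAM API calls. Here's a hedged boto3-style sketch written as a dry-run planner: with no client supplied it only records the operations it would make, and the policy ARN is made up for illustration.

```python
# Hedged sketch: the demo's console steps as IAM API calls. This dry-run
# planner records the boto3 operations that would be made; pass a real boto3
# IAM client instead of None to actually run them. The policy ARN is made up.

POLICY_ARN = "arn:aws:iam::111122223333:policy/AllowAllS3ExceptCats"

def migrate_user_policy_to_group(iam, user, group, policy_arn):
    """Detach a policy from a user, create a group with it, add the user."""
    steps = [
        ("detach_user_policy", {"UserName": user, "PolicyArn": policy_arn}),
        ("create_group", {"GroupName": group}),
        ("attach_group_policy", {"GroupName": group, "PolicyArn": policy_arn}),
        ("add_user_to_group", {"GroupName": group, "UserName": user}),
    ]
    for op, kwargs in steps:
        if iam is not None:          # dry run when no client is supplied
            getattr(iam, op)(**kwargs)
    return [op for op, _ in steps]

plan = migrate_user_policy_to_group(None, "sally", "Developers", POLICY_ARN)
print(plan)
```

Either way, whether done via the console or the API, the order matters in the same way: the policy comes off the user and onto the group, and the user's effective permissions follow the group membership.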
That’s everything I wanted to cover in this demo lesson. So before we finish up, let’s just tidy up our account. Go to Developers and then detach this managed policy from the Developers group. Detach it, then go to Groups and delete the Developers group because it wasn’t created as part of the CloudFormation template.
Then, as the IAM admin user, open up the S3 console. We need to empty both of these buckets. Select Cat Pics, click on Empty. You’ll need to type or copy and paste 'Permanently Delete' into that box and confirm the deletion. Click Exit. Then select the Animal Pics bucket and do the same process. Copy and paste 'Permanently Delete' and confirm by clicking on Empty and then Exit.
Now that we’ve done that, we should have no problems opening up CloudFormation, selecting the IAM stack, and then hitting Delete. Note if you do have any errors deleting this stack, just go into the stack, select Events, and see what the status reason is for any of those deletion problems. It should be fairly obvious if it can’t delete the stack because it can’t delete one or more resources, and it will give you the reason why.
That being said, at this point, assume the stack deletions worked successfully, and we’ve cleaned up our account. That’s everything I wanted to cover in this demo lesson. Go ahead, complete this video, and when you’re ready, I’ll see you in the next lesson.
-
Welcome back.
In this lesson, I want to briefly cover IAM groups, so let's get started.
IAM groups, simply put, are containers for IAM users. They exist to make organizing large sets of IAM users easier. You can't log in to IAM groups, and IAM groups have no credentials of their own. The exam might try to trick you on this one, so it's definitely important that you remember you cannot log into a group. If a question or answer suggests logging into a group, it's just simply wrong. IAM groups have no credentials, and you cannot log into them. So they're used solely for organizing IAM users to make management of IAM users easier.
So let's look at a visual example. We've got an AWS account, and inside it we've got two groups: Developers and QA. In the Developers group, we've got Sally and Mike. In the QA group, we've got Nathalie and Sally. Now, the Sally user—so the Sally in Developers and the Sally in the QA group—that's the same IAM user. An IAM user can be a member of multiple IAM groups. So that's important to remember for the exam.
Groups give us two main benefits. First, they allow effective administration-style management of users. We can make groups that represent teams, projects, or any other functional groups inside a business and put IAM users into those groups. This helps us organize.
Now, the second benefit, which builds off the first, is that groups can actually have policies attached to them. This includes both inline policies and managed policies. In the example on the screen now, the Developers group has a policy attached, as does the QA group. There’s also nothing to stop IAM users, who are themselves within groups, from having their own inline or managed policies. This is the case with Sally.
When an IAM user such as Sally is added as a member of a group—let’s say the Developers group—that user gets the policies attached to that group. Sally gains the permissions of any policies attached to the Developers group and any other groups that that user is a member of. So Sally also gets the policies attached to the QA group, and Sally has any policies that she has directly.
With this example, Sally is a member of the Developers group, which has one policy attached, a member of the QA group with an additional policy attached, and she has her own policy. AWS merges all of those into a set of permissions. So effectively, she has three policies associated with her user: one directly, and one from each of the group memberships that her user has.
When you're thinking about the allow or deny permissions in policy statements for users that are in groups, you need to consider those which apply directly to the user and their group memberships. Collect all of the policy allows and denies that a user has directly and from their groups, and apply the same deny-allow-deny rule to them as a collection. Evaluating whether you're allowed or denied access to a resource doesn’t become any more complicated; it’s just that the source of those allows and denies can broaden when you have users that are in multiple IAM groups.
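That collection-then-evaluation process can be sketched in a few lines. This is a toy model of the deny-allow-deny rule, not AWS's actual evaluation engine; the policies and actions are made up for illustration.

```python
# Hedged sketch: the deny-allow-deny rule applied to the combined statements
# from a user's own policies and all of their groups. A toy model only, not
# AWS's actual policy evaluation engine.

def is_allowed(action: str, policy_sets: list) -> bool:
    """Collect statements from every source (user + each group) and evaluate."""
    statements = [s for policy in policy_sets for s in policy]
    matching = [s for s in statements if action in s["Action"]]
    if any(s["Effect"] == "Deny" for s in matching):
        return False                     # 1. explicit deny always wins
    if any(s["Effect"] == "Allow" for s in matching):
        return True                      # 2. then an explicit allow
    return False                         # 3. otherwise the implicit (default) deny

sally_direct = [{"Effect": "Allow", "Action": ["s3:GetObject"]}]
developers   = [{"Effect": "Allow", "Action": ["ec2:StartInstances"]}]
qa           = [{"Effect": "Deny",  "Action": ["s3:GetObject"]}]

print(is_allowed("ec2:StartInstances", [sally_direct, developers, qa]))  # True
print(is_allowed("s3:GetObject", [sally_direct, developers, qa]))        # False: deny wins
print(is_allowed("iam:CreateUser", [sally_direct, developers, qa]))      # False: implicit deny
```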
I mentioned in the last lesson that an IAM user can be a member of up to 10 groups, and there is a 5,000 IAM user limit per account. Neither of those is changeable; they are hard limits. There's no effective limit on the number of users in a single IAM group, so you could have all 5,000 IAM users in an account as members of a single IAM group.
Another common area of trick questions in the exam is around the concept of an all-users group. There isn't actually a built-in all-users group inside IAM, so you don’t have a single group that contains all of the members of that account like you do with some other identity management solutions. In IAM, you could create a group and add all of the users in that account into the group, but you would need to create and manage it yourself. So that doesn’t exist natively.
Another really important limitation of groups is that you can’t have any nesting. You can’t have groups within groups. IAM groups contain users and IAM groups can have permissions attached. That’s it. There’s no nesting, and groups cannot be logged into; they don’t have any credentials.
Now, there is a limit of 300 groups per account, but this can be increased with a support ticket.
There’s also one more point that I want to make at this early stage in the course. This is something that many other courses tend to introduce later on or at a professional level, but it's important that you understand this from the very start. I'll show you later in the course how policies can be attached to resources, for example, S3 buckets. These policies, known as resource policies, can reference identities. For example, a bucket could have a policy associated with it that allows Sally access to that bucket. That’s a resource policy. It controls access to a specific resource and allows or denies identities to access that bucket.
It does this by referencing these identities using an ARN, or Amazon Resource Name. IAM users and IAM roles, which I'll be talking about later in the course, can be referenced in this way. So a policy on a resource can reference IAM users and IAM roles by using their ARNs. A bucket could give access to one or more users or to one or more roles, but groups are not a true identity: they can't be referenced as a principal in a policy, and so a resource policy cannot grant access to an IAM group. You can grant access to IAM users, and those users can be in groups, but you couldn't put a resource policy on an S3 bucket, grant access to the Developers group, and then expect all of the developers to access it. That's not how groups work. Groups are just there to group up IAM users and allow permissions to be assigned to those groups, which the IAM users inherit.
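As an illustration of what that looks like in practice, here is a hypothetical bucket policy (the bucket name is made up, and 123456789012 is a placeholder account ID). The Principal must be an account, IAM user, or IAM role ARN; a group ARN is simply not valid there:

```python
import json

# Hypothetical S3 bucket policy granting Sally access to objects.
# "Principal" must reference an account, IAM user, or IAM role ARN --
# a group ARN such as arn:aws:iam::...:group/developers is not valid here.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowSally",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/sally"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}
print(json.dumps(bucket_policy, indent=2))
```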
So this is an important one to remember, whether you are answering an exam question that involves groups, users, and roles or resource policies, or whether you're implementing real-world solutions. It’s easy to overestimate the features that a group provides. Don’t fall into the trap of thinking that a group offers more functionality than it does. It’s simply a container for IAM users. That’s all it’s for. It can contain IAM users and have permissions associated with it; that’s it. You can’t log in to them and you can’t reference them from resource policies.
Okay, so that’s everything I wanted to cover in this lesson. Go ahead, complete the video, and when you're ready, I'll look forward to you joining me in the next.
-
learn.cantrill.io
-
Welcome back.
In this demo lesson, we're going to explore IAM users. This is the first type of identity that we've covered in AWS. We'll use the knowledge gained from the IAM policy documents lesson to assign permissions to an IAM user in our AWS account.
To get started, you'll need to be logged in as the IAM admin user to the general AWS account and have the Northern Virginia region selected.
Attached to this lesson are two links. The first is a one-click deployment link, which will deploy the infrastructure needed for this demo. The second link will download the files required for this demo. Click the demo files link to start the download, and then click the one-click deployment link to begin the deployment.
Earlier in the course, you created an IAM user that you should be logged into this account with. I won't go through the process of creating an IAM user again. Instead, we'll use CloudFormation to apply a template that will create an IAM user named Sally, along with two S3 buckets for this demonstration and a managed policy. Enter a password for the Sally user as a parameter in the CloudFormation stack. Use a password that’s reasonably secure but memorable and typeable. This password must meet the password policy assigned to your AWS account, which typically requires a minimum length of eight characters and a mix of character types, including uppercase, lowercase, numbers, and certain special characters. It also cannot be identical to your AWS account name or email address.
After entering the password, scroll down, check the capabilities box, and click on create stack. Once the stack is created, switch to your text editor to review what the template does. It asks for a parameter (Sally's password) and creates several resources.
There are logical resources called 'catpix' and 'animalpix,' both of which are S3 buckets. Another logical resource is 'Sally,' an IAM user. This IAM user has a managed policy attached to it, which references an ARN for a managed policy that will be shown once the stack is complete. It also sets the login profile, including the password, and requires a password reset upon first login.
The managed policy created allows access to S3 but denies access to the CatPix S3 bucket. This setup is defined in the policy logical resource, which you’ll see once the stack is complete.
Returning to the AWS console, refresh the page. You should see the four created resources: the AnimalPix S3 bucket, the CatPix S3 bucket, the IAM Managed Policy, and the Sally IAM User.
CloudFormation generates resource names by taking the stack name ("iam"), the logical resource name defined in the template ("animalpix," "catpix," and "sally"), and adding some randomness to ensure unique physical IDs.
Now, open the Sally IAM user by clicking the link under Resources. Note that Sally has an attached managed policy called IAM User Change Password, which allows her to change her password upon first login.
Go to Policies in the IAM console to see the managed policies inside the account. The IAM User Change Password policy is one of the AWS managed policies and allows Sally to change her password.
Next, click on Dashboard and ensure that you have the IAM users sign-in link on your clipboard. Open a private browser tab or a separate browser to avoid logging out of your IAM admin user. Use this separate tab to access the IAM sign-in page for the general AWS account.
Retrieve Sally’s username from CloudFormation by clicking on outputs and copying it. Paste this username into the IAM username box on the sign-in page, and enter the password you chose for Sally. Click on sign in.
After logging in as Sally, you’ll need to change the password. Enter the old password, then choose and confirm a new secure password. This is possible due to the managed policy assigned to Sally that allows her to change her password.
Once logged in, test Sally’s permissions by navigating to the EC2 console. You might encounter API errors or lack of permissions. Check the S3 console and note that you won't have permissions to list any S3 buckets, even though we know at least two were created. This demonstrates that IAM users initially have no permissions apart from changing their passwords.
Locate and extract the zip file downloaded from the demo files link. Inside the extracted folder, open the file named S3_FullAdminJSON. This JSON policy document grants full access to any S3 actions on any S3 resource.
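Based on that description, the policy document in the file should look roughly like this sketch (the Sid is illustrative):

```python
import json

# Grants any S3 action (s3:*) on any S3 resource (*).
s3_full_admin = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "FullAccess",
        "Effect": "Allow",
        "Action": ["s3:*"],
        "Resource": ["*"],
    }],
}
print(json.dumps(s3_full_admin, indent=2))
```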
Assign this as an inline policy to Sally by copying the JSON policy document and pasting it into the IAM console. Go to the IAM area, open the Sally user, go to the Permissions tab, and click Add Permissions, then Create Inline Policy. Select the JSON tab, delete any existing document, and paste in the JSON policy.
Review the policy, name it S3 Admin Inline, and click Create Policy. Sally will now have this S3 Admin Inline policy in addition to the IAM User Change Password managed policy.
Switch to the browser or tab logged in as Sally and refresh the page. You should now be able to see the S3 buckets. Upload a file to both the AnimalPix and CatPix buckets to verify permissions. For example, upload thor.jpg to AnimalPix and merlin.jpg to CatPix.
To ensure you can read from the CatPix bucket, click on merlin.jpg and select open. You should see the file, confirming that you have access.
Return to the browser logged in as the IAM admin user. Open the Sally user and delete the S3 Admin Inline policy. This will remove her access rights over S3.
In the other browser or tab logged in as Sally, refresh the page. You should now see access denied errors: with the inline policy removed, Sally can no longer list or interact with any of the S3 buckets.
Finally, return to the IAM admin browser or tab. Click on Add Permissions for Sally and attach the managed policy created by the CloudFormation template, "allow all S3 except cats." This policy has two statements: one allowing all S3 actions and another explicitly denying access to the CatPix bucket.
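Based on that description, the managed policy looks something like this sketch (the real bucket name includes CloudFormation-generated randomness, so "catpix" here is illustrative). Note that the deny statement lists both the bucket ARN and the object wildcard ARN, so it covers the bucket and everything in it:

```python
allow_all_s3_except_cats = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Statement 1: allow every S3 action on every resource.
            "Sid": "AllowS3",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*",
        },
        {   # Statement 2: explicitly deny the CatPix bucket and its
            # objects; the explicit deny overrides the allow above.
            "Sid": "DenyCatPix",
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::catpix",
                "arn:aws:s3:::catpix/*",
            ],
        },
    ],
}
```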
Verify this by refreshing the page logged in as Sally. You should be able to interact with all S3 buckets except the CatPix bucket.
To conclude, this demo showed how to apply different types of policies to an IAM user, including inline and managed policies. We demonstrated how these policies affect effective permissions.
For cleanup, delete the managed policy attachment from Sally. In the S3 console, empty both the CatPix and AnimalPix buckets by typing "permanently delete" and clicking empty. Return to CloudFormation, select the IAM stack, and hit delete to clean up all resources created by this stack.
That covers everything for this demo. Complete this video, and when you're ready, join me in the next lesson.
-
Welcome back.
And in this lesson, I want to finish my coverage of IAM users.
You already gained some exposure to IAM users earlier in the course. Remember, you created an IAM admin user in both your general and production AWS accounts. As well as creating these users, you secured them using MFA, and you attached an AWS managed policy to give this IAM user admin rights in both of those accounts.
So for now, I just want to build upon your knowledge of IAM users by adding some extra detail that you'll need for the exam. So let's get started.
Now, before I go into more detail, let's just establish a foundation with a definition. Simply put, IAM users are an identity used for anything requiring long-term AWS access: for example, humans, applications, or service accounts. If you need to give something access to your AWS account, and you can picture one thing—one person or one application, so James from accounts, Mike from architecture, or Miles from development—then 99% of the time you would use an IAM user.
If you need to give an application access to your AWS account, for example, a backup application running on people's laptops, then each laptop would generally use an IAM user. If you need a service account which accesses AWS, then that will generally use an IAM user too. If you can picture one thing, a named thing, then 99% of the time, the correct identity to select is an IAM user. And remember this because it will help in the exam.
IAM starts with a principal. And this is a word which represents an entity trying to access an AWS account. At this point, it's unidentified. Principals can be individual people, computers, services, or a group of any of those things. For a principal to be able to do anything, it needs to authenticate and be authorized. And that's the process that I want to step through now.
A principal, which in this example is a person or an application, makes requests to IAM to interact with resources. Now, to be able to interact with resources, it needs to authenticate against an identity within IAM. An IAM user is an identity which can be used in this way.
Authentication is this first step. Authentication is a process where the principal on the left proves to IAM that it is an identity that it claims to be. So an example of this is that the principal on the left might claim to be Sally, and before it can use AWS, it needs to prove that it is indeed Sally. And it does this by authenticating.
Authentication for IAM users is done either using username and password or access keys. These are both examples of long-term credentials. Generally, username and passwords are used if a human is accessing AWS and accessing via the console UI. Access keys are used if it's an application, or as you experienced earlier in the course, if it's a human attempting to use the AWS Command Line tools.
Now, once a principal goes through the authentication process, the principal is now known as an authenticated identity. An authenticated identity has been able to prove to AWS that it is indeed the identity that it claims to be. So it needs to be able to prove that it's Sally. And to prove that it's Sally, it needs to provide Sally's username and password, or be able to use Sally's secret access key, which is a component of the access key set. If it can do that, then AWS will know that it is the identity that it claims to be, and so it can start interacting with AWS.
Once the principal becomes an authenticated identity, then AWS knows which policies apply to the identity. So in the previous lesson, I talked about policy documents, how they could have one or more statements, and if an identity attempted to access AWS resources, then AWS would know which statements apply to that identity. That's the process of authorization.
So once a principal becomes an authenticated identity, and once that authenticated identity tries to upload to an S3 bucket or terminate an EC2 instance, then AWS checks that that identity is authorized to do so. And that's the process of authorization. So they're two very distinct things. Authentication is how a principal can prove to IAM that it is the identity that it claims to be using username and password or access keys, and authorization is IAM checking the statements that apply to that identity and either allowing or denying that access.
Okay, let's move on to the next thing that I want to talk about, which is Amazon Resource Names, or ARNs. ARNs do one thing, and that's to uniquely identify resources within any AWS account. When you're working with resources using the command line or APIs, you need a way to refer to those resources in an unambiguous way. ARNs allow you to refer to a single resource if needed, or in some cases, a group of resources using wildcards.
Now, this is required because things can be named in a similar way. You might have an EC2 instance in your account with similar characteristics to one in my account, or you might have two instances in your account but in different regions with similar characteristics. ARNs can always identify single resources, whether they're individual resources in the same account or in different accounts.
Now, ARNs are used in IAM policies which are generally attached to identities, such as IAM users, and they have a defined format. Now, there are some slight differences depending on the service, but as you go through this course, you'll gain enough exposure to be able to confidently answer any exam questions that involve ARNs. So don't worry about memorizing the format at this stage, you will gain plenty of experience as we go.
These are two similar, yet very different ARNs. They both look to identify something related to the catgifs bucket. They specify the S3 service. They don't need to specify a region or an account because S3 bucket names are globally unique: if I use a bucket name, nobody else can use that name in any account worldwide.
The difference between these two ARNs is the forward slash star on the end of the second one. And this difference is one of the most common sources of mistakes inside policies. It trips up almost all architects or admins at one point or another. The top ARN references an actual bucket. If you wanted to allow or deny access to a bucket or any actions on that bucket, then you would use this ARN, which refers to the bucket itself. But a bucket and the objects in that bucket are not the same thing.
This ARN references anything in that bucket, but not the bucket itself. So by specifying forward slash star, that's a wild card that matches any keys in that bucket, so any object names in that bucket. This is really important. These two ARNs don't overlap. The top one refers to just the bucket and not the objects in the bucket. The bottom one refers to the objects in the bucket but not the bucket itself.
Now, some actions that you want to allow or deny in a policy operate at a bucket level or actually create buckets. And this would need something like the top ARN. Some actions work on objects, so it needs something similar to the bottom ARN. And you need to make sure that you use the right one. In some cases, creating a policy that allows a set of actions will need both. If you want to allow access to create a bucket and interact with objects in that bucket, then you would potentially need both of these ARNs in a policy.
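A quick way to see that the two ARNs don't overlap is to pattern-match some resources against them. This sketch uses shell-style matching, which behaves like the S3 wildcard does here (the object key merlin.jpg is just an example):

```python
import fnmatch

bucket_arn = "arn:aws:s3:::catgifs"     # the bucket itself
objects_arn = "arn:aws:s3:::catgifs/*"  # every object in the bucket

def arn_matches(pattern, resource):
    # Shell-style wildcard match; '*' matches any sequence of characters.
    return fnmatch.fnmatchcase(resource, pattern)

# The object ARN matches keys in the bucket, but not the bucket itself...
print(arn_matches(objects_arn, "arn:aws:s3:::catgifs/merlin.jpg"))  # True
print(arn_matches(objects_arn, "arn:aws:s3:::catgifs"))             # False
# ...and the bucket ARN matches only the bucket, never its objects.
print(arn_matches(bucket_arn, "arn:aws:s3:::catgifs"))              # True
print(arn_matches(bucket_arn, "arn:aws:s3:::catgifs/merlin.jpg"))   # False
```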
ARNs are collections of fields separated by colons. If you see a double colon, it means that field is empty; nothing needs to be specified. So in this example, you'll see a number of double colons, because you don't need to specify the region or account number for an S3 bucket; the bucket name is globally unique. A star can also be used, which is a wildcard.
Now, keep in mind they're not the same thing. Not specifying a region and specifying a star don't mean the same thing. You might use a star when you want to refer to all regions inside an AWS account; maybe you want to give permissions to interact with EC2 in all regions, but you can't simply omit the field. Generally, you'll use the double colon when something doesn't need to be specified, and a star when you want to refer to a wildcard collection of things. So they're not the same thing. Keep that in mind, and I'll give you plenty of examples as we go through the course.
So the first field is the partition, and this is the partition that the resource is in. For standard AWS regions, the partition is aws. If you have resources in other partitions, the partition is aws- followed by the partition name. This is almost never anything but aws, but, for example, if you do have resources in the China (Beijing) region, then this is aws-cn.
The next part is service. And this is the service name space that identifies the AWS product. For example, S3, IAM, or RDS. The next field is region. So this is the region that the resource you're referring to resides in. Some ARNs do not require a region, so this might be omitted, and certain ARNs require wild card. And you'll gain exposure through the course as to what different services require for their ARNs.
The next field is the account ID. This is the account ID of the AWS account that owns the resource. So for example, 123456789012. So if you're referring to an EC2 instance in a certain account, you will have to specify the account number inside the ARN. Some resources don't require that, so this example is S3 because it is globally unique across every AWS account. You don't need to specify the account number.
And then at the end, we've got resource or resource type. And the content of this part of the ARN varies depending on the service. A resource identifier can be the name or ID of an object. For example, user forward slash Sally or instance forward slash and then the instance ID, or it can be a resource path. But again, I'm only introducing this at this point. You'll get plenty of exposure as you go through the course. I just want to give you this advanced knowledge so you know what to expect.
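Splitting an ARN on its colons makes those fields easy to see. A small sketch, using the example account ID from above:

```python
def parse_arn(arn):
    # arn:partition:service:region:account-id:resource
    # maxsplit=5 keeps any colons inside the resource part intact.
    keys = ["prefix", "partition", "service", "region", "account", "resource"]
    return dict(zip(keys, arn.split(":", 5)))

print(parse_arn("arn:aws:s3:::catgifs"))
# region and account are empty -- S3 bucket names are globally unique
print(parse_arn("arn:aws:iam::123456789012:user/sally"))
# IAM is global, so region is empty, but the account ID is required
```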
So let's quickly talk about an exam PowerUp. I tend not to include useless facts and figures in my course, but some of them are important. This is one such occasion.
Now first, you can only ever have 5,000 IAM users in a single account. IAM is a global service, so this is a per-account limit, not per-region. And second, an IAM user can be a member of a maximum of 10 IAM groups. Both of these have design impacts that you need to be aware of.
What it means is that if you have a system which requires more than 5,000 identities, then you can't use one IAM user for each identity. So this might be a limit for internet scale applications with millions of users, or it might be a limit for large organizations which have more than 5,000 staff, or it might be a limit when large organizations are merging together. If you have any scenario or a project with more than 5,000 identifiable users, so identities, then it's likely that IAM users are not the right identity to pick for that solution.
Now, there are solutions which fix this. We can use IAM roles or Identity Federation, and I'll be talking about both of those later in the course. But in summary, it means using your own existing identities rather than using IAM users. And I'll be covering the architecture and the implementation of this later in the course.
At this stage, I want you to take away one key fact, and that is this 5,000 user limit. If you are faced with an exam question which mentions more than 5,000 users, or talks about an application that's used on the internet which could have millions of users, and if you see an answer saying create an IAM user for every user of that application, that is the wrong answer. Generally with internet scale applications, or enterprise access or company mergers, you'll be using Federation or IAM roles. And I'll be talking about all of that later in the course.
Okay, so that's everything I wanted to cover in this lesson. So go ahead, complete the video, and when you're ready, I'll look forward to you joining me in the next.
-
Welcome back. In this lesson, I want to start by covering an important aspect of how AWS handles security, specifically focusing on IAM policies.
IAM policies are a type of policy that gets attached to identities within AWS. As you've previously learned, identities include IAM users, IAM groups, and IAM roles. You’ll use IAM policies frequently, so it’s important to understand them for the exam and for designing and implementing solutions in AWS.
Policies, once you understand them, are actually quite simple. I’ll walk you through the components and give you an opportunity to experiment with them in your own AWS account. Understanding policies involves three main stages: first, understanding their architecture and how they work; second, gaining the ability to read and understand the policy; and finally, learning to write your own. For the exam, understanding their architecture and being able to read them is sufficient. Writing policies will come as you work through the course and gain more practical experience.
Let's jump in. An IAM identity policy, or IAM policy, is essentially a set of security statements for AWS. It grants or denies access to AWS products and features for any identity using that policy. Identity policies, also known as policy documents, are created using JSON. Familiarity with JSON is helpful, but if you're new to it, don’t worry—it just requires a bit more effort to learn.
This is an example of an identity policy document that you would use with a user, group, or role. At a high level, a policy document consists of one or more statements. Each statement is enclosed in curly braces and grants or denies permissions to AWS services.
When an identity attempts to access AWS resources, it must prove its identity through a process known as authentication. Once authenticated, AWS knows which policies apply to that identity, and each policy can contain multiple statements. AWS also knows which resources you’re trying to interact with and what actions you want to perform on those resources. AWS reviews all relevant statements one by one to determine the permissions for a given identity accessing a particular resource.
A statement consists of several parts. The first part is a statement ID, or SID, which is optional but helps identify the statement and its purpose. For example, "full access" or "DenyCatBucket" indicates what the statement does. Using these identifiers is considered best practice.
Every interaction with AWS involves two main elements: the resource and the actions attempted on that resource. For instance, if you’re interacting with an S3 bucket and trying to add an object, the statement will only apply if it matches both the action and the resource. The action part of a statement specifies one or more actions, which can be very specific or use wildcards (e.g., s3:* for all S3 operations). Similarly, resources can be specified individually or in lists, and wildcards can refer to all resources.
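As a sketch of that matching step, a statement only comes into play when both its action part and its resource part match the attempted operation. Shell-style wildcard matching stands in for IAM's matching here, and the statement values are illustrative:

```python
import fnmatch

statement = {
    "Sid": "DenyCatBucket",
    "Effect": "Deny",
    "Action": ["s3:*"],
    "Resource": ["arn:aws:s3:::catgifs", "arn:aws:s3:::catgifs/*"],
}

def statement_applies(stmt, action, resource):
    # Both the action AND the resource must match one of the listed patterns.
    action_ok = any(fnmatch.fnmatchcase(action, p) for p in stmt["Action"])
    resource_ok = any(fnmatch.fnmatchcase(resource, p) for p in stmt["Resource"])
    return action_ok and resource_ok

print(statement_applies(statement, "s3:GetObject",
                        "arn:aws:s3:::catgifs/merlin.jpg"))  # True
print(statement_applies(statement, "s3:GetObject",
                        "arn:aws:s3:::animalpix/thor.jpg"))  # False: resource
print(statement_applies(statement, "ec2:RunInstances",
                        "arn:aws:s3:::catgifs"))             # False: action
```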
The final component is the effect, which is either "allow" or "deny." The effect determines what AWS does if the action and resource parts of the statement match the attempted operation. If the effect is "allow," access is granted; if it’s "deny," access is blocked. An explicit deny always takes precedence over an explicit allow. If neither applies, the default implicit deny prevails.
In scenarios where there are multiple policies or statements, AWS evaluates all applicable statements. If there's an explicit deny, it overrides any explicit allows. If no explicit deny is present, an explicit allow grants access; if neither is present, the default implicit deny applies.
Lastly, there are two main types of policies: inline policies and managed policies. Inline policies are directly attached to individual identities, making them isolated and cumbersome to manage for large numbers of users. Managed policies are created as separate objects and can be attached to multiple identities, making them more efficient and easier to manage. AWS provides managed policies, but you can also create and manage customer managed policies tailored to your specific needs.
Before concluding, you’ll have a chance to gain practical experience with these policies. For now, this introduction should give you a solid foundation. Complete the video, and I look forward to seeing you in the next lesson.
-