204 Matching Annotations
  1. Last 7 days
    1. Neither of the methods shown above is ideal in environments where you require several clusters or need them to be provisioned in a consistent way by multiple people.

      In this case, IaC is favored over using EKS directly or manually deploying on EC2

    2. Running a cluster directly on EC2 also gives you the choice of using any available Kubernetes distribution, such as Minikube, K3s, or standard Kubernetes as deployed by Kubeadm.
    3. EKS is popular because it’s so simple to configure and maintain. You don’t need to understand the details of how Kubernetes works or how Nodes are joined to your cluster and secured. The EKS service automates cluster management procedures, leaving you free to focus on your workloads. This simplicity can come at a cost, though: you could find EKS becomes inflexible as you grow, and it might be challenging to migrate from if you switch to a different cloud provider.

      Why use EKS

    4. The EKS managed Kubernetes engine isn’t included in the free tier. You’ll always be billed $0.10 per hour for each cluster you create, in addition to the EC2 or Fargate costs associated with your Nodes. The basic EKS charge only covers the cost of running your managed control plane. Even if you don’t use EKS, you’ll still need to pay to run Kubernetes on AWS. The free tier gives you access to EC2 for 750 hours per month on a 12-month trial, but this is restricted to the t2.micro and t3.micro instance types. These only offer 1 GiB of RAM so they’re too small to run most Kubernetes distributions.

      Cost of EKS
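      A quick sanity check on the numbers above. The $0.10/hour control-plane rate is quoted from the text; the node price is a hypothetical on-demand example, not an AWS quote:

```python
# Rough monthly cost estimate for a small EKS cluster.
# The $0.10/hour control-plane rate comes from the text above;
# the node price below is a hypothetical example, not AWS pricing.
HOURS_PER_MONTH = 730

def eks_monthly_cost(n_clusters: int, n_nodes: int, node_hourly: float) -> float:
    control_plane = n_clusters * 0.10 * HOURS_PER_MONTH
    nodes = n_nodes * node_hourly * HOURS_PER_MONTH
    return round(control_plane + nodes, 2)

# One cluster with three nodes at a hypothetical $0.04/hour each:
print(eks_monthly_cost(1, 3, 0.04))  # 73.0 control plane + 87.6 nodes = 160.6
```

Even with zero nodes you pay the control-plane charge, which is the point the highlight makes about EKS never being free-tier.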

    5. Some of the other benefits of Kubernetes on AWS include

      Benefits of using Kubernetes on AWS:

      • scalability
      • cost efficiency
      • high availability

  2. Apr 2024
    1. Lesson 3: When executing a lot of requests to S3, make sure to explicitly specify the AWS region.
    2. Lesson 2: Adding a random suffix to your bucket names can enhance security.
    3. Lesson 1: Anyone who knows the name of any of your S3 buckets can ramp up your AWS bill as they like.

      The author was charged over $1300 after two days of using an S3 bucket, because an open-source tool shipped with a default bucket name in its config, which happened to match his bucket name.

      Luckily, AWS made an exception in the end and he did not have to pay the bill.
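      Lesson 2 can be sketched in a few lines; the base name is a hypothetical example, and the suffix length is a design choice:

```python
import secrets

def suffixed_bucket_name(base: str, n_bytes: int = 8) -> str:
    """Append a random hex suffix so the bucket name is unguessable.
    S3 bucket names must be lowercase; token_hex already is."""
    return f"{base}-{secrets.token_hex(n_bytes)}"

name = suffixed_bucket_name("my-app-assets")
print(name)  # e.g. my-app-assets-9f2c1ab34de56f78
```

An unguessable name blunts Lesson 1: strangers can no longer direct request traffic (and charges) at your bucket just by guessing a common name.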

    1. To address the issues of CAS, Karpenter uses a different approach. Karpenter directly interacts with the EC2 Fleet API to manage EC2 instances, bypassing the need for autoscaling groups.


    2. The problem occurs when you want to move the pod to another node, in cases such as cluster rebalancing, spot interruptions, and other events. This is because EBS volumes are zonally bound and can only be attached to EC2 instances within the zone they were originally provisioned in. This is a key limitation that CAS is not able to take into account when provisioning a new node.

      Key limitation of CAS

    3. Since Karpenter can schedule nodes quicker, it will most often win this race and provide a new node for the pending workload. CAS will still attempt to create a new node, but it will be slower and will most likely have to remove the node after some time because it sits empty. This brings unnecessary costs to your cloud bill.
    4. It’s worth mentioning that Cluster Autoscaler and Karpenter can co-exist within the same cluster.
  3. Feb 2024
    1. At a minimum, each ADR should define the context of the decision, the decision itself, and the consequences of the decision for the project and its deliverables

      ADR sections from the example:

      • Title
      • Status
      • Date
      • Context
      • Decision
      • Consequences
      • Compliance
      • Notes
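      A minimal ADR skeleton following those sections; all placeholder text is illustrative, not from the source:

```markdown
# ADR-001: <short decision title>

- Status: Proposed | Accepted | Deprecated | Superseded
- Date: YYYY-MM-DD

## Context
What forces are at play, and why this decision needs to be made now.

## Decision
The change we are making, stated in full sentences.

## Consequences
What becomes easier or harder for the project and its deliverables.

## Compliance
How adherence to this decision will be checked.

## Notes
Author, approvers, links to related ADRs.
```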

  4. Jan 2024
    1. LocalStack is a cloud service emulator that runs AWS services solely on your laptop without connecting to a remote cloud provider.


  5. Nov 2023
    1. It should be noted that in France, regulations do not allow this market-based approach when reporting company level CO2e emissions : “The assessment of the impact of electricity consumption in the GHG emissions report is carried out on the basis of the average emission factor of the electrical network (…) The use of any other factor is prohibited. There is therefore no discrimination by [electricity] supplier to be established when collecting the data.” (Regulatory method V5-BEGES decree).

      Companies are barred from using market based approaches for reporting?

      How does it work for Amazon then?

  6. Sep 2023
    1. VPC Subnets belong to a Network ACL that determines if traffic is allowed / denied entry and exit to the ENTIRE subnet
  7. Aug 2023
    1. AWS infra changes also emit events with payloads; we can catch and enrich these events to make decisions in downstream services

  8. Jun 2023
  9. May 2023
    1. Amazon has a new set of services that include an LLM called Titan and corresponding cloud/compute services, to roll your own chatbots etc.

  10. Mar 2023
    1. You can freely replace SageMaker services with other components as your project grows and potentially outgrows SageMaker.

  11. Jan 2023
  12. Oct 2022
    1. Keeping data transfers within the same Availability Zone, rather than across zones in a region, is also a good way to save money

      1) data transferring tip on AWS

    2. When you use a private IP address, you are charged less when compared to a public IP address or Elastic IP address.

      2) data transferring tip on AWS

    3. But when you transfer data from one Amazon region to another, AWS charges you for that. The rate depends on the AWS region you are in, and this is the real deciding factor. For example, if you are in the US West (Oregon) region, you have to shell out $0.080/GB, whereas in the Asia Pacific (Seoul) region it bumps up to $0.135/GB.

      Transferring data in AWS within separate regions is quite costly

    4. When you transfer data between Amazon EC2, Amazon Redshift, Amazon RDS, Amazon Network Interfaces, and Amazon Elasticache, you have to pay zero charges if they are within the same Availability Zone.

      Transferring data in AWS within the same AZ is free

    5. When you transfer data from the internet to AWS, it is free of charge. For AWS services like EC2 instances, S3 storage, or RDS instances, transferring data from the internet into them costs nothing. However, if you transfer data into an EC2 instance via an Elastic IPv4 address or a peered VPC using an IPv6 address, you will be charged $0.01/GB. The real catch is when you transfer data out of any of the AWS services. This is where AWS charges you money depending on the region you have chosen and the amount of data you are transferring. Some regions have higher charges than others.

      Data transfer costs on AWS
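      The pricing rules collected above can be sketched as a small estimator. The per-GB rates are the examples quoted in the text and will vary by region and over time:

```python
# Inter-region egress rates quoted in the text above
# (Oregon and Seoul examples; check current AWS pricing).
INTER_REGION_PER_GB = {"us-west-2": 0.080, "ap-northeast-2": 0.135}

def transfer_cost(gb: float, kind: str, region: str = "us-west-2") -> float:
    if kind in ("inbound", "same-az"):  # free per the highlights above
        return 0.0
    if kind == "inter-region":
        return round(gb * INTER_REGION_PER_GB[region], 3)
    raise ValueError(f"unknown transfer kind: {kind}")

print(transfer_cost(500, "inter-region", "us-west-2"))       # 40.0
print(transfer_cost(500, "inter-region", "ap-northeast-2"))  # 67.5
print(transfer_cost(500, "inbound"))                         # 0.0
```

The same 500 GB costs nothing coming in and roughly 69% more leaving Seoul than leaving Oregon, which is the "region is the real deciding factor" point.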

    1. we made sure to implement fail safes at each stage of the migration to make sure we could fall back if something were to go wrong. It’s also why we tested on a small scale before proceeding with the rest of the migration.

      While planning a big migration, make sure to have a fallback plan

    2. We mirrored PostgreSQL shards storing cached_urls tables in Cassandra. We switched service.prerender.io to the Cloudflare load balancer to allow dynamic traffic distribution. We set up new EU private-recache servers. We keep performing stress tests to solve any performance issues.

      Steps of phase 3 migration

    3. “The true hidden price for AWS is coming from the traffic cost, they sell a reasonably priced storage, and it’s even free to upload it. But when you get it out, you pay an enormous cost.

      AWS may be reasonably priced, but moving data out will cost a lot (e.g. $0.080/GB in the US West, or $0.135/GB in the Asia Pacific)!

    4. In the last four weeks, we moved most of the cache workload from AWS S3 to our own Cassandra cluster.

      Moving from AWS s3 to an own Cassandra cluster

    5. After testing whether Prerender pages could be cached in both S3 and minio, we slowly diverted traffic away from AWS S3 and towards minio.

      Moving from AWS S3 towards minio

    6. Phase 1 mostly involved setting up the bare metal servers and testing the migration on a small and more manageable setting before scaling. This phase required minimal software adaptation, which we decided to run on KVM virtualization on Linux.

      Migration from AWS to on-prem started by:

      • setting up bare metal servers
      • testing
      • adapting software to run on KVM virtualization on Linux

    7. The solution? Migrate the cached pages and traffic onto Prerender’s own internal servers and cut our reliance on AWS as quickly as possible.

      When the Prerender team moved from AWS to on-prem, they cut the cost of data storage and traffic from $1,000,000 to $200,000

  13. Aug 2022
    1. A data lake is different, because it stores relational data from line of business applications, and non-relational data from mobile apps, IoT devices, and social media.

      A data lake vs a data warehouse

    2. The data structure, and schema are defined in advance to optimize for fast SQL queries, where the results are typically used for operational reporting and analysis. Data is cleaned, enriched, and transformed so it can act as the “single source of truth”

      Data warehouse

  14. Jun 2022
  15. Nov 2021
    1. We implemented a bash script to be installed in the master node of the EMR cluster, and the script is scheduled to run every 5 minutes. The script monitors the clusters and sends a CUSTOM metric EMR-INUSE (0=inactive; 1=active) to CloudWatch every 5 minutes. If CloudWatch receives 0 (inactive) for some predefined set of data points, it triggers an alarm, which in turn executes an AWS Lambda function that terminates the cluster.

      Solution to terminate EMR cluster; however, right now EMR supports auto-termination policy out of the box
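      The monitoring logic described above can be sketched roughly like this; the threshold is a hypothetical choice, and this is not the actual script:

```python
def should_terminate(recent_inuse_samples, required_zeros=6):
    """Mimic the CloudWatch alarm described above: fire when the last
    `required_zeros` EMR-INUSE data points are all 0 (inactive).
    Samples arrive every 5 minutes, so 6 zeros is roughly 30 idle minutes."""
    if len(recent_inuse_samples) < required_zeros:
        return False
    return all(s == 0 for s in recent_inuse_samples[-required_zeros:])

print(should_terminate([1, 1, 0, 0, 0, 0, 0, 0]))  # True: 6 trailing zeros
print(should_terminate([0, 0, 1, 0, 0, 0]))        # False: only 3 trailing zeros
```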

  16. Oct 2021
    1. So, while DELETE operations are free, LIST operations (to get a list of objects) are not free (~$.005 per 1000 requests, varying a bit by region).

      Deleting buckets on S3 is not free. Whether you use the Web Console or the AWS CLI, it will execute one LIST call per 1000 objects
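      A rough cost check using the ~$0.005 per 1,000 LIST requests figure from the text; each LIST page returns up to 1,000 keys:

```python
import math

def list_cost_for_delete(n_objects: int, price_per_1000_lists: float = 0.005) -> float:
    """Cost of the LIST calls needed to enumerate a bucket before deletion."""
    lists_needed = math.ceil(n_objects / 1000)  # 1,000 keys per LIST page
    return round(lists_needed * price_per_1000_lists / 1000, 6)

print(list_cost_for_delete(1_000_000))    # 0.005  (1,000 LIST calls)
print(list_cost_for_delete(100_000_000))  # 0.5    (100,000 LIST calls)
```

Trivial for small buckets, but it is a real line item when emptying buckets with hundreds of millions of objects.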

    1. few battle-hardened options, for instance: Airflow, a popular open-source workflow orchestrator; Argo, a newer orchestrator that runs natively on Kubernetes; and managed solutions such as Google Cloud Composer and AWS Step Functions.

      Current top orchestrators:

      • Airflow
      • Argo
      • Google Cloud Composer
      • AWS Step Functions
  17. Sep 2021
  18. Aug 2021
  19. Jun 2021
  20. Apr 2021
  21. Mar 2021
    1. Werner Vogels, Amazon CTO, notes that one of the lessons we have learned at Amazon is to expect the unexpected. He reminds us that failures are a given, and as a consequence it’s desirable to build systems that embrace failure as a natural occurrence. Coding around these failures is important, but undifferentiated, work that improves the integrity of the solution being delivered. However, it takes time away from investing in differentiating code.

      This is an annotation I made.

    2. When asked to define the role of the teacher, for example, Reggio educators do not begin in the way typical to

      This is an annotation I made.

    3. This is an annotation I made.

    4. This is an annotation I made.

    5. This is an annotation I made.

    1. Another application that demands extreme reliability is the configuration of foundational components from AWS, such as Network Load Balancers. When a customer makes a change to their Network Load Balancer, such as adding a new instance or container as a target, it is often critical and urgent. The customer might be experiencing a flash crowd and needs to add capacity quickly. Under the hood, Network Load Balancers run on AWS Hyperplane, an internal service that is embedded in the Amazon Elastic Compute Cloud (EC2) network. AWS Hyperplane could handle configuration changes by using a workflow. So, whenever a customer makes a change, the change is turned into an event and inserted into a workflow that pushes that change out to all of the AWS Hyperplane nodes that need it. They can then ingest the change.

      This article clearly describes the functionality of AWS ELB

  22. Feb 2021
  23. Jan 2021
    1. Zappos created models to predict customer apparel sizes, which are cached and exposed at runtime via microservices for use in recommendations.

      There is another company, Virtusize, doing the same kind of size prediction and recommendation

    1. Note: If your DNS does not allow you to add “@” as the hostname, please try leaving this field blank when you enter the ProtonMail verification information.

      If you're using AWS Route53, the console will silently accept the @ for the host, but WILL BE INVALID. You will need to follow this guidance to complete DNS configuration for ProtonMail.

  24. Nov 2020
    1. The details of what goes into a policy vary for each service, depending on what actions the service makes available, what types of resources it contains, and so on.

      This means that some kinds of validation cannot be done on write. For example, I've been able to write Resource values that contain invalid characters.

  25. Sep 2020
  26. Aug 2020
  27. Jun 2020
    1. Serverless may have a confusing name and might have people believe that it is “server-less”, but it is still an impressive architecture with various benefits. From a business’ perspective, the best advantage of going serverless is reduced time-to-market. Others include lower operational costs, no infrastructure management, and efficiency.


    1. The best all-around performer is AWS CloudFront, followed closely by GitHub Pages. Not only do they have the fastest response times (median), they’re also the most consistent. They are, however, closely followed by Google Cloud Storage. Interestingly, there is very little difference between a regional and multi-regional bucket. The only reason to pick a multi-regional bucket would be the additional uptime guarantee. Cloudflare didn’t perform as well as I would’ve expected.

      Results of static webhosting benchmark (2020 May):

      1. AWS CloudFront
      2. GitHub Pages
      3. Google Cloud Storage
  28. May 2020
    1. Amazon Machine Learning Deprecated. Use SageMaker instead.

      Instead of Amazon Machine Learning use Amazon SageMaker

    1. My friends ask me if I think Google Cloud will catch up to its rivals. Not only do I think so — I’m positive five years down the road it will surpass them.

      GCP more popular than AWS in 2025?

    2. So if GCP is so much better, why so many more people use AWS?

      Why so many people use AWS:

      • they were first
      • aggressive expansion of product line
      • following the crowd
      • fear of not getting a job based on GCP
      • fear that GCP may be abandoned by Google
    3. As I mentioned, I think that AWS certainly offers a lot more features, configuration options and products than GCP does, and you may benefit from some of them. Also, AWS releases products at a much faster speed. You can certainly do more with AWS, there is no contest here. If for example you need a truck with a server inside or a computer sent over to your office so you can dump your data inside and return it to Amazon, then AWS is for you. AWS also has more flexibility in terms of location of your data centres.

      Advantages of AWS over GCP:

      • a lot more features (but are they necessary for you?)
      • a lot more configuration options
      • a lot more products
      • releases products at a much faster speed
      • you can do simply more with AWS
      • offers AWS Snowmobile
      • more flexibility in terms of your data centres
    4. Both AWS and GCP are very secure and you will be okay as long as you are not careless in your design. However, GCP for me has an edge in the sense that everything is encrypted by default.

      Encryption is on by default in GCP

    5. I felt that performance was almost always better in GCP, for example copying from instances to buckets in GCP is INSANELY fast

      Performance-wise, GCP also seems to beat AWS

    6. AWS charges substantially more for their services than GCP does, but most people ignore the real high cost of using AWS, which is; expertise, time and manpower.

      AWS is more costly, requires more time and manpower over GCP

    7. GCP provides a smaller set of core primitives that are global and work well for lots of use cases. Pub/Sub is probably the best example I have for this. In AWS you have SQS, SNS, Amazon MQ, Kinesis Data Streams, Kinesis Data Firehose, DynamoDB Streams, and maybe another queueing service by the time you read this post. 2019 Update: Amazon has now released another streaming service: Amazon Managed Streaming Kafka.

      Pub/Sub of GCP might be enough to replace most (all?) of the following Amazon products: SQS, SNS, Amazon MQ, Kinesis Data Streams, Kinesis Data Firehose, DynamoDB Streams, Amazon Managed Streaming Kafka

    8. At the time of writing this, there are 169 AWS products compared to 90 in GCP.

      AWS has more products than GCP, but that's not necessarily good since some nearly duplicate each other

    9. Spinning up an EKS cluster gives you essentially a brick. You have to spin up your own nodes on the side and make sure they connect with the master, which is a lot of work for you to do on top of the promise of “managed”

      Managing Kubernetes on AWS (EKS) also isn't as effective as on GCP (GKE)

    10. You can forgive the documentation in AWS being a nightmare to navigate for being a mere reflection of the confusing mess that it is trying to describe. Whenever you are trying to solve a simple problem, far too often you end up drowning in reference pages; the experience is like asking for a glass of water and being hosed down with a fire hydrant.

      Great documentation is contextual, not referential (like AWS's)

    11. Jeff Bezos is an infamous micro-manager. He micro-manages every single pixel of Amazon’s retail site. He hired Larry Tesler, Apple’s Chief Scientist and probably the very most famous and respected human-computer interaction expert in the entire world, and then ignored every goddamn thing Larry said for three years until Larry finally — wisely — left the company. Larry would do these big usability studies and demonstrate beyond any shred of doubt that nobody can understand that frigging website, but Bezos just couldn’t let go of those pixels, all those millions of semantics-packed pixels on the landing page. They were like millions of his own precious children. So they’re all still there, and Larry is not.

      A case study of why AWS doesn't look the way it's supposed to

    12. The AWS interface looks like it was designed by a lonesome alien living in an asteroid who once saw a documentary about humans clicking with a mouse. It is confusing, counterintuitive, messy and extremely overcrowded.


    13. After you login with your token you then need to create a script to give you a 12 hour session, and you need to do this every day, because there is no way to extend this.

      One of the complications when we want to use the AWS CLI with 2FA (not the case with GCP)

    14. In GCP you have one master account/project that you can use to manage the rest of your projects, you log in with your company google account and then you can set permissions to any project however you want.

      Setting up account permissions for projects in GCP is far better than in AWS

    15. It’s not that AWS is harder to use than GCP, it’s that it is needlessly hard; a disjointed, sprawl of infrastructure primitives with poor cohesion between them.

      AWS management isn't as straightforward as the one of GCP

    1. A portfolio is a collection of products, together with configuration information. Portfolios help manage product configuration, and who can use specific products and how they can use them. With AWS Service Catalog, you can create a customized portfolio for each type of user in your organization and selectively grant access to the appropriate portfolio.
    1. Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events.

      Firehose is different; it delivers into:

      • es
      • s3
      • redshift
    1. Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk.

    1. DynamoDB Streams enables solutions such as these, and many others. DynamoDB Streams captures a time-ordered sequence of item-level modifications in any DynamoDB table and stores this information in a log for up to 24 hours.

      record db item changes

    1. AWS OpsWorks Stacks uses Chef cookbooks to handle tasks such as installing and configuring packages and deploying apps.
    1. Your Amazon Athena query performance improves if you convert your data into open source columnar formats, such as Apache Parquet

      Athena performance: use columnar formats

    1. Amazon Macie is a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS.
    1. Amazon AppStream 2.0 is a fully managed application streaming service. You centrally manage your desktop applications on AppStream 2.0 and securely deliver them to any computer.

      for streaming apps

    1. A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
    1. Endpoint policies are currently supported by CodeBuild, CodeCommit, ELB API, SQS, SNS, CloudWatch Logs, API Gateway, SageMaker notebooks, SageMaker API, SageMaker Runtime, Cloudwatch Events and Kinesis Firehose.
    1. Using VPC endpoint policies A VPC endpoint policy is an IAM resource policy that you attach to an endpoint when you create or modify the endpoint. If you do not attach a policy when you create an endpoint, we attach a default policy for you that allows full access to the service. If a service does not support endpoint policies, the endpoint allows full access to the service. An endpoint policy does not override or replace IAM user policies or service-specific policies (such as S3 bucket policies). It is a separate policy for controlling access from the endpoint to the specified service.
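      A hypothetical endpoint policy restricting an S3 endpoint to a single (illustrative) bucket; as noted above, the default policy instead allows full access to the service:

```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-example-bucket/*"
    }
  ]
}
```

This sits alongside, not instead of, IAM user policies and S3 bucket policies: a request through the endpoint must pass all of them.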
    1. An interface VPC endpoint (interface endpoint) enables you to connect to services powered by AWS PrivateLink.

      lets you connect to AWS services from a private VPC

    1. You can associate a health check with an alias record instead of or in addition to setting the value of Evaluate Target Health to Yes. However, it's generally more useful if Route 53 responds to queries based on the health of the underlying resources—the HTTP servers, database servers, and other resources that your alias records refer to. For example, suppose the following configuration:


      evaluate target health

    1. For a non-proxy integration, you must set up at least one integration response, and make it the default response, to pass the result returned from the backend to the client. You can choose to pass through the result as-is or to transform the integration response data to the method response data if the two have different formats. For a proxy integration, API Gateway automatically passes the backend output to the client as an HTTP response. You do not set either an integration response or a method response.

      integration vs method response

    1. Set up method response status code The status code of a method response defines a type of response. For example, responses of 200, 400, and 500 indicate successful, client-side error and server-side error responses, respectively.

      method response status code

    1. AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics.

    1. You can use organizational units (OUs) to group accounts together to administer as a single unit. This greatly simplifies the management of your accounts.

      AWS organizational units (OUs)

    1. What is AWS Elastic Beanstalk?

      AWS PaaS

    2. Because AWS Elastic Beanstalk performs an in-place update when you update your application versions, your application can become unavailable to users for a short period of time. You can avoid this downtime by performing a blue/green deployment, where you deploy the new version to a separate environment, and then swap CNAMEs of the two environments to redirect traffic to the new version instantly.

      CNAME swap

    1. Using AWS SCT to convert objects (tables, indexes, constraints, functions, and so on) from the source commercial engine to the open-source engine. Using AWS DMS to move data into the appropriate converted objects and keep the target database in complete sync with the source. Doing this takes care of the production workload while the migration is ongoing.

      DMS vs SCT

      data migration service vs schema conversion tool

      DMS keeps the target DB in sync with the source

    1. When an instance is stopped and restarted, the Host affinity setting determines whether it's restarted on the same, or a different, host.

      The host affinity setting helps manage dedicated hosts

    1. Available Internet Connection | Theoretical Min. Number of Days to Transfer 100TB at 80% Network Utilization | When to Consider AWS Snowball?
       T3 (44.736Mbps) | 269 days | 2TB or more
       100Mbps | 120 days | 5TB or more
       1000Mbps | 12 days | 60TB or more

      When to use Snowball

      1000Mbps 12 days 60TB
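      The table can be roughly reproduced with a quick calculation (100 TB at 80% utilization). The 1000 Mbps row matches exactly; the published table appears to round the slower links more loosely:

```python
def days_to_transfer(tb: float, link_mbps: float, utilization: float = 0.8) -> int:
    """Theoretical days to push `tb` decimal terabytes over a link
    running at the given fraction of its nominal speed."""
    bits = tb * 8 * 10**12                            # TB -> bits
    seconds = bits / (link_mbps * 10**6 * utilization)  # effective bps
    return round(seconds / 86400)

print(days_to_transfer(100, 44.736))  # 259 (table says 269)
print(days_to_transfer(100, 100))     # 116 (table says 120)
print(days_to_transfer(100, 1000))    # 12  (matches the table)
```

Either way, the conclusion holds: below roughly gigabit speeds, shipping a Snowball beats the wire for tens of terabytes.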

    1. Amazon WorkSpaces is a managed, secure Desktop-as-a-Service (DaaS) solution. You can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe.
    1. The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users)

      aws resource

    1. AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted.
    1. For example, assume that you have a load balancer configuration that you use for most of your stacks. Instead of copying and pasting the same configurations into your templates, you can create a dedicated template for the load balancer. Then, you just use the resource to reference that template from within other templates.

      nested stack

    1. AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud.
    1. Expedited retrieval allows you to quickly access your data when you need to have almost immediate access to your information. This retrieval type can be used for archives up to 250MB. Expedited retrieval usually completes within 1 and 5 minutes.


      3 types of retrieval

      expedited: 1 to 5 minutes

    1. TGW coupled with AWS Resource Access Manager will allow you to use a single Transit Gateway across multiple AWS accounts, however, it’s still limited to a single region.

      TGW, cross multi accounts

    2. Direct Connect Gateway – DGW DGW builds upon VGW capabilities adding the ability to connect VPCs in one region to a Direct Connect in another region. CIDR addresses cannot overlap. In addition, traffic will not route from VPC-A to the Direct Connect Gateway and to VPC-B. Traffic will have to route from the VPC-A —> Direct Connect —-> Data Centre Router —-> Direct Connect —> VPC-B.

      Builds on VGW: connects VPCs to a Direct Connect in another region.

    3. Virtual Private Gateway – VGW The introduction of the VGW introduced the ability to let multiple VPCs, in the same region, on the same account, share a Direct Connect. Prior to this, you’d need a Direct Connect Private Virtual Interface (VIF) for each VPC, establishing a 1:1 correlation, which didn’t scale well both in terms of cost and administrative overhead.  VGW became a solution that reduced the expense of requiring new Direct Connect circuits for each VPC as long as both VPCs were in the same region, on the same account. This construct can be used with either Direct Connect or the Site-to-Site VPN.

      VGW saves Direct Connect fees by using one connection for all VPCs in the same region

    4. AWS VGW vs DGW vs TGW

    1. In general, bucket owners pay for all Amazon S3 storage and data transfer costs associated with their bucket. A bucket owner, however, can configure a bucket to be a Requester Pays bucket. With Requester Pays buckets, the requester instead of the bucket owner pays the cost of the request and the data download from the bucket. The bucket owner always pays the cost of storing data.

      Requester Pays

    1. When CloudFront receives a request, you can use a Lambda function to generate an HTTP response that CloudFront returns directly to the viewer without forwarding the response to the origin. Generating HTTP responses reduces the load on the origin, and typically also reduces latency for the viewer.

      can be helpful for auth
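      A minimal sketch of such a function in the Lambda@Edge viewer-request style; the session cookie check is a hypothetical auth example:

```python
def handler(event, context=None):
    """Return a response directly from the edge when the viewer lacks a
    (hypothetical) session cookie, so the request never reaches the origin."""
    request = event["Records"][0]["cf"]["request"]
    cookies = request.get("headers", {}).get("cookie", [])
    if any("session=" in c.get("value", "") for c in cookies):
        return request  # authenticated: forward to the origin as usual
    return {            # unauthenticated: CloudFront returns this directly
        "status": "401",
        "statusDescription": "Unauthorized",
        "body": "Please sign in.",
    }

event = {"Records": [{"cf": {"request": {"headers": {}}}}]}
print(handler(event)["status"])  # 401
```

Returning the request object forwards it; returning a response object short-circuits at the edge, which is the latency and origin-load win the highlight describes.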

    1. Amazon S3 event notifications are designed to be delivered at least once. Typically, event notifications are delivered in seconds but can sometimes take a minute or longer.

      event notification of s3 might take minutes


      CloudWatch does not support S3, but CloudTrail does

    1. By default, Amazon Redshift has excellent tools to back up your cluster via snapshot to Amazon Simple Storage Service (Amazon S3). These snapshots can be restored in any AZ in that region or transferred automatically to other regions for disaster recovery. Amazon Redshift can even prioritize data being restored from Amazon S3 based on the queries running against a cluster that is still being restored.

      Redshift is single-AZ

    1. For this setup, do the following: 1.    Create a custom AWS Identity and Access Management (IAM) policy and execution role for your Lambda function. 2.    Create Lambda functions that stop and start your EC2 instances. 3.    Create CloudWatch Events rules that trigger your function on a schedule. For example, you could create a rule to stop your EC2 instances at night, and another to start them again in the morning.
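      Step 2 might look roughly like this. The instance IDs are placeholders, and the EC2 client is injected so the logic can be exercised without AWS (in real use it would be boto3.client('ec2')):

```python
INSTANCE_IDS = ["i-0123456789abcdef0"]  # placeholder IDs

def stop_handler(event, context=None, ec2=None):
    """Lambda entry point triggered by the evening CloudWatch Events rule."""
    return ec2.stop_instances(InstanceIds=INSTANCE_IDS)

def start_handler(event, context=None, ec2=None):
    """Lambda entry point triggered by the morning rule."""
    return ec2.start_instances(InstanceIds=INSTANCE_IDS)

# A stub client standing in for boto3 in this sketch:
class FakeEC2:
    def stop_instances(self, InstanceIds):
        return {"StoppingInstances": [{"InstanceId": i} for i in InstanceIds]}
    def start_instances(self, InstanceIds):
        return {"StartingInstances": [{"InstanceId": i} for i in InstanceIds]}

print(stop_handler({}, ec2=FakeEC2()))
```

The IAM policy from step 1 only needs ec2:StopInstances and ec2:StartInstances on those instances, plus the usual Lambda logging permissions.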
    1. FIFO queues also provide exactly-once processing but have a limited number of transactions per second (TPS):

      Standard queues do not guarantee exactly-once processing
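      Because standard queues deliver at least once, consumers often deduplicate on their side. A minimal sketch with an in-memory seen-set; a real system would persist it:

```python
def make_idempotent_consumer(process):
    """Wrap a handler so redelivered messages (same message id)
    are processed only once; this is the consumer-side answer to
    a standard queue's at-least-once delivery."""
    seen = set()
    def consume(message):
        if message["id"] in seen:
            return "skipped duplicate"
        seen.add(message["id"])
        return process(message)
    return consume

consume = make_idempotent_consumer(lambda m: f"processed {m['body']}")
print(consume({"id": "m1", "body": "hello"}))  # processed hello
print(consume({"id": "m1", "body": "hello"}))  # skipped duplicate
```

FIFO queues do this deduplication for you, at the cost of the TPS limit the highlight mentions.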

  29. Apr 2020
    1. One way to put it is this: LSI - allows you to perform a query on a single Hash-Key while using multiple different attributes to "filter" or restrict the query. GSI - allows you to perform queries on multiple Hash-Keys in a table, but costs extra in throughput, as a result.

      Secondary indexes: LSI vs GSI
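      The distinction can be made concrete with a hypothetical Orders table in the CreateTable request shape: the LSI reuses the table's partition key with a new sort key, while the GSI introduces a completely different partition key:

```python
table_definition = {
    "TableName": "Orders",  # hypothetical example table
    "KeySchema": [
        {"AttributeName": "customer_id", "KeyType": "HASH"},
        {"AttributeName": "order_id", "KeyType": "RANGE"},
    ],
    "LocalSecondaryIndexes": [{
        "IndexName": "ByOrderDate",
        "KeySchema": [
            {"AttributeName": "customer_id", "KeyType": "HASH"},  # same hash key
            {"AttributeName": "order_date", "KeyType": "RANGE"},  # new sort key
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    "GlobalSecondaryIndexes": [{
        "IndexName": "ByStatus",
        "KeySchema": [
            {"AttributeName": "status", "KeyType": "HASH"},  # different hash key
            {"AttributeName": "order_date", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
}

lsi_hash = table_definition["LocalSecondaryIndexes"][0]["KeySchema"][0]["AttributeName"]
gsi_hash = table_definition["GlobalSecondaryIndexes"][0]["KeySchema"][0]["AttributeName"]
print(lsi_hash == "customer_id", gsi_hash == "status")  # True True
```

The GSI also pays for its own provisioned throughput, which is the "costs extra" caveat in the quote.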

    1. Cognito authorizers–Amazon Cognito user pools provide a set of APIs that you can integrate into your application to provide authentication. User pools are intended for mobile or web applications where you handle user registration and sign-in directly in the application.To use an Amazon Cognito user pool with your API, you must first create an authorizer of the COGNITO_USER_POOLS authorizer type, and then configure an API method to use that authorizer. After a user is authenticated against the user pool, they obtain an Open ID Connect token, or OIDC token, formatted in a JSON web token.Users who have signed in to your application will have tokens provided to them by the user pool. Then, your application can use that token to inject information into a header in subsequent API calls that you make against your API Gateway endpoint.The API call succeeds only if the required token is supplied and the supplied token is valid. Otherwise, the client isn't authorized to make the call, because the client did not have credentials that could be authorized.

    2. IAM authorizers–All requests are required to be signed using the AWS Version 4 signing process (also known as SigV4). The process uses your AWS access key and secret key to compute an HMAC signature using SHA-256. You can obtain these keys as an AWS Identity and Access Management (IAM) user or by assuming an IAM role. The key information is added to the Authorization header, and behind the scenes, API Gateway takes that signed request, parses it, and determines whether or not the user who signed the request has the IAM permissions to invoke your API.
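
The SigV4 signing-key derivation described above can be shown in a few lines: the secret key is run through a chain of HMAC-SHA256 operations over the date, region, service, and the literal string "aws4_request". This is a sketch of the key-derivation step only, not a complete request signer.

```python
import hashlib
import hmac

def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key. `date` is in YYYYMMDD form."""
    def sign(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()

    k_date = sign(("AWS4" + secret_key).encode(), date)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")   # final 32-byte signing key
```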

    3. Lambda authorizers–A Lambda authorizer is simply a Lambda function that you can write to perform any custom authorization that you need. There are two types of Lambda authorizers: token and request parameter. When a client calls your API, API Gateway verifies whether a Lambda authorizer is configured for the API method. If it is, API Gateway calls the Lambda function. In this call, API Gateway supplies the authorization token (or the request parameters, based on the type of authorizer), and the Lambda function returns a policy that allows or denies the caller's request. API Gateway also supports an optional policy cache that you can configure for your Lambda authorizer. This feature increases performance by reducing the number of invocations of your Lambda authorizer for previously authorized tokens. And with this cache, you can configure a custom time to live (TTL). To make it easy to get started with this method, you can choose the API Gateway Lambda authorizer blueprint when creating your authorizer function from the Lambda console.
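
A minimal token-type Lambda authorizer sketch, under stated assumptions: the `"allow-me"` token check is a placeholder (a real authorizer would validate a JWT or look the token up), while the returned shape matches the IAM policy document API Gateway expects back.

```python
# Hypothetical token-type Lambda authorizer. Only the returned policy shape is
# meaningful; the token comparison is a stand-in for real validation logic.
def authorizer_handler(event, context=None):
    token = event.get("authorizationToken", "")
    effect = "Allow" if token == "allow-me" else "Deny"   # placeholder check
    return {
        "principalId": "user|anonymous",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event.get("methodArn", "*"),
            }],
        },
    }
```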

    1. DynamoDB supports two types of secondary indexes:
       - Global secondary index — an index with a partition key and a sort key that can be different from those on the base table. A global secondary index is considered "global" because queries on the index can span all of the data in the base table, across all partitions. A global secondary index is stored in its own partition space away from the base table and scales separately from the base table.
       - Local secondary index — an index that has the same partition key as the base table, but a different sort key. A local secondary index is "local" in the sense that every partition of a local secondary index is scoped to a base table partition that has the same partition key value.
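
The distinction is visible in a table definition. The table and attribute names below are hypothetical; the dict follows the shape of boto3 `create_table` parameters, with the LSI sharing the table's partition key and the GSI using entirely different keys.

```python
# Hypothetical DynamoDB table definition (boto3 create_table-style parameters)
# contrasting the two index types.
table_params = {
    "TableName": "GameScores",
    "KeySchema": [
        {"AttributeName": "UserId", "KeyType": "HASH"},
        {"AttributeName": "GameTitle", "KeyType": "RANGE"},
    ],
    "LocalSecondaryIndexes": [{
        "IndexName": "ByTopScore",
        "KeySchema": [
            {"AttributeName": "UserId", "KeyType": "HASH"},     # same partition key as table
            {"AttributeName": "TopScore", "KeyType": "RANGE"},  # different sort key
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    "GlobalSecondaryIndexes": [{
        "IndexName": "ByGameTitle",
        "KeySchema": [
            {"AttributeName": "GameTitle", "KeyType": "HASH"},  # different partition key
            {"AttributeName": "TopScore", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
}
```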
    1. Amazon SQS supports dead-letter queues, which other queues (source queues) can target for messages that can't be processed (consumed) successfully. Dead-letter queues are useful for debugging your application or messaging system because they let you isolate problematic messages to determine why their processing doesn't succeed.
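
A toy model of the redrive behavior behind dead-letter queues (the function and its signature are illustrative, not an SQS API): once a message has been received `maxReceiveCount` times without being deleted, it is moved to the dead-letter queue instead of being delivered again.

```python
# Toy simulation of an SQS redrive policy for a single message.
def process_with_redrive(message, handler, max_receive_count=3):
    dead_letter_queue = []
    for _ in range(max_receive_count):
        try:
            handler(message)
            return dead_letter_queue          # processed and deleted successfully
        except Exception:
            continue                          # message becomes visible again
    dead_letter_queue.append(message)         # receive count exhausted: redrive to DLQ
    return dead_letter_queue
```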
    1. Amazon Lex is a service for building conversational interfaces into any application using voice and text
    1. A company runs a memory-intensive analytics application using on-demand Amazon EC2 C5 compute optimized instances. The application is used continuously and application demand doubles during working hours. The application currently scales based on CPU usage. When scaling in occurs, a lifecycle hook is used because the instance requires 4 minutes to clean the application state before terminating. Because users reported poor performance during working hours, scheduled scaling actions were implemented so additional instances would be added during working hours. The Solutions Architect has been asked to reduce the cost of the application. Which solution is MOST cost-effective?

      The answer should be A here, because C5 instances are about 40% cheaper than the equivalent R5 instances

    1. When a user in an AWS account creates a blockchain network on Amazon Managed Blockchain, they also create the first member in the network. This first member has no peer nodes associated with it until you create them. After you create the network and the first member, you can use that member to create an invitation proposal for other members in the same AWS account or in other AWS accounts. Any member can create an invitation proposal.

      about members of blockchain

    1. AWS Step Functions is a fully managed service that makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Instead of writing a Decider program, you define state machines in JSON. AWS customers should consider using Step Functions for new applications. If Step Functions does not fit your needs, then you should consider Amazon Simple Workflow (SWF).
    2. Workers are programs that interact with Amazon SWF to get tasks, process received tasks, and return the results. The decider is a program that controls the coordination of tasks.

      SWF worker and decider

    1. SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. SWF helps developers build, run, and scale background jobs that have parallel or sequential steps. You can think of Amazon SWF as a fully-managed state tracker and task coordinator in the Cloud.

    1. SNS is a distributed publish-subscribe system. Messages are pushed to subscribers as and when they are sent by publishers to SNS. SQS is distributed queuing system. Messages are NOT pushed to receivers. Receivers have to poll or pull messages from SQS.
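
The push-vs-pull contrast can be sketched in a few lines (the class names are illustrative, not the AWS SDK): a topic pushes each message to every subscriber the moment it is published, while a queue holds messages until a consumer polls for them.

```python
# Toy contrast between SNS-style push delivery and SQS-style pull delivery.
class TopicSketch:
    """Push model: the publisher drives delivery to all subscribers."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, msg):
        for callback in self.subscribers:   # fan-out happens at publish time
            callback(msg)

class QueueSketch:
    """Pull model: messages wait until a consumer polls."""
    def __init__(self):
        self.buffer = []

    def send(self, msg):
        self.buffer.append(msg)

    def poll(self):
        return self.buffer.pop(0) if self.buffer else None
```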
    1. Amazon SimpleDB passes on to you the financial benefits of Amazon’s scale. You pay only for resources you actually consume. For Amazon SimpleDB, this means data store reads and writes are charged by compute resources consumed by each operation, and you aren’t billed for compute resources when you aren’t actively using them (i.e. making requests).
    1. While SimpleDB has scaling limitations, it may be a good fit for smaller workloads that require query flexibility. Amazon SimpleDB automatically indexes all item attributes and thus supports query flexibility at the cost of performance and scale.

      Simple DB vs DynamoDB

    1. An elastic network interface (referred to as a network interface in this documentation) is a logical networking component in a VPC that represents a virtual network card.
    1. Identity-based policies are attached to an IAM user, group, or role. These policies let you specify what that identity can do (its permissions). For example, you can attach the policy to the IAM user named John, stating that he is allowed to perform the Amazon EC2 RunInstances action. The policy could further state that John is allowed to get items from an Amazon DynamoDB table named MyCompany. You can also allow John to manage his own IAM security credentials. Identity-based policies can be managed or inline. Resource-based policies are attached to a resource. For example, you can attach resource-based policies to Amazon S3 buckets, Amazon SQS queues, and AWS Key Management Service encryption keys. For a list of services that support resource-based policies, see AWS Services That Work with IAM.

      Identity-Based Policies and Resource-Based Policies

    1. gp2 is the default EBS volume type for Amazon EC2 instances. These volumes are backed by solid-state drives (SSDs) and are suitable for a broad range of transactional workloads.


    2. st1 is backed by hard disk drives (HDDs) and is ideal for frequently accessed, throughput-intensive workloads.

      EBS st1

    1. Intrusion detection and intrusion prevention systems monitor events in your network for security threats and stop threats once they are detected.


    1. Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL.

      query data from s3

    1. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources. Systems Manager simplifies resource and application management, shortens the time to detect and resolve operational problems, and makes it easy to operate and manage your infrastructure securely at scale.
    1. AWS Trusted Advisor is an application that draws upon best practices learned from AWS’ aggregated operational history of serving hundreds of thousands of AWS customers. Trusted Advisor inspects your AWS environment and makes recommendations for saving money, improving system performance, or closing security gaps. 
    1. Your client’s CloudWatch Logs configuration receives logs and data from on-premises monitoring systems and agents installed in operating systems. A new team wants to use CloudWatch to also monitor Amazon EC2 instance performance and state changes of EC2 instances, such as instance creation, instance power-off, and instance termination. This solution should also be able to notify the team of any state changes for troubleshooting.
    1. Chef and Puppet Puppet is a powerful enterprise-grade configuration management tool. Both Chef and Puppet help development and operations teams manage applications and infrastructure. However they have important differences you should understand when evaluating which one is right for you.

      aws chef puppet




    1. In addition to strings, Redis supports lists, sets, sorted sets, hashes, bit arrays, and hyperloglogs. Applications can use these more advanced data structures to support a variety of use cases. For example, you can use Redis Sorted Sets to easily implement a game leaderboard that keeps a list of players sorted by their rank.

      Redis supports more data structures; Memcached is key-value only

      Memcached is not highly available because it lacks the replication support Redis has
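
The leaderboard idea mentioned in the quote can be mimicked in pure Python (with Redis itself this would be `ZADD` and `ZREVRANGE`; the class below is only a sketch of the sorted-set semantics, not a Redis client):

```python
# Pure-Python sketch of a Redis Sorted Set leaderboard: members keep one score,
# and reads return members ordered by that score.
class LeaderboardSketch:
    def __init__(self):
        self.scores = {}                       # member -> score (ZADD semantics)

    def zadd(self, member, score):
        self.scores[member] = score            # re-adding a member updates its score

    def top(self, n):
        """Highest-scoring members first, like ZREVRANGE 0 n-1."""
        ranked = sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)
        return [member for member, _ in ranked[:n]]
```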

    1. - Events can self-trigger based on a schedule; alarms don't do this.
       - Alarms invoke actions only for sustained changes.
       - Alarms watch a single metric and respond to changes in that metric; events can respond to actions (such as a Lambda being created or some other change in your AWS environment).
       - Alarms can be added to CloudWatch dashboards, but events cannot.
       - Events are processed by targets, with many more options than the actions an alarm can trigger.

      Event vs Alarm

    1. SMOKE TESTING is a type of software testing that determines whether the deployed build is stable or not.

      stable or not

    1. Config: understand and monitor your AWS resources. OpsWorks: configure your servers with Chef or Puppet. Very little overlap between the two.
    1. Validating CloudTrail Log File Integrity: To determine whether a log file was modified, deleted, or unchanged after CloudTrail delivered it, you can use CloudTrail log file integrity validation.
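
The core of the check is simple: CloudTrail's signed digest files record a SHA-256 hash for each delivered log file, so recomputing the hash and comparing it to the recorded value reveals tampering. A minimal sketch of that comparison step (not the full digest-chain and signature verification the CLI performs):

```python
import hashlib

# Recompute a log file's SHA-256 hash and compare it against the hash recorded
# in the (already verified) digest file.
def log_file_unchanged(log_bytes: bytes, recorded_hex_digest: str) -> bool:
    return hashlib.sha256(log_bytes).hexdigest() == recorded_hex_digest
```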