192 Matching Annotations
  1. Feb 2024
    1. At a minimum, each ADR should define the context of the decision, the decision itself, and the consequences of the decision for the project and its deliverables

      ADR sections from the example: * Title * Status * Date * Context * Decision * Consequences * Compliance * Notes

  2. Jan 2024
    1. LocalStack is a cloud service emulator that runs AWS services solely on your laptop without connecting to a remote cloud provider .

      https://www.localstack.cloud/

  3. Nov 2023
    1. It should be noted that in France, regulations do not allow this market-based approach when reporting company level CO2e emissions : “The assessment of the impact of electricity consumption in the GHG emissions report is carried out on the basis of the average emission factor of the electrical network (…) The use of any other factor is prohibited. There is therefore no discrimination by [electricity] supplier to be established when collecting the data.” (Regulatory method V5-BEGES decree).

      Companies are barred from using market based approaches for reporting?

      How does it work for Amazon then?

  4. Sep 2023
    1. VPC Subnets belong to a Network ACL that determines if traffic is allowed / denied entry and exit to the ENTIRE subnet
  5. Aug 2023
    1. AWS infra changes also emit events with payloads; we can catch and enrich these events to make decisions in downstream services

  6. Jun 2023
  7. May 2023
    1. Amazon has a new set of services that includes an LLM called Titan and corresponding cloud/compute services, to roll your own chatbots etc.

  8. Mar 2023
    1. You can freely replace SageMaker services with other components as your project grows and potentially outgrows SageMaker.

  9. Jan 2023
  10. Oct 2022
    1. transferring data across Availability zones within the same region is also a good way to save money

      1) data transfer tip on AWS

    2. When you use a private IP address, you are charged less when compared to a public IP address or Elastic IP address.

      2) data transfer tip on AWS

    3. But when you transfer data from one Amazon region to another, AWS charges you for that. It depends on the AWS region you are and this is the real deciding factor. For example, if you are in the US West(Oregon) region, you have to shell out $0.080/GB whereas in Asia Pacific (Seoul) region it bumps up to $0.135/GB.

      Transferring data in AWS between separate regions is quite costly

    4. When you transfer data between Amazon EC2, Amazon Redshift, Amazon RDS, Amazon Network Interfaces, and Amazon Elasticache, you have to pay zero charges if they are within the same Availability Zone.

      Transferring data in AWS within the same AZ is free

    5. When you transfer data from the internet to AWS, it is free of charge. AWS services like EC2 instances, S3 storage, or RDS instances, when you transfer data from the Internet into these you don’t have to pay any charge for it. However, if you transfer data using Elastic IPv4 address or peered VPC using an IPv6 address you will be charged $0.01/gb whenever you transfer data into an EC2 instance. The real catch is when you transfer data out of any of the AWS services. This is where AWS charges you money depending on the area you have chosen and the amount of data you are transferring. Some regions have higher charges than others.

      Data transfer costs on AWS

    1. we made sure to implement fail safes at each stage of the migration to make sure we could fall back if something were to go wrong. It’s also why we tested on a small scale before proceeding with the rest of the migration.

      While planning a big migration, make sure to have a fallback plan

    2. We mirrored PostgreSQL shards storing cached_urls tables in Cassandra. We switched service.prerender.io to Cloudflare load balancer to allow dynamic traffic distribution. We set up new EU private-recache servers. We keep performing stress tests to solve any performance issues.

      Steps of phase 3 migration

    3. “The true hidden price for AWS is coming from the traffic cost, they sell a reasonably priced storage, and it’s even free to upload it. But when you get it out, you pay an enormous cost.

      AWS may be reasonably priced, but moving data out will cost a lot (e.g. $0.080/GB in the US West, or $0.135/GB in the Asia Pacific)!

    4. In the last four weeks, we moved most of the cache workload from AWS S3 to our own Cassandra cluster.

      Moving from AWS S3 to their own Cassandra cluster

    5. After testing whether Prerender pages could be cached in both S3 and minio, we slowly diverted traffic away from AWS S3 and towards minio.

      Moving from AWS S3 towards minio

    6. Phase 1 mostly involved setting up the bare metal servers and testing the migration on a small and more manageable setting before scaling. This phase required minimal software adaptation, which we decided to run on KVM virtualization on Linux.

      Migration from AWS to on-prem started by:

      • setting up bare metal servers
      • testing on a small scale
      • adapting software to run on KVM virtualization on Linux

    7. The solution? Migrate the cached pages and traffic onto Prerender’s own internal servers and cut our reliance on AWS as quickly as possible.

      When the Prerender team moved from AWS to on-prem, they cut the data storage and traffic cost from $1,000,000 to $200,000

  11. Aug 2022
    1. A data lake is different, because it stores relational data from line of business applications, and non-relational data from mobile apps, IoT devices, and social media.

      A data lake vs a data warehouse

    2. The data structure, and schema are defined in advance to optimize for fast SQL queries, where the results are typically used for operational reporting and analysis. Data is cleaned, enriched, and transformed so it can act as the “single source of truth”

      Data warehouse

  12. Jun 2022
  13. Nov 2021
    1. We implemented a bash script to be installed in the master node of the EMR cluster, and the script is scheduled to run every 5 minutes. The script monitors the clusters and sends a CUSTOM metric EMR-INUSE (0=inactive; 1=active) to CloudWatch every 5 minutes. If CloudWatch receives 0 (inactive) for some predefined set of data points, it triggers an alarm, which in turn executes an AWS Lambda function that terminates the cluster.

      A solution to terminate idle EMR clusters; note that EMR now supports an auto-termination policy out of the box
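
      A minimal Python sketch of the same idea (the original used a bash script; the namespace and dimension names here are assumptions):

      import boto3

      cloudwatch = boto3.client("cloudwatch")

      def report_emr_in_use(cluster_id: str, active: bool) -> None:
          # Send the custom EMR-INUSE metric (0=inactive; 1=active) so a
          # CloudWatch alarm can fire after enough consecutive 0 data points.
          cloudwatch.put_metric_data(
              Namespace="Custom/EMR",  # assumed namespace
              MetricData=[{
                  "MetricName": "EMR-INUSE",
                  "Dimensions": [{"Name": "ClusterId", "Value": cluster_id}],
                  "Value": 1.0 if active else 0.0,
              }],
          )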

  14. Oct 2021
    1. So, while DELETE operations are free, LIST operations (to get a list of objects) are not free (~$.005 per 1000 requests, varying a bit by region).

      Deleting buckets on S3 is not free: whether you use the web console or the AWS CLI, a LIST call is executed per 1,000 objects
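
      A rough boto3 sketch of why: emptying a bucket pages through the keys (one billable LIST request per 1,000 objects) before issuing the free batched DELETEs:

      import boto3

      s3 = boto3.client("s3")

      def empty_bucket(bucket: str) -> None:
          # Each page is one LIST request (billable) returning up to 1,000 keys;
          # the DeleteObjects calls themselves are free.
          paginator = s3.get_paginator("list_objects_v2")
          for page in paginator.paginate(Bucket=bucket):
              objects = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
              if objects:
                  s3.delete_objects(Bucket=bucket, Delete={"Objects": objects})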

    1. few battle-hardened options, for instance: Airflow, a popular open-source workflow orchestrator; Argo, a newer orchestrator that runs natively on Kubernetes, and managed solutions such as Google Cloud Composer and AWS Step Functions.

      Current top orchestrators:

      • Airflow
      • Argo
      • Google Cloud Composer
      • AWS Step Functions
  15. Sep 2021
  16. Aug 2021
  17. Jun 2021
  18. Apr 2021
  19. Mar 2021
    1. Werner Vogels, Amazon CTO, notes that one of the lessons we have learned at Amazon is to expect the unexpected. He reminds us that failures are a given, and as a consequence it’s desirable to build systems that embrace failure as a natural occurrence. Coding around these failures is important, but undifferentiated, work that improves the integrity of the solution being delivered. However, it takes time away from investing in differentiating code.

      This is an annotation I made.

    2. When asked to define the role of the teacher, for example, Reggio educators do not begin in the way typical to

      This is an annotation I made.

    3. This is an annotation I made.

    4. This is an annotation I made.

    5. This is an annotation I made.

    1. Another application that demands extreme reliability is the configuration of foundational components from AWS, such as Network Load Balancers. When a customer makes a change to their Network Load Balancer, such as adding a new instance or container as a target, it is often critical and urgent. The customer might be experiencing a flash crowd and needs to add capacity quickly. Under the hood, Network Load Balancers run on AWS Hyperplane, an internal service that is embedded in the Amazon Elastic Compute Cloud (EC2) network. AWS Hyperplane could handle configuration changes by using a workflow. So, whenever a customer makes a change, the change is turned into an event and inserted into a workflow that pushes that change out to all of the AWS Hyperplane nodes that need it. They can then ingest the change.

      This article clearly describes the functionality of AWS ELB

  20. Feb 2021
  21. Jan 2021
    1. Zappos created models to predict customer apparel sizes, which are cached and exposed at runtime via microservices for use in recommendations.

      Another company, Virtusize, is doing the same thing: size prediction and recommendation

    1. Note: If your DNS does not allow you to add “@” as the hostname, please try leaving this field blank when you enter the ProtonMail verification information.

      If you're using AWS Route53, the console will silently accept the @ for the host, but the record WILL BE INVALID. You will need to follow this guidance to complete DNS configuration for ProtonMail.

  22. Nov 2020
    1. The details of what goes into a policy vary for each service, depending on what actions the service makes available, what types of resources it contains, and so on.

      This means that some kinds of validation cannot be done on write. For example, I've been able to write Resource values that contain invalid characters.

  23. Sep 2020
  24. Aug 2020
  25. Jun 2020
    1. Serverless may have a confusing name and might have people believe that it is “server-less”, but it is still an impressive architecture with various benefits. From a business’ perspective, the best advantage of going serverless is reduced time-to-market. Others being less operational costs, no infrastructural management and efficiency.

      https://ateam-texas.com/serverless-architecture-advantage-of-going-serverless-for-your-next-app-development/

    1. The best all-around performer is AWS CloudFront, followed closely by GitHub Pages. Not only do they have the fastest response times (median), they’re also the most consistent. They are, however, closely followed by Google Cloud Storage. Interestingly, there is very little difference between a regional and multi-regional bucket. The only reason to pick a multi-regional bucket would be the additional uptime guarantee. Cloudflare didn’t perform as well as I would’ve expected.

      Results of static webhosting benchmark (2020 May):

      1. AWS CloudFront
      2. GitHub Pages
      3. Google Cloud Storage
  26. May 2020
    1. Amazon Machine Learning Deprecated. Use SageMaker instead.

      Instead of Amazon Machine Learning, use Amazon SageMaker

    1. My friends ask me if I think Google Cloud will catch up to its rivals. Not only do I think so — I’m positive five years down the road it will surpass them.

      GCP more popular than AWS in 2025?

    2. So if GCP is so much better, why so many more people use AWS?

      Why so many people use AWS:

      • they were first
      • aggressive expansion of product line
      • following the crowd
      • fear of not getting a job based on GCP
      • fear that GCP may be abandoned by Google
    3. As I mentioned I think that AWS certainly offers a lot more features, configuration options and products than GCP does, and you may benefit from some of them. Also AWS releases products at a much faster speed. You can certainly do more with AWS, there is no contest here. If for example you need a truck with a server inside or a computer sent over to your office so you can dump your data inside and return it to Amazon, then AWS is for you. AWS also has more flexibility in terms of location of your data centres.

      Advantages of AWS over GCP:

      • a lot more features (but are they necessary for you?)
      • a lot more configuration options
      • a lot more products
      • releases products at a much faster speed
      • you can simply do more with AWS
      • offers AWS Snowmobile
      • more flexibility in terms of your data centres
    4. Both AWS and GCP are very secure and you will be okay as long as you are not careless in your design. However GCP for me has an edge in the sense that everything is encrypted by default.

      Encryption is on by default in GCP

    5. I felt that performance was almost always better in GCP, for example copying from instances to buckets in GCP is INSANELY fast

      Performance-wise, GCP also seems to beat AWS

    6. AWS charges substantially more for their services than GCP does, but most people ignore the real high cost of using AWS, which is; expertise, time and manpower.

      AWS is more costly and requires more time and manpower than GCP

    7. GCP provides a smaller set of core primitives that are global and work well for lots of use cases. Pub/Sub is probably the best example I have for this. In AWS you have SQS, SNS, Amazon MQ, Kinesis Data Streams, Kinesis Data Firehose, DynamoDB Streams, and maybe another queueing service by the time you read this post. 2019 Update: Amazon has now released another streaming service: Amazon Managed Streaming Kafka.

      Pub/Sub of GCP might be enough to replace most (all?) of the following Amazon products: SQS, SNS, Amazon MQ, Kinesis Data Streams, Kinesis Data Firehose, DynamoDB Streams, Amazon Managed Streaming Kafka

    8. At the time of writing this, there are 169 AWS products compared to 90 in GCP.

      AWS has more products than GCP, but that's not necessarily good since some nearly duplicate each other

    9. Spinning an EKS cluster gives you essentially a brick. You have to spin your own nodes on the side and make sure they connect with the master, which is a lot of work for you to do on top of the promise of “managed”

      Managing Kubernetes on AWS (EKS) also isn't as effective as on GCP (GKE)

    10. You can forgive the documentation in AWS being a nightmare to navigate for being a mere reflection of the confusing mess that it is trying to describe. Whenever you are trying to solve a simple problem far too often you end up drowning in reference pages, the experience is like asking for a glass of water and being hosed down with a fire hydrant.

      Great documentation is contextual, not referential (like AWS's)

    11. Jeff Bezos is an infamous micro-manager. He micro-manages every single pixel of Amazon’s retail site. He hired Larry Tesler, Apple’s Chief Scientist and probably the very most famous and respected human-computer interaction expert in the entire world, and then ignored every goddamn thing Larry said for three years until Larry finally — wisely — left the company. Larry would do these big usability studies and demonstrate beyond any shred of doubt that nobody can understand that frigging website, but Bezos just couldn’t let go of those pixels, all those millions of semantics-packed pixels on the landing page. They were like millions of his own precious children. So they’re all still there, and Larry is not.

      A case of why AWS doesn't look the way it's supposed to

    12. The AWS interface looks like it was designed by a lonesome alien living in an asteroid who once saw a documentary about humans clicking with a mouse. It is confusing, counterintuitive, messy and extremely overcrowded.

      :)

    13. After you login with your token you then need to create a script to give you a 12 hour session, and you need to do this every day, because there is no way to extend this.

      One of the complications of using the AWS CLI with 2FA (not the case with GCP)
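
      A boto3 sketch of the daily ritual (the MFA device ARN and token code are placeholders):

      import boto3

      sts = boto3.client("sts")

      # Request the 12-hour session described above; the temporary
      # credentials must be re-exported for the CLI every day.
      response = sts.get_session_token(
          DurationSeconds=12 * 60 * 60,
          SerialNumber="arn:aws:iam::123456789012:mfa/alice",  # placeholder
          TokenCode="123456",  # current code from the MFA device
      )
      creds = response["Credentials"]
      print(creds["AccessKeyId"], creds["SecretAccessKey"], creds["SessionToken"])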

    14. In GCP you have one master account/project that you can use to manage the rest of your projects, you log in with your company google account and then you can set permissions to any project however you want.

      Setting up account permissions for projects in GCP is far better than in AWS

    15. It’s not that AWS is harder to use than GCP, it’s that it is needlessly hard; a disjointed sprawl of infrastructure primitives with poor cohesion between them.

      AWS management isn't as straightforward as GCP's

    1. A portfolio is a collection of products, together with configuration information. Portfolios help manage product configuration, and who can use specific products and how they can use them. With AWS Service Catalog, you can create a customized portfolio for each type of user in your organization and selectively grant access to the appropriate portfolio.
    1. Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events.

      Firehose is different; it loads streaming data into:

      • Elasticsearch
      • S3
      • Redshift
    1. Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk.

    1. DynamoDB Streams enables solutions such as these, and many others. DynamoDB Streams captures a time-ordered sequence of item-level modifications in any DynamoDB table and stores this information in a log for up to 24 hours.

      record db item changes

    1. AWS OpsWorks Stacks uses Chef cookbooks to handle tasks such as installing and configuring packages and deploying apps.
    1. Your Amazon Athena query performance improves if you convert your data into open source columnar formats, such as Apache Parquet

      For Athena performance on S3 data, use columnar formats

    1. Amazon Macie is a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS.
    1. Amazon AppStream 2.0 is a fully managed application streaming service. You centrally manage your desktop applications on AppStream 2.0 and securely deliver them to any computer.

      For streaming apps

    1. A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
    1. Endpoint policies are currently supported by CodeBuild, CodeCommit, ELB API, SQS, SNS, CloudWatch Logs, API Gateway, SageMaker notebooks, SageMaker API, SageMaker Runtime, Cloudwatch Events and Kinesis Firehose.
    1. Using VPC endpoint policies A VPC endpoint policy is an IAM resource policy that you attach to an endpoint when you create or modify the endpoint. If you do not attach a policy when you create an endpoint, we attach a default policy for you that allows full access to the service. If a service does not support endpoint policies, the endpoint allows full access to the service. An endpoint policy does not override or replace IAM user policies or service-specific policies (such as S3 bucket policies). It is a separate policy for controlling access from the endpoint to the specified service.
    1. An interface VPC endpoint (interface endpoint) enables you to connect to services powered by AWS PrivateLink.

      Lets you connect to AWS services from a private VPC

    1. You can associate a health check with an alias record instead of or in addition to setting the value of Evaluate Target Health to Yes. However, it's generally more useful if Route 53 responds to queries based on the health of the underlying resources—the HTTP servers, database servers, and other resources that your alias records refer to. For example, suppose the following configuration:

      AWS Route 53: evaluate target health

    1. For a non-proxy integration, you must set up at least one integration response, and make it the default response, to pass the result returned from the backend to the client. You can choose to pass through the result as-is or to transform the integration response data to the method response data if the two have different formats. For a proxy integration, API Gateway automatically passes the backend output to the client as an HTTP response. You do not set either an integration response or a method response.

      integration vs method response

    1. Set up method response status code The status code of a method response defines a type of response. For example, responses of 200, 400, and 500 indicate successful, client-side error and server-side error responses, respectively.

      method response status code

    1. AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics.

    1. You can use organizational units (OUs) to group accounts together to administer as a single unit. This greatly simplifies the management of your accounts.

      AWS organizational units (OUs)

    1. What is AWS Elastic Beanstalk?

      AWS PaaS

    2. Because AWS Elastic Beanstalk performs an in-place update when you update your application versions, your application can become unavailable to users for a short period of time. You can avoid this downtime by performing a blue/green deployment, where you deploy the new version to a separate environment, and then swap CNAMEs of the two environments to redirect traffic to the new version instantly.

      CNAME swap
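
      A minimal boto3 sketch of the swap (the environment names are hypothetical):

      import boto3

      eb = boto3.client("elasticbeanstalk")

      # After deploying the new version to the green environment, swap CNAMEs
      # so traffic moves to it instantly; blue stays around for rollback.
      eb.swap_environment_cnames(
          SourceEnvironmentName="my-app-blue",
          DestinationEnvironmentName="my-app-green",
      )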

    1. Using AWS SCT to convert objects (tables, indexes, constraints, functions, and so on) from the source commercial engine to the open-source engine. Using AWS DMS to move data into the appropriate converted objects and keep the target database in complete sync with the source. Doing this takes care of the production workload while the migration is ongoing.

      DMS vs SCT

      Data Migration Service vs Schema Conversion Tool

      DMS keeps the source and target DBs in sync

    1. When an instance is stopped and restarted, the Host affinity setting determines whether it's restarted on the same, or a different, host.

      The host affinity setting helps manage Dedicated Hosts

    1. When to consider AWS Snowball, given the available internet connection and the theoretical minimum number of days to transfer 100TB at 80% network utilization:

      • T3 (44.736Mbps): 269 days; Snowball for 2TB or more
      • 100Mbps: 120 days; Snowball for 5TB or more
      • 1000Mbps: 12 days; Snowball for 60TB or more

      When to use Snowball: even at 1000Mbps (12 days for 100TB), from 60TB or more

    1. Amazon WorkSpaces is a managed, secure Desktop-as-a-Service (DaaS) solution. You can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe.
    1. The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users)

      aws resource

    1. AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted.
    1. For example, assume that you have a load balancer configuration that you use for most of your stacks. Instead of copying and pasting the same configurations into your templates, you can create a dedicated template for the load balancer. Then, you just use the resource to reference that template from within other templates.

      nested stack

    1. AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud.
    1. Expedited retrieval allows you to quickly access your data when you need to have almost immediate access to your information. This retrieval type can be used for archives up to 250MB. Expedited retrieval usually completes within 1 and 5 minutes.

      https://aws.amazon.com/glacier/faqs/

      3 types of retrieval

      expedited: 1-5 minutes

    1. TGW coupled with AWS Resource Access Manager will allow you to use a single Transit Gateway across multiple AWS accounts, however, it’s still limited to a single region.

      TGW: crosses multiple accounts (but still a single region)

    2. Direct Connect Gateway – DGW DGW builds upon VGW capabilities adding the ability to connect VPCs in one region to a Direct Connect in another region. CIDR addresses cannot overlap. In addition, traffic will not route from VPC-A to the Direct Connect Gateway and to VPC-B. Traffic will have to route from the VPC-A —> Direct Connect —-> Data Centre Router —-> Direct Connect —> VPC-B.

      Beyond VGW: connects VPCs in one region to a Direct Connect in another region

    3. Virtual Private Gateway – VGW The introduction of the VGW introduced the ability to let multiple VPCs, in the same region, on the same account, share a Direct Connect. Prior to this, you’d need a Direct Connect Private Virtual Interface (VIF) for each VPC, establishing a 1:1 correlation, which didn’t scale well both in terms of cost and administrative overhead.  VGW became a solution that reduced the expense of requiring new Direct Connect circuits for each VPC as long as both VPCs were in the same region, on the same account. This construct can be used with either Direct Connect or the Site-to-Site VPN.

      VGW saves Direct Connect fees by using one connection for all VPCs in the same region

    4. AWS VGW vs DGW vs TGW

    1. In general, bucket owners pay for all Amazon S3 storage and data transfer costs associated with their bucket. A bucket owner, however, can configure a bucket to be a Requester Pays bucket. With Requester Pays buckets, the requester instead of the bucket owner pays the cost of the request and the data download from the bucket. The bucket owner always pays the cost of storing data.

      Requester Pays
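
      Enabling it is a single call; a boto3 sketch (the bucket name is a placeholder):

      import boto3

      s3 = boto3.client("s3")

      # After this, requesters pay request and download costs;
      # the bucket owner still pays for storage.
      s3.put_bucket_request_payment(
          Bucket="my-public-dataset",
          RequestPaymentConfiguration={"Payer": "Requester"},
      )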

    1. When CloudFront receives a request, you can use a Lambda function to generate an HTTP response that CloudFront returns directly to the viewer without forwarding the response to the origin. Generating HTTP responses reduces the load on the origin, and typically also reduces latency for the viewer.

      Can be helpful for auth
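
      A sketch of a viewer-request Lambda@Edge handler that answers from the edge when an (assumed) auth header is missing, so the origin is never contacted:

      def handler(event, context):
          request = event["Records"][0]["cf"]["request"]
          headers = request.get("headers", {})
          # No authorization header: generate the response at the edge.
          if "authorization" not in headers:
              return {
                  "status": "401",
                  "statusDescription": "Unauthorized",
                  "body": "Authentication required",
              }
          # Otherwise pass the request through to the cache/origin.
          return request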

    1. Amazon S3 event notifications are designed to be delivered at least once. Typically, event notifications are delivered in seconds but can sometimes take a minute or longer.

      S3 event notifications might take a minute or longer

      BTW, CloudWatch does not support S3, but CloudTrail does
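
      Since delivery is at least once, consumers should be idempotent. A sketch keyed on the object key plus sequencer (the in-memory set stands in for a durable store such as DynamoDB):

      def process_object(record):
          # Hypothetical business logic.
          print("processing", record["s3"]["object"]["key"])

      processed = set()  # stand-in for a durable deduplication store

      def handler(event, context):
          for record in event["Records"]:
              obj = record["s3"]["object"]
              # key + sequencer identifies the event, so re-deliveries
              # of the same notification can be dropped safely.
              event_id = (obj["key"], obj.get("sequencer"))
              if event_id in processed:
                  continue
              processed.add(event_id)
              process_object(record)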

    1. By default, Amazon Redshift has excellent tools to back up your cluster via snapshot to Amazon Simple Storage Service (Amazon S3). These snapshots can be restored in any AZ in that region or transferred automatically to other regions for disaster recovery. Amazon Redshift can even prioritize data being restored from Amazon S3 based on the queries running against a cluster that is still being restored.

      Redshift is single-AZ

    1. For this setup, do the following: 1.    Create a custom AWS Identity and Access Management (IAM) policy and execution role for your Lambda function. 2.    Create Lambda functions that stop and start your EC2 instances. 3.    Create CloudWatch Events rules that trigger your function on a schedule. For example, you could create a rule to stop your EC2 instances at night, and another to start them again in the morning.
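
      A sketch of the pair of Lambda handlers from step 2 (instance IDs and region are placeholders):

      import boto3

      INSTANCES = ["i-0123456789abcdef0"]  # placeholder instance IDs
      ec2 = boto3.client("ec2", region_name="us-east-1")

      def stop_handler(event, context):
          # Triggered by the scheduled CloudWatch Events rule at night.
          ec2.stop_instances(InstanceIds=INSTANCES)

      def start_handler(event, context):
          # Triggered by the scheduled CloudWatch Events rule in the morning.
          ec2.start_instances(InstanceIds=INSTANCES)
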
    1. FIFO queues also provide exactly-once processing but have a limited number of transactions per second (TPS):

      Standard queues do not guarantee exactly-once processing

  27. Apr 2020
    1. One way to put it is this: LSI - allows you to perform a query on a single Hash-Key while using multiple different attributes to "filter" or restrict the query. GSI - allows you to perform queries on multiple Hash-Keys in a table, but costs extra in throughput, as a result.

      Secondary indexes: LSI vs GSI

    1. Cognito authorizers–Amazon Cognito user pools provide a set of APIs that you can integrate into your application to provide authentication. User pools are intended for mobile or web applications where you handle user registration and sign-in directly in the application. To use an Amazon Cognito user pool with your API, you must first create an authorizer of the COGNITO_USER_POOLS authorizer type, and then configure an API method to use that authorizer. After a user is authenticated against the user pool, they obtain an Open ID Connect token, or OIDC token, formatted in a JSON web token. Users who have signed in to your application will have tokens provided to them by the user pool. Then, your application can use that token to inject information into a header in subsequent API calls that you make against your API Gateway endpoint. The API call succeeds only if the required token is supplied and the supplied token is valid. Otherwise, the client isn't authorized to make the call, because the client did not have credentials that could be authorized.

    2. IAM authorizers–All requests are required to be signed using the AWS Version 4 signing process (also known as SigV4). The process uses your AWS access key and secret key to compute an HMAC signature using SHA-256. You can obtain these keys as an AWS Identity and Access Management (IAM) user or by assuming an IAM role. The key information is added to the Authorization header, and behind the scenes, API Gateway takes that signed request, parses it, and determines whether or not the user who signed the request has the IAM permissions to invoke your API.

    3. Lambda authorizers–A Lambda authorizer is simply a Lambda function that you can write to perform any custom authorization that you need. There are two types of Lambda authorizers: token and request parameter. When a client calls your API, API Gateway verifies whether a Lambda authorizer is configured for the API method. If it is, API Gateway calls the Lambda function. In this call, API Gateway supplies the authorization token (or the request parameters, based on the type of authorizer), and the Lambda function returns a policy that allows or denies the caller’s request. API Gateway also supports an optional policy cache that you can configure for your Lambda authorizer. This feature increases performance by reducing the number of invocations of your Lambda authorizer for previously authorized tokens. And with this cache, you can configure a custom time to live (TTL). To make it easy to get started with this method, you can choose the API Gateway Lambda authorizer blueprint when creating your authorizer function from the Lambda console.
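
      A minimal token-type Lambda authorizer sketch returning the allow/deny policy described above (the token check is a placeholder):

      def handler(event, context):
          # API Gateway supplies the token and the ARN of the called method.
          token = event.get("authorizationToken")
          effect = "Allow" if token == "allow-me" else "Deny"  # placeholder check
          return {
              "principalId": "user",
              "policyDocument": {
                  "Version": "2012-10-17",
                  "Statement": [{
                      "Action": "execute-api:Invoke",
                      "Effect": effect,
                      "Resource": event["methodArn"],
                  }],
              },
          }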

    1. DynamoDB supports two types of secondary indexes: Global secondary index — An index with a partition key and a sort key that can be different from those on the base table. A global secondary index is considered "global" because queries on the index can span all of the data in the base table, across all partitions. A global secondary index is stored in its own partition space away from the base table and scales separately from the base table. Local secondary index — An index that has the same partition key as the base table, but a different sort key. A local secondary index is "local" in the sense that every partition of a local secondary index is scoped to a base table partition that has the same partition key value.
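
      A boto3 sketch of a table with one of each (table, index, and attribute names are made up): the LSI shares the table's partition key with an alternative sort key, while the GSI has its own keys and its own throughput:

      import boto3

      dynamodb = boto3.client("dynamodb")

      dynamodb.create_table(
          TableName="GameScores",
          AttributeDefinitions=[
              {"AttributeName": "UserId", "AttributeType": "S"},
              {"AttributeName": "GameTitle", "AttributeType": "S"},
              {"AttributeName": "Score", "AttributeType": "N"},
          ],
          KeySchema=[
              {"AttributeName": "UserId", "KeyType": "HASH"},
              {"AttributeName": "GameTitle", "KeyType": "RANGE"},
          ],
          # LSI: same partition key (UserId), alternative sort key (Score).
          LocalSecondaryIndexes=[{
              "IndexName": "UserScores",
              "KeySchema": [
                  {"AttributeName": "UserId", "KeyType": "HASH"},
                  {"AttributeName": "Score", "KeyType": "RANGE"},
              ],
              "Projection": {"ProjectionType": "ALL"},
          }],
          # GSI: different partition key; queries span all partitions.
          GlobalSecondaryIndexes=[{
              "IndexName": "ByGameTitle",
              "KeySchema": [
                  {"AttributeName": "GameTitle", "KeyType": "HASH"},
                  {"AttributeName": "Score", "KeyType": "RANGE"},
              ],
              "Projection": {"ProjectionType": "ALL"},
              "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
          }],
          ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
      )
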
    1. Amazon SQS supports dead-letter queues, which other queues (source queues) can target for messages that can't be processed (consumed) successfully. Dead-letter queues are useful for debugging your application or messaging system because they let you isolate problematic messages to determine why their processing doesn't succeed.
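
      A boto3 sketch wiring a source queue to a dead-letter queue via a redrive policy (queue names and maxReceiveCount are arbitrary):

      import boto3, json

      sqs = boto3.client("sqs")

      # The DLQ is an ordinary queue; the source queue's RedrivePolicy moves
      # messages here after maxReceiveCount failed receives.
      dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
      dlq_arn = sqs.get_queue_attributes(
          QueueUrl=dlq_url, AttributeNames=["QueueArn"]
      )["Attributes"]["QueueArn"]

      sqs.create_queue(
          QueueName="orders",
          Attributes={
              "RedrivePolicy": json.dumps(
                  {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
              )
          },
      )
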
    1. Amazon Lex is a service for building conversational interfaces into any application using voice and text
    1. A company runs a memory-intensive analytics application using on-demand Amazon EC2 C5 compute optimized instance. The application is used continuously and application demand doubles during working hours. The application currently scales based on CPU usage. When scaling in occurs, a lifecycle hook is used because the instance requires 4 minutes to clean the application state before terminating. Because users reported poor performance during working hours, scheduled scaling actions were implemented so additional instances would be added during working hours. The Solutions Architect has been asked to reduce the cost of the application. Which solution is MOST cost-effective?

      Should be A here, because C5 is 40% cheaper than R5

    1. When a user in an AWS account creates a blockchain network on Amazon Managed Blockchain, they also create the first member in the network. This first member has no peer nodes associated with it until you create them. After you create the network and the first member, you can use that member to create an invitation proposal for other members in the same AWS account or in other AWS accounts. Any member can create an invitation proposal.

      about members of blockchain

    1. AWS Step Functions is a fully managed service that makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Instead of writing a Decider program, you define state machines in JSON. AWS customers should consider using Step Functions for new applications. If Step Functions does not fit your needs, then you should consider Amazon Simple Workflow (SWF)
    2. Workers are programs that interact with Amazon SWF to get tasks, process received tasks, and return the results. The decider is a program that controls the coordination of tasks,

      SWF worker and decider

    1. SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. SWF helps developers build, run, and scale background jobs that have parallel or sequential steps. You can think of Amazon SWF as a fully-managed state tracker and task coordinator in the Cloud.

    1. SNS is a distributed publish-subscribe system. Messages are pushed to subscribers as and when they are sent by publishers to SNS. SQS is distributed queuing system. Messages are NOT pushed to receivers. Receivers have to poll or pull messages from SQS.
    1. Amazon SimpleDB passes on to you the financial benefits of Amazon’s scale. You pay only for resources you actually consume. For Amazon SimpleDB, this means data store reads and writes are charged by compute resources consumed by each operation, and you aren’t billed for compute resources when you aren’t actively using them (i.e. making requests).
    1. While SimpleDB has scaling limitations, it may be a good fit for smaller workloads that require query flexibility. Amazon SimpleDB automatically indexes all item attributes and thus supports query flexibility at the cost of performance and scale.

      SimpleDB vs DynamoDB

    1. An elastic network interface (referred to as a network interface in this documentation) is a logical networking component in a VPC that represents a virtual network card.
    1. Identity-based policies are attached to an IAM user, group, or role. These policies let you specify what that identity can do (its permissions). For example, you can attach the policy to the IAM user named John, stating that he is allowed to perform the Amazon EC2 RunInstances action. The policy could further state that John is allowed to get items from an Amazon DynamoDB table named MyCompany. You can also allow John to manage his own IAM security credentials. Identity-based policies can be managed or inline. Resource-based policies are attached to a resource. For example, you can attach resource-based policies to Amazon S3 buckets, Amazon SQS queues, and AWS Key Management Service encryption keys. For a list of services that support resource-based policies, see AWS Services That Work with IAM.

      Identity-Based Policies and Resource-Based Policies

    1. gp2 is the default EBS volume type for Amazon EC2 instances. These volumes are backed by solid-state drives (SSDs) and are suitable for a broad range of transactional workloads,

      gp2

    2. st1 is backed by hard disk drives (HDDs) and is ideal for frequently accessed

      EBS st1

    1. Intrusion detection and intrusion prevention systems Monitor events in your network for security threats and stop threats once detected.

      IDS/IPS

    1. Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL.

      query data from s3

    1. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources. Systems Manager simplifies resource and application management, shortens the time to detect and resolve operational problems, and makes it easy to operate and manage your infrastructure securely at scale.
    1. AWS Trusted Advisor is an application that draws upon best practices learned from AWS’ aggregated operational history of serving hundreds of thousands of AWS customers. Trusted Advisor inspects your AWS environment and makes recommendations for saving money, improving system performance, or closing security gaps. 
    1. Your client’s CloudWatch Logs configuration receives logs and data from on-premises monitoring systems and agents installed in operating systems. A new team wants to use CloudWatch to also monitor Amazon EC2 instance performance and state changes of EC2 instances, such as instance creation, instance power-off, and instance termination. This solution should also be able to notify the team of any state changes for troubleshooting.
    1. Chef and Puppet Puppet is a powerful enterprise-grade configuration management tool. Both Chef and Puppet help development and operations teams manage applications and infrastructure. However they have important differences you should understand when evaluating which one is right for you.

      aws chef puppet

    1. In addition to strings, Redis supports lists, sets, sorted sets, hashes, bit arrays, and hyperloglogs. Applications can use these more advanced data structures to support a variety of use cases. For example, you can use Redis Sorted Sets to easily implement a game leaderboard that keeps a list of players sorted by their rank.

      Redis supports more data structures; Memcached is key-value only

      Memcached is not highly available because it lacks replication support, unlike Redis
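
      For example, the leaderboard from the quote takes a few lines with redis-py (assuming a Redis server on localhost):

      import redis

      r = redis.Redis()  # assumes localhost:6379

      # Sorted set: member -> score, kept ordered by score.
      r.zadd("leaderboard", {"alice": 3200, "bob": 2800, "carol": 4100})

      # Top 3 players, highest score first.
      print(r.zrevrange("leaderboard", 0, 2, withscores=True))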

    1. Events can self-trigger based on a schedule; alarms don't do this. Alarms invoke actions only for sustained changes. Alarms watch a single metric and respond to changes in that metric; events can respond to actions (such as a lambda being created or some other change in your AWS environment). Alarms can be added to CloudWatch dashboards, but events cannot. Events are processed by targets, with many more options than the actions an alarm can trigger.

      Event vs Alarm

    1. SMOKE TESTING is a type of software testing that determines whether the deployed build is stable or not.

      stable or not

    1. Config: understand and monitor your AWS resources. OpsWorks: configure your servers with Chef or Puppet. Very little overlap between the two.
    1. Validating CloudTrail Log File Integrity: To determine whether a log file was modified, deleted, or unchanged after CloudTrail delivered it, you can use CloudTrail log file integrity validation. This feature is built using industry standard algorithms: SHA-256 for hashing and SHA-256 with RSA for digital signing. This makes it computationally infeasible to modify, delete or forge CloudTrail log files without detection. You can use the AWS CLI to validate the files in the location where CloudTrail delivered them.

      Use this to help detect a potential security issue where someone modifies the logs

      Guards against tampering

    1. AWS CloudTrail is mainly concerned with “Who did what on AWS?” and the API calls to the service or resource. AWS CloudWatch is mainly concerned with “What is happening on AWS?” and logging all the events for a particular service or application.

      very good and short

  28. Mar 2020
    1. AWS offers instances with Terabytes of RAM. In this case you still have to manage cloud data buckets, wait for data transfer from bucket to instance every time the instance starts, handle compliance issues that come with putting data on the cloud, and deal with all the inconvenience that come with working on a remote machine. Not to mention the costs, which although start low, tend to pile up as time goes on.

      AWS as a solution to analyse data too big for RAM (like 30-50 GB range). In this case, it's still uncomfortable:

      • managing cloud data buckets
      • waiting for data transfer from bucket to instance every time the instance starts
      • handling compliance issues that come with putting data on the cloud
      • dealing with remote machines
      • costs
  29. Feb 2020
  30. Dec 2019
    1. When building APIs using AWS Lambda, one execution of a Lambda function can serve a single HTTP request
  31. Oct 2019
    1. The X-Forwarded-Proto request header helps you identify the protocol (HTTP or HTTPS) that a client used to connect to your load balancer. Your server access logs contain only the protocol used between the server and the load balancer; they contain no information about the protocol used between the client and the load balancer.

      The load balancer may talk to the server via http so using $scheme in nginx when there's an AWS load balancer in front may lead to the $scheme being unexpectedly http instead of https.

      http {
          # The load balancer sets X-Forwarded-Proto to the protocol the client
          # used; fall back to $scheme when the header is absent (direct access).
          map $http_x_forwarded_proto $original_scheme {
            "" $scheme;
            default $http_x_forwarded_proto;
          }
      }
      
  32. May 2019
    1. When designing the addressing plan for an application, the primary consideration is to keep the CIDR blocks used for creating subnets within a single zone as contiguous as possible
    1. The CIDR block must not be the same or larger than the CIDR range of a route in any of the VPC route tables.
    2. You have a limit on the number of CIDR blocks you can associate with a VPC and the number of routes you can add to a route table. You cannot associate a CIDR block if this results in you exceeding your limits.
      • IPv4 CIDR blocks per VPC: 5. This limit is made up of your primary CIDR block plus 4 secondary CIDR blocks.

      • Route tables per VPC: 200. This limit includes the main route table.

      • Routes per route table (non-propagated routes): 50. You can increase this limit up to a maximum of 1000; however, network performance might be impacted. This limit is enforced separately for IPv4 routes and IPv6 routes. If you have more than 125 routes, we recommend that you paginate calls to describe your route tables for better performance.

    3. You cannot increase or decrease the size of an existing CIDR block.
    4. The allowed block size is between a /28 netmask and /16 netmask.
    5. Adding IPv4 CIDR Blocks to a VPC

      Expanding a VPC IPv4 CIDR block

    1. The permissible size of the block ranges between /16 netmask and a /28 netmask.

      Permissible AWS CIDR block range for AWS VPC

    1. When creating VPCs and VSwitches, you have to specify the private IP address range for the VPC in the form of a Classless Inter-Domain Routing (CIDR) block. Private IP address range of VPC Use 192.168.0.0/16, 172.16.0.0/12, and 10.0.0.0/8 or their subsets as the private IP address range for your VPC. Note the following when planning the private IP address range of VPC: If you have only one VPC and it does not have to communicate with a local data center, you are free to use any of the preceding IP address ranges or their subnets. If you have multiple VPCs, or you want to build a hybrid cloud composed of one or more VPCs and local data centers, we recommend that you use a subset of these standard IP address ranges as the IP address range for your VPC and make sure that the netmask is no larger than /16. You also need to consider whether the classic network is used when selecting a VPC CIDR block. If you plan to connect ECS instances in a classic network with a VPC, we recommend that you do not use the IP address range 10.0.0.0/8, which is also used by the classic network.

      VPC CIDR / IP Addressing plan

    1. def trigger_state_machines(self):

      Gets the state machine ARN mapping ({what the state machine is for}: {state machine ARN}) from the environment variables of the LandingZoneStateMachineTriggerLambda function

  33. Mar 2019
    1. Free private NPM repository with Verdaccio and AWS

      Excellent for understanding, in practice, Cloud Deployment (one of our important subtopics!). You'll also leave the talk with more tools for your utility belt!

  34. Feb 2019
  35. Dec 2018
    1. Amazon isn’t just an online retailer. It’s infrastructure.

      Another point to make is how Amazon's "other" business, Amazon Web Services (AWS) provides a wide array of widely used web infrastructure. AWS commercial infrastructure (among others) increasingly provides the digital infrastructure used by both public and private systems.

  36. Nov 2018
  37. Jul 2018
  38. May 2018
  39. Jan 2018
  40. Nov 2017
    1. Lambda@Edge lets you run Lambda functions at AWS Regions and Amazon CloudFront edge locations in response to CloudFront events

      Extremely happy to see such an amazing opportunity, which I think will help create fine-grained APIs that are fast and can leverage caching strategies cheaply.

  41. Oct 2017
  42. Sep 2017
  43. Jul 2017
  44. Jun 2017
  45. May 2017
  46. Mar 2017
  47. Dec 2016
    1. If you wish to run more than 20 On-Demand instances, complete the Amazon EC2 instance request form.
    1. Amazon SQS can help you build a distributed application with decoupled components, working closely with the Amazon Elastic Compute Cloud (Amazon EC2) and other AWS infrastructure web services.

      Producer EC2 instances put messages on an SQS queue to be consumed by consumer EC2 instances.

    2. With Amazon SQS, you can move data between diverse, distributed application components without losing messages and without requiring each component to be always available.

      Allows decoupling application components. I think it can also be accessed by applications outside the AWS infrastructure.

    3. Amazon SQS offers a reliable, highly-scalable, hosted queue for storing messages in transit between computers

      A managed service with automatic scaling and redundancy. Uses polling to access the messages in the queue.

    4. What is Amazon SQS?

      Amazon Simple Queue Service (Amazon SQS) is a web service that gives you access to message queues that store messages waiting to be processed.

    5. With Amazon SQS, you can quickly build message queuing applications that can run on any computer.
    6. Amazon Simple Queue Service (Amazon SQS) is a web service that gives you access to message queues that store messages waiting to be processed.
  48. May 2016
    1. To set an environment variable The following command sets the value of the "PARAM1" variable in the "my-env" environment to "ParamValue": aws elasticbeanstalk update-environment --environment-name my-env --option-settings Namespace=aws:elasticbeanstalk:application:environment,OptionName=PARAM1,Value=ParamValue The option-settings parameter takes a namespace in addition to the name and value of the variable. Elastic Beanstalk supports several namespaces for options in addition to environment variables.

      Glad to find this invocation, the analog to eb setenv

  49. Jan 2016
    1. Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity. Since Spot instances are often available at a discount compared to On-Demand pricing, you can significantly reduce the cost of running your applications, grow your application’s compute capacity and throughput for the same budget, and enable new types of cloud computing applications.

      Saving money using Amazon EC2 Spot Instances to execute Elastic MapReduce job flows

  50. Mar 2015
    1. Excellent guide for creating a fresh CoreOS image for AWS using Ext4 and OverlayFS.

      This is the future for CoreOS and should be more stable than btrfs.