4 Matching Annotations
  1. Nov 2025
    1. AWS is 10x slower than a dedicated server for the same price
      • Core Argument: Cloud providers, particularly AWS, charge significantly more for base-level compute instances than traditional Virtual Private Server (VPS) providers while delivering substantially less performance. The video argues that horizontal scaling is unnecessary for 95% of businesses.
      • Comparison Setup: The video compared an entry-level AWS instance (EC2 and ECS Fargate) with a similarly specced VPS (1 vCPU, 2 GB RAM) from a popular German provider (Hetzner, referred to as HTNA in the video) using the Sysbench tool.
      • AWS EC2 Results: The base EC2 instance cost almost 3 times more than the VPS but delivered poor performance:
        • CPU: Approximately 20% of the VPS performance.
        • Memory: Only 7.74% of the VPS performance.
      • AWS ECS Fargate Results: Using the "serverless" Fargate option, setup was complex and involved many AWS services (ECS, ECR, IAM).
        • Cost: The instance was 6 times more expensive than the VPS.
        • Performance: Performance improved over EC2 but remained below the VPS and was less consistent: 23% (CPU), 80% (memory), and 84% (file I/O) of the VPS's throughput, with run-to-run fluctuations of up to 18%.
      • Cost Efficiency: A VPS with 4 vCPU and 16 GB of RAM was found to be cheaper than the 1 vCPU ECS Fargate task used in the test.
      • Conclusion: For a similar price point, a dedicated server is about 10 times faster than an equivalent AWS cloud instance. The video concludes that AWS's dominance is due to its large marketing spend, not superior technical or cost efficiency. A real-world example cited is Lichess, which supports 5.2 million chess games per day on a single dedicated server [00:12:06].
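
      The comparison above can be reproduced with stock Sysbench commands. The parameters below are illustrative defaults, not the video's exact settings (which it does not list):

```shell
# CPU: events/sec computing primes up to the given limit (higher is better)
sysbench cpu --cpu-max-prime=20000 --threads=1 run

# Memory: throughput writing blocks until the total size is reached
sysbench memory --memory-block-size=1K --memory-total-size=10G run

# File I/O: random read/write; test files must be prepared first
sysbench fileio --file-total-size=2G prepare
sysbench fileio --file-total-size=2G --file-test-mode=rndrw run
sysbench fileio --file-total-size=2G cleanup
```

      Running the same commands on both instances and comparing the reported events/sec and MiB/s figures yields ratios like those quoted above.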

      Hacker News Discussion

      The discussion was split between criticizing the video's methodology and debating the fundamental value proposition of hyperscale cloud providers versus traditional hosting.

      • Criticism of Methodology: Several top comments argued the video was a "low effort 'ha ha AWS sucks' video" with an "AWFUL analysis." Critics suggested the author did not properly configure or understand ECS/Fargate and that comparing the lowest-end shared instances isn't a "proper comparison," which should involve mid-range hardware and careful configuration.
      • The Value of AWS Services: Many users defended AWS by stating that customers rarely choose it just for the base EC2 instance price. The true value lies in the managed ecosystem of services like RDS, S3, EKS, ELB, and Cognito, which abstract away operational complexity and allow large customers to negotiate off-list pricing.
      • Complexity and Cost Rebuttals: Counter-arguments highlighted that managing AWS complexity often requires hiring expensive "cloud wizards" (Solutions Architects or specialized DevOps staff), shifting the high cost of a SysAdmin team to high cloud management costs. Anecdotes about sudden huge AWS bills and complex debugging were common.
      • The "Nobody Gets Fired" Factor: The most common justification for choosing AWS, even at a higher cost, is risk aversion and the avoidance of personal liability. If a core AWS region (like US-East-1) goes down, it's a shared industry failure, but if a self-hosted server fails, the admin is solely responsible for fixing it at 3 a.m.
      • Alternative Recommendations: The discussion frequently validated the use of non-hyperscale providers like Hetzner and OVH for significant cost savings and comparable reliability for many non-"cloud native" workloads.
    1. How we slashed our EKS costs by 43% with one simple scheduler tweak 🚀
      • AWS EKS costs can escalate due to massive, parallel workloads in life sciences/drug development (e.g., genomic sequencing, molecular modeling).
      • The default Kubernetes scheduler uses the LeastAllocated scoring strategy, spreading pods across many nodes for fairness and high availability.
      • LeastAllocated leaves many nodes partially utilized, which prevents autoscalers from scaling down idle nodes and drives up costs.
      • The MostAllocated scoring strategy "packs" pods onto fewer nodes, maximizing utilization and enabling autoscalers like Karpenter to remove the nodes that drain empty.
      • Switching to MostAllocated can reduce runtime costs significantly (e.g., ~10% in UAT and 43% in PROD environments).
      • Deploying a custom scheduler on AWS EKS requires a service account, ClusterRoleBindings, a RoleBinding, a ConfigMap containing the MostAllocated scoring strategy, and a Deployment whose scheduler image matches the cluster's Kubernetes version.
      • Resource weights can prioritize packing of expensive resources (e.g., high weight on GPUs for ML workloads).
      • Testing in non-production environments is recommended before full rollout.
      • Implementing MostAllocated scheduling can dramatically optimize costs by enabling cluster autoscalers to shut down unused nodes.
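
      The scoring strategy described above is set in a KubeSchedulerConfiguration (the contents of the ConfigMap mentioned earlier). A minimal sketch; the scheduler name and resource weights here are illustrative, not the article's exact values:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: bin-packing-scheduler
    pluginConfig:
      - name: NodeResourcesFit
        args:
          scoringStrategy:
            type: MostAllocated      # default is LeastAllocated
            resources:
              - name: cpu
                weight: 1
              - name: memory
                weight: 1
              - name: nvidia.com/gpu  # weight expensive resources higher
                weight: 5
```

      Pods opt in to the custom scheduler by setting `schedulerName: bin-packing-scheduler` in their spec; pods without it keep using the default spread-out behavior.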
  2. Nov 2024
    1. Optimizing Kubernetes Costs with Multi-Tenancy and Virtual Clusters

      The blog post by Cliff Malmborg from Loft Labs discusses optimizing Kubernetes costs using multi-tenancy and virtual clusters. With Kubernetes expenses rising rapidly at scale, traditional cost-saving methods like autoscaling, resource quotas, and monitoring tools help but are not enough for complex environments where underutilized clusters are common. Multi-tenancy enables resource sharing, reducing the number of clusters and, in turn, management and operational costs.

      A virtual cluster is a fully functional Kubernetes cluster running within a larger host cluster, providing better isolation and flexibility than namespaces. Unlike namespaces, each virtual cluster has its own Kubernetes control plane, so resources like StatefulSets and webhooks are isolated within it, while only core resources (like pods and services) are shared with the host cluster. This setup addresses the "noisy neighbor" problem, where workloads in a shared environment interfere with each other due to resource contention.

      Virtual clusters offer the isolation benefits of individual physical clusters but are cheaper and easier to manage than deploying separate physical clusters for each tenant or application. They also support "sleep mode," automatically scaling down unused resources to save costs, and allow shared use of central tools (like ingress controllers) installed in the host cluster. By transitioning to virtual clusters, companies can balance security, isolation, and cost-effectiveness, reducing the need for multiple physical clusters and making Kubernetes infrastructure scalable for modern, resource-demanding applications.
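
      The workflow above can be sketched with the open-source vcluster CLI (from Loft Labs, the post's authors). Command names and flags vary by CLI version, and the automated sleep mode described above is a feature of the commercial platform, so treat this as an illustration rather than a recipe:

```shell
# Create a virtual cluster inside the host cluster's "team-a" namespace
vcluster create team-a-vcluster --namespace team-a

# Point kubectl at the virtual cluster and deploy into it
vcluster connect team-a-vcluster --namespace team-a
kubectl create deployment demo --image=nginx

# Manually scale a virtual cluster's workloads down while it is idle
vcluster pause team-a-vcluster
vcluster resume team-a-vcluster
```

      Each tenant gets its own API server and control plane this way, while the host cluster's nodes, ingress controller, and other shared tooling serve all of them.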

  3. Aug 2024
    1. Slashing Data Transfer Costs in AWS by 99%

      The essence of cutting AWS data transfer costs by 99% is to use Amazon S3 as an intermediary for data transfers between EC2 instances in different Availability Zones (AZs). Direct inter-AZ transfers are billed per GB in each direction, whereas uploading to S3 and downloading from S3 within the same region are both free. By uploading the data to S3, downloading it from the destination instance, and deleting it promptly, you pay only for the short time the data sits in the bucket, drastically reducing overall transfer expenses.
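
      A back-of-the-envelope calculation shows where the ~99% figure comes from. The prices below are illustrative US-East-1 list prices (inter-AZ transfer at $0.01/GB in each direction, S3 standard storage at $0.023/GB-month), and per-request charges, which are negligible at this scale, are ignored:

```python
# Rough cost comparison: moving data between EC2 instances in different
# AZs directly vs. staging it through S3. Prices are illustrative list
# prices, not a quote.
INTER_AZ_PER_GB_EACH_WAY = 0.01   # $/GB, billed on both sides
S3_STORAGE_PER_GB_MONTH = 0.023   # $/GB-month, standard tier
HOURS_PER_MONTH = 730

def direct_cost(gb: float) -> float:
    """Direct inter-AZ transfer: charged out of one AZ and into the other."""
    return gb * INTER_AZ_PER_GB_EACH_WAY * 2

def s3_staged_cost(gb: float, hours_in_s3: float = 1.0) -> float:
    """Upload to S3 and same-region download are free; pay only for the
    short time the data sits in the bucket."""
    return gb * S3_STORAGE_PER_GB_MONTH * (hours_in_s3 / HOURS_PER_MONTH)

gb = 1000  # 1 TB moved between AZs
direct = direct_cost(gb)      # $20.00
staged = s3_staged_cost(gb)   # about $0.03 if deleted within an hour
savings = 1 - staged / direct
print(f"direct=${direct:.2f} staged=${staged:.2f} savings={savings:.1%}")
```

      At 1 TB, the direct route costs $20.00 while the staged route costs roughly three cents, a saving of about 99.8%, consistent with the article's headline figure.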