
Memory Configuration: This is where most people mess up. Pulling the standard postgres Docker image won't cut it. You have to configure memory bounds with static limits that match your hardware. I've automated some of these configurations, but whether you do it manually or use an auto-config tool, tuning these parameters is a must. The key parameters:

    shared_buffers: Start around 25% of RAM; modern Postgres happily uses tens of GB.
    effective_cache_size: Set to 75% of system RAM (this tells the planner how much memory the OS will use for caching).
    work_mem: Be conservative here. Set it to total RAM / max_connections / 2, or use a fixed value like 32MB.
    maintenance_work_mem: Can be generous (1-2GB); it's only used during VACUUM and index operations.

Connection Management: RDS enforces its own connection limits, but when self-hosting you get to choose your own:

    # Connection settings
    max_connections = 200
    shared_preload_libraries = 'pg_stat_statements'
    log_connections = on
    log_disconnections = on

Wahoo! More connections = more parallelism, right? No such free lunch, I'm afraid. Opening a fresh connection in Postgres carries real overhead (each one forks a backend process), so you almost always want a connection pooler in front of it. I'm using PgBouncer on all my projects by default, even when load might not call for it. Python asyncio applications just work better with a centralized connection pooler. And yes, I've automated some of the config there too.

Storage Tuning: On NVMe SSDs, random reads are nearly as cheap as sequential ones, unlike on conventional spinning hard drives, so you'll want to pay attention to the disk type you're hosted on:

    # Storage optimization for NVMe
    random_page_cost = 1.1            # Down from default 4.0
    seq_page_cost = 1.0               # Keep at default
    effective_io_concurrency = 200    # Up from default 1

These settings tell Postgres that random reads are almost as fast as sequential reads on NVMe drives, which dramatically improves query planning.
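The memory sizing rules above are simple arithmetic, so they're easy to script. A minimal sketch of that arithmetic in Python; the function name and the 2GB cap on maintenance_work_mem are my own choices for illustration, not from any official tool:

```python
def pg_memory_settings(total_ram_mb: int, max_connections: int) -> dict:
    """Derive Postgres memory parameters from total RAM, following the
    rules of thumb above: 25% shared_buffers, 75% effective_cache_size,
    work_mem = RAM / max_connections / 2."""
    shared_buffers = total_ram_mb // 4             # ~25% of RAM
    effective_cache_size = total_ram_mb * 3 // 4   # ~75% of RAM
    work_mem = total_ram_mb // max_connections // 2
    maintenance_work_mem = min(total_ram_mb // 16, 2048)  # generous, capped at 2GB
    return {
        "shared_buffers": f"{shared_buffers}MB",
        "effective_cache_size": f"{effective_cache_size}MB",
        "work_mem": f"{work_mem}MB",
        "maintenance_work_mem": f"{maintenance_work_mem}MB",
    }

# e.g. a 64GB box with max_connections = 200
print(pg_memory_settings(65536, 200))
```

The output drops straight into postgresql.conf; the point is that these values should be derived from the machine, not copy-pasted from a gist.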
WAL Configuration: Write-Ahead Logging is critical for durability and performance:

    # WAL settings
    wal_level = replica                  # Enable streaming replication
    max_wal_size = 2GB                   # Allow larger checkpoints
    min_wal_size = 1GB                   # Prevent excessive WAL recycling
    checkpoint_completion_target = 0.9   # Spread checkpoint I/O over 90% of the interval
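To make checkpoint_completion_target concrete: checkpoints fire at most every checkpoint_timeout (5 minutes by default), and a target of 0.9 tells Postgres to pace dirty-page writes across 90% of that window instead of dumping them in one burst. A quick sketch of the arithmetic, assuming the default checkpoint_timeout and a hypothetical 1GB of dirty buffers:

```python
checkpoint_timeout_s = 300          # default checkpoint_timeout: 5 minutes
checkpoint_completion_target = 0.9  # from the WAL settings above

# Postgres paces checkpoint writes over target * timeout seconds
spread_window_s = checkpoint_timeout_s * checkpoint_completion_target

dirty_mb = 1024  # hypothetical: 1GB of dirty buffers to flush this cycle
write_rate_mb_s = dirty_mb / spread_window_s  # steady trickle vs. one big stall

print(f"writes spread over {spread_window_s:.0f}s "
      f"at ~{write_rate_mb_s:.1f} MB/s")
```

The lower, steadier write rate is why a high completion target smooths out the latency spikes you'd otherwise see at every checkpoint.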
