To install or access apps on Akaunting, you need your API key. Here’s how to get the API key.
The API key is needed to get different apps (chunks of functionality), so that's the answer.
Getting the API Key. The API Key holds details of your Akaunting plan.
Ah, this is how they enforce limits. You must have an account to get an API key. Even if you don't interact with them?
All installation instructions, straightforward. Though I don't get the later references to dependency issues if it's as basic as this.
While installing Akaunting on your MacBook (Apple Silicon), you may experience an NPM Install Error. Here’s a reference on how to resolve the error.
Akaunting on my MacBook may throw an npm install error.
Solution to the npm install error on Apple Silicon with Akaunting. Not sure if this will arise.
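The referenced fix isn't quoted here, but a quick way to see whether this class of error even applies is to check for an architecture mismatch between macOS and the installed Node binary, a common cause of native-module build failures on Apple Silicon. A minimal sketch; the helper name is mine, not from the reference:

```shell
# check_node_arch: hypothetical helper, not from the Akaunting docs.
# On Apple Silicon (arm64), an x64 Node running under Rosetta often
# breaks `npm install` for packages that compile native addons.
check_node_arch() {
  local host_arch node_arch
  host_arch=$(uname -m)
  node_arch=$(node -p 'process.arch' 2>/dev/null || echo "none")
  if [ "$host_arch" = "arm64" ] && [ "$node_arch" = "x64" ]; then
    echo "mismatch: x64 Node on arm64 macOS - reinstall Node as a native arm64 build"
  else
    echo "arch ok ($host_arch / node: $node_arch)"
  fi
}
check_node_arch
```

If it reports a mismatch, reinstalling Node as a native arm64 build (e.g. via nvm) and removing `node_modules` before rerunning `npm install` is the usual first thing to try.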
The reqs are basic. Do check if my MAMP has all the mentioned extensions. They assume a hosting package, but for my usage the local webserver is fine.
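A quick way to check whether MAMP's PHP has the needed extensions is to grep `php -m`. The extension list below is my assumption based on the usual Laravel-stack requirements, not quoted from the Akaunting docs:

```shell
# Extensions to verify - assumed typical Laravel/Akaunting set;
# check the official requirements page for the authoritative list.
required="bcmath ctype curl dom fileinfo gd intl mbstring openssl pdo_mysql xml zip"

check_php_extensions() {
  if ! command -v php >/dev/null 2>&1; then
    echo "php not found on PATH"
    return 0
  fi
  local missing=""
  for ext in $required; do
    # php -m lists one loaded module per line
    php -m | grep -qi "^$ext$" || missing="$missing $ext"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
  else
    echo "all extensions present"
  fi
}
check_php_extensions
```

With MAMP, make sure the `php` on PATH is MAMP's bundled binary (somewhere under `/Applications/MAMP/bin/php/`), not the system one, or the check tells you nothing about the webserver's PHP.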
You can run a local version on-prem, which will be limited to a single company, 1 user, and under 1k invoices / yr. How would they limit that? Phoning home, presumably, since you can purchase extensions.
Bookkeeping FOSS. Not immediately clear if self-hosting is possible.
https://web.archive.org/web/20251229121559/https://pierce.dev/notes/go-ahead-self-host-postgres
Post explaining that self-hosting a DB server isn't all that difficult or a hassle.
By Pierce Freeman, a San Francisco (USA) based ML researcher & systems engineer
I'm not advocating that everyone should self-host everything. But the pendulum has swung too far toward managed services. There's a large sweet spot where self-hosting makes perfect sense, and more teams should seriously consider it. Start small. If you're paying more than $200/month for RDS, spin up a test server and migrate a non-critical database. You might be surprised by how straightforward it is.

The future of infrastructure is almost certainly more hybrid than it's been recently: managed services where they add genuine value, self-hosted where they're just expensive abstractions. Postgres often falls into the latter category.

Footnotes
1. They're either just hosting a vanilla postgres instance that's tied to the deployed hardware config, or doing something opaque with edge deploys and sharding. In the latter case they near guarantee your DB will stay highly available but costs can quickly spiral out of control.
2. Maybe up to billions at this point.
3. Even on otherwise absolutely snail speed hardware.
4. This was Jeff Bezos's favorite phrase during the early AWS days, and it stuck.
5. Similar options include OVH, Hetzner dedicated instances, or even bare metal from providers like Equinix.
6. AWS RDS & S3 has had several major outages over the years. The most memorable was the 2017 US-East-1 outage that took down half the internet.
Cloud hosting can become an expensive abstraction layer quickly. I also think there's an entire generation of coders / engineers who treat silo'd cloud hosting as a given, without considering other options and their benefits. There's a large window for selfhosting, into which postgres almost always falls.
Write-Ahead Logging is critical for durability and performance:
WAL config also needs attention in postgres selfhosting
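The post's actual WAL settings aren't in this excerpt; below is a sketch of the parameters it is presumably talking about, with placeholder values that have to be sized to your own hardware, write load, and durability needs:

```ini
# postgresql.conf - WAL-related settings (illustrative values, not
# recommendations from the post)
wal_level = replica                  # needed for replication / PITR
max_wal_size = 4GB                   # allow fewer, larger checkpoints
min_wal_size = 1GB
checkpoint_timeout = 15min
checkpoint_completion_target = 0.9   # spread checkpoint I/O over time
synchronous_commit = on              # 'off' trades durability for latency
wal_compression = on                 # reduces WAL volume on most workloads
```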
Storage Tuning: NVMe SSDs make having content on disk less harmful than conventional spinning hard drives, so you'll want to pay attention to the disk type that you're hosted on:
storage tuning is a selfhosting postgres concern too
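The excerpt cuts off before the settings themselves; the disk-dependent knobs it is most likely referring to are the planner's I/O cost assumptions, which default to spinning-disk behavior. Illustrative values for NVMe SSDs, my assumption rather than the post's:

```ini
# postgresql.conf - planner/IO settings that depend on disk type
random_page_cost = 1.1           # default 4.0 assumes expensive HDD seeks
effective_io_concurrency = 200   # SSDs handle many concurrent requests
maintenance_io_concurrency = 200 # same idea for VACUUM etc. (PG 13+)
```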
Making fresh connections in postgres has pretty expensive overhead, so you almost always want to put a load balancer in front of it. I'm using pgbouncer on all my projects by default - even when load might not call for it. Python asyncio applications just work better with a centralized connection pooler.
Concurrent Postgres connections are something you want to stay on top of; connection pooling / load balancing needed.
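The post names pgbouncer but doesn't show its config. A minimal sketch of a transaction-pooling setup; the database name, addresses, and limits are placeholders, not values from the post:

```ini
; pgbouncer.ini - minimal transaction-pooling setup (placeholder values)
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction     ; server conn released after each transaction
max_client_conn = 1000      ; many cheap client connections...
default_pool_size = 20      ; ...multiplexed onto few server connections
```

Transaction pooling is why this helps asyncio apps: thousands of short-lived client connections share a small, warm pool of real server connections.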
Memory Configuration: This is where most people mess up. Pulling the standard postgres docker image won't cut it. You have to configure memory bounds with static limits that correspond to hardware. I've automated some of these configurations. But whether you do it manually or use some auto-config, tweaking these params is a must.
Selfhosting Postgres requires setting static memory limits.
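The "static limits that correspond to hardware" the post mentions are settings like these; the values below are illustrative for a hypothetical dedicated 16 GB server, not the post's numbers:

```ini
# postgresql.conf - static memory bounds (illustrative, 16 GB server;
# these do NOT autoscale with the host, hence "static limits")
shared_buffers = 4GB             # ~25% of RAM is the usual starting point
effective_cache_size = 12GB      # planner hint: RAM available as OS cache
work_mem = 32MB                  # per sort/hash node, per connection!
maintenance_work_mem = 1GB       # VACUUM, CREATE INDEX
```

The `work_mem` comment is the classic trap: it's allocated per operation per connection, so a high value combined with many connections can exhaust RAM, which is another reason to cap connections with a pooler.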
When self-hosting doesn't make sense I'd argue self-hosting is the right choice for basically everyone, with the few exceptions at both ends of the extreme: If you're just starting out in software & want to get something working quickly with vibe coding, it's easier to treat Postgres as just another remote API that you can call from your single deployed app If you're a really big company and are reaching the scale where you need trained database engineers to just work on your stack, you might get economies of scale by just outsourcing that work to a cloud company that has guaranteed talent in that area. The second full freight salaries come into play, outsourcing looks a bit cheaper. Regulated workloads (PCI-DSS, FedRAMP, HIPAA, etc.) sometimes require a managed platform with signed BAAs or explicit compliance attestations.
Sees use for silo'd postgres hosting at the extremes of the spectrum: when you start without knowledge and are vibecoding, so you can treat the database as just another API; and when you are a megacorp (outsourcing quickly looks cheaper if you'd otherwise have to pay multiple FTE salaries) and/or have to prove regulatory compliance.