- Apr 2020
-
-
It's responsible for allocating and scheduling containers, providing them with abstracted functionality like internal networking and file storage, and then monitoring the health of all of these elements and stepping in to repair or adjust them as necessary. In short, it's all about abstracting how, when and where containers are run.
Kubernetes (simple explanation)
-
-
-
You’ll see pressure to push towards “Cloud neutral” solutions using Kubernetes in various places
Maybe Kubernetes has the advantage of being cloud neutral, but you pay the cost of a cloud migration:
- maintaining abstractions
- isolating yourself from useful vendor-specific features
-
Heroku? App Services? App Engine?
You can get yourself set up in production in minutes to, at most, a few hours
-
Kubernetes (often irritatingly abbreviated to k8s, along with its wonderful ecosystem of esoterically named additions like helm, and flux) requires a full time ops team to operate, and even in “managed vendor mode” on EKS/AKS/GKS the learning curve is far steeper than the alternatives.
Kubernetes:
- requires a full-time ops team to operate
- the learning curve is far steeper than the alternatives
-
Azure App Services, Google App Engine and AWS Lambda will be several orders of magnitude more productive for you as a programmer. They’ll be easier to operate in production, and more explicable and supported.
Use the closest thing to a pure-managed platform as you possibly can. It will be easier to operate in production, and more explicable and supported:
- Azure App Service
- Google App Engine
- AWS Lambda
-
With the popularisation of Docker and containers, a lot of hype has gone into things that provide “almost platform like” abstractions over Infrastructure-as-a-Service. These are all very expensive and hard work.
Kubernetes isn't required unless you work on huge problems
-
By using events that are buffered in queues, your system can support outages, scaling up and scaling down, and rolling upgrades without any special consideration. Its normal mode of operation is “read from a queue”, and this doesn’t change in exceptional circumstances.
Event driven architectures with replay / message logs
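A minimal sketch of that "read from a queue" loop in TypeScript – the `Queue` interface here is a hypothetical stand-in for whatever broker client (SQS, RabbitMQ, Kafka...) you actually use:
```typescript
// Minimal "read from a queue" worker sketch. Queue is a hypothetical
// stand-in for your broker's client (SQS, RabbitMQ, Kafka, ...).
interface QueueMessage {
  id: string;
  body: string;
}

interface Queue {
  receive(): Promise<QueueMessage | null>; // assumed to long-poll
  ack(message: QueueMessage): Promise<void>;
}

async function runWorker(queue: Queue, handle: (body: string) => Promise<void>) {
  // The worker only ever does one thing: read from the queue.
  // If the process dies mid-message, the unacked message is redelivered,
  // so outages, scale-out and rolling upgrades need no special handling.
  while (true) {
    const message = await queue.receive();
    if (!message) continue;
    await handle(message.body);
    await queue.ack(message); // ack only after successful processing
  }
}
```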
-
Reserved resources, capacity, or physical hardware can be protected for pieces of your software, so that an outage in one part of your system doesn’t ripple down to another.
Idea of Bulkheads
-
The complementary design pattern for all your circuit breakers – you need to make sure that you wrap all outbound connections in a retry policy, and a back-off.
Idempotency and Retries design pattern
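A minimal retry-with-back-off sketch (the `withRetry` helper and its defaults are illustrative, not from the article):
```typescript
// Minimal retry-with-exponential-back-off sketch (names and defaults
// are illustrative). Only safe to wrap idempotent operations -
// a retried non-idempotent call can be applied twice.
async function withRetry<T>(
  call: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await call();
    } catch (error) {
      if (attempt >= maxAttempts) throw error;
      // Back off: 100ms, 200ms, 400ms... giving the dependency room to recover.
      const delayMs = baseDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage: wrap every outbound connection.
// const response = await withRetry(() => fetch("https://api.example.com/users/1"));
```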
-
Circuit breaking is a useful distributed system pattern where you model out-going connections as if they’re an electrical circuit. By measuring the success of calls over any given circuit, if calls start failing, you “blow the fuse”, queuing outbound requests rather than sending requests you know will fail.
Circuit breaking - a useful distributed system pattern. It's a phenomenal way to make sure you don't fail when you know you might.
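A minimal circuit breaker sketch – this version fails fast rather than queuing outbound requests, and the names and thresholds are illustrative:
```typescript
// Minimal circuit breaker sketch (thresholds illustrative). After
// `threshold` consecutive failures the fuse "blows" and calls fail
// fast until the cooldown elapses, instead of sending requests you
// know will fail. (A fuller version would queue them instead.)
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(private threshold = 5, private cooldownMs = 30_000) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.failures >= this.threshold) {
      if (Date.now() - this.openedAt < this.cooldownMs) {
        throw new Error("circuit open - failing fast");
      }
      this.failures = 0; // cooldown over: let a test call through
    }
    try {
      const result = await fn();
      this.failures = 0; // a success closes the circuit
      return result;
    } catch (error) {
      this.failures += 1;
      if (this.failures >= this.threshold) this.openedAt = Date.now();
      throw error;
    }
  }
}
```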
-
“Scaling out is the only cost-effective thing” is a common claim, but plenty of successful companies managed to scale up with a handful of large machines or VMs
-
Scaling is hard if you try to do it yourself, so absolutely don’t try to do it yourself if you can possibly avoid it. Use vendor-provided cloud abstractions like Google App Engine, Azure Web Apps or AWS Lambda with autoscaling support enabled.
Scaling should be done with cloud abstractions
-
Hexagonal architectures, also known as “the ports and adapters” pattern
Hexagonal architectures - one of the better pieces of "real application architecture" advice.
- have all your logic, business rules, domain specific stuff - exist in a form that isn't tied to your frameworks, your dependencies, your data storage, your message busses, your repositories, or your UI
- all your logic is in files, modules or classes that are free from framework code, glue, or external data access
- it means you can test everything in isolation, without your web framework or some broken API getting in the way
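A minimal ports-and-adapters sketch (the `OrderRepository` port and both implementations are invented for illustration):
```typescript
// Minimal ports-and-adapters sketch (OrderRepository and both
// implementations are invented for illustration).

// The "port": an interface the domain defines and owns.
interface OrderRepository {
  save(order: { id: string; total: number }): Promise<void>;
}

// Domain logic: pure business rules, no framework or storage code,
// so it can be tested in isolation.
async function placeOrder(repo: OrderRepository, prices: number[]) {
  if (prices.length === 0) throw new Error("empty order");
  const total = prices.reduce((sum, price) => sum + price, 0);
  const order = { id: `order-${Date.now()}`, total };
  await repo.save(order);
  return order;
}

// An "adapter": one concrete implementation of the port. Swap in a
// real database adapter in production, keep this one for tests.
class InMemoryOrderRepository implements OrderRepository {
  orders = new Map<string, { id: string; total: number }>();
  async save(order: { id: string; total: number }) {
    this.orders.set(order.id, order);
  }
}
```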
-
Good microservice design follows a few simple rules
Microservice design rules:
- Be role/operation based, not data centric
- Always own your data store
- Communicate on external interfaces or messages
- What changes together, and is co-dependent, is actually the same thing
- All services are fault tolerant and survive the outages of their dependencies
-
What Microservices are supposed to be: Small, independently useful, independently versionable, independently shippable services that execute a specific domain function or operation. What Microservices often are: Brittle, co-dependent, myopic services that act as data access objects over HTTP that often fail in a domino like fashion.
What Microservices are supposed to be: independent
VS
what they often are: dependent
-
In the mid-90s, “COM+” (Component Services) and SOAP were popular because they reduced the risk of deploying things, by splitting them into small components
History of Microservices:
- Came from COM+ and SOAP in the mid-90s.
- Which later led to the popularisation of N-Tier ("split up the data-tier, the business-logic-tier and the presentation-tier"). This worked for some people, but the horizontal slices through a system often required changing every “tier” to finish a full change.
- Product vendors got involved and SOAP became complicated and unfashionable, which pushed people towards the "second wave" - Guerrilla SOA. This led to the proliferation of smaller, more nimble services.
- The "third wave" of SOA, branded as Microservice architecture is very popular, but often not well understood
-
all a design pattern is, is the answer to a problem that people solve so often, there’s an accepted way to solve it
“Design patterns are just bug fixes for your programming languages”
"they’re just the 1990s version of an accepted and popular stackoverflow answer"
-
Let’s do a quick run through of some very common ones:
most common design patterns:
- MVC
- ORM
- Active Record
- Repository
- Decorator
- Dependency Injection
- Factory
- Adapter
- Command
- Strategy
- Singleton
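As a flavour of what one of these looks like in practice, a minimal Strategy sketch (the shipping example is invented):
```typescript
// Minimal Strategy sketch (shipping example invented): the algorithm
// is chosen at runtime by passing an implementation in, instead of
// branching inline everywhere it's used.
type ShippingStrategy = (weightKg: number) => number;

const standard: ShippingStrategy = (kg) => 5 + kg * 0.5;
const express: ShippingStrategy = (kg) => 15 + kg * 1.5;

function quote(weightKg: number, strategy: ShippingStrategy): number {
  return strategy(weightKg);
}

console.log(quote(10, standard)); // 10
console.log(quote(10, express)); // 30
```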
-
CDNs are web servers run by other people, all over the world. You upload your data to them, and they will replicate your data across all of their “edges” (a silly term that just means “to all the servers all over the world that they run”) so that when someone requests your content, the DNS response will return a server that’s close to them, and the time it takes them to fetch that content will be much quicker.
CDNs (Content Delivery Networks)
Offloading to a CDN is one of the easiest ways you can get extra performance for a very minimal cost
-
Understanding how a distributed cache works is remarkably simple – when an item is added, the key (the thing you use to retrieve that item) that is generated includes the address or name of the computer that’s storing that data in the cache. Generating keys on any of the computers that are part of the distributed cache cluster will result in the same key. This means that when the client libraries that interact with the cache are used, they understand which computer they must call to retrieve the data.
Distributed caching (simple explanation)
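A minimal sketch of that key-to-computer mapping (the hash and node list are illustrative; real clients typically use consistent hashing):
```typescript
// Minimal sketch of key-to-node mapping in a distributed cache
// (hash and node list illustrative). Every client hashes the same
// key to the same number, so they all call the same computer.
// Real clients typically use consistent hashing, so adding or
// removing a node moves as few keys as possible.
const nodes = ["cache-1:11211", "cache-2:11211", "cache-3:11211"];

function nodeFor(key: string): string {
  let hash = 0;
  for (const char of key) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0; // simple string hash
  }
  return nodes[hash % nodes.length];
}

console.log(nodeFor("user:42")); // deterministic: always the same node
```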
-
Memory caches are used to store the result of something that is “heavy” to calculate, takes time, or just needs to be consistent across all the different computers running your server software. In exchange for a little bit of network latency, it makes the total amount of memory available to your application the sum of all the memory available across all your servers.
Distributed caching:
- use to store results of heavy calculation
- improve consistency
- prevents "cache stampedes"
- all the major hosting providers tend to support memcached or redis compatible managed caches
-
All a load balancer does, is accept HTTP requests for your application (or from it), pick a server that isn’t very busy, and forward the request.
Load balancers:
- used when you've a lot of traffic and you're not using PaaS
- mostly operated by sysops, or are just running copies of NGINX
- you might see load balancers load balancing a "hot path" in your software onto a dedicated pool of hardware, to keep it safe or isolate it from failure
- you might see load balancers used to take care of SSL certificates for you (SSL Termination)
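A minimal sketch of the core job – accept a request, pick a server, forward it (round-robin here, with invented backend addresses; real balancers also track health and load):
```typescript
// Minimal load balancer sketch (backend addresses invented):
// accept a request, pick a server, forward it. Round-robin here;
// real balancers also do health checks, least-busy selection
// and SSL termination.
const backends = ["http://10.0.0.1:8080", "http://10.0.0.2:8080"];
let next = 0;

function pickBackend(): string {
  const backend = backends[next];
  next = (next + 1) % backends.length; // simple round-robin
  return backend;
}

async function forward(path: string): Promise<Response> {
  return fetch(pickBackend() + path); // hand the request to the chosen server
}
```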
-
GraphQL gels especially well with modern JavaScript driven front ends, with excellent tools like Apollo and Apollo-Server that help optimise these calls by batching requests.
GraphQL + Apollo or Apollo-Server
-
BFF is an API that serves one, and specifically only one application
BFF (backend for frontend):
- translates an internal domain into the language of the app it serves
- takes care of authentication, rate limiting, and other stuff you don't want to do more than once
- reduces needless roundtrips to the server
- translates data to be more suitable for its target app
- is pretty front-end-dev friendly. Queries and schema strongly resemble JSON & it keeps your stack "JS all the way down" without being beholden to some distant backend team
-
GraphQL really is just a smart and effective way to schema your APIs, and provide a BFF – that’s backend for frontend
-
What sets GraphQL apart a little from previous approaches (notably Microsoft's OData) is the idea that Types and Queries are implemented with Resolver code on the server side, rather than just mapping directly to some SQL storage.
What makes GraphQL favourable over previous approaches.
As a result:
- GraphQL can be a single API over a bunch of disparate APIs in your domain
- it solves the "over fetching" problem that's quite common in REST APIs (by allowing the client to specify a subset of data to return)
- acts as an anti-corruption layer of sorts, preventing unbounded access to underlying storage
- GraphQL is designed to be the single point of connection that your web or mobile app talks to (and is highly optimised for that)
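A minimal sketch with the graphql reference library, showing types and queries backed by resolver code rather than direct SQL mapping (the schema and resolver are invented):
```typescript
// Minimal GraphQL sketch using the graphql reference library.
// Types and queries resolve via code, not by mapping onto SQL -
// the resolver below could call any API, cache or database.
import { buildSchema, graphql } from "graphql";

const schema = buildSchema(`
  type Query {
    user(id: ID!): User
  }
  type User {
    id: ID!
    name: String!
  }
`);

// Resolver code (invented example): free to fetch from anywhere.
const rootValue = {
  user: ({ id }: { id: string }) => ({ id, name: `User ${id}` }),
};

async function main() {
  // The client asks for exactly the fields it wants - no over-fetching.
  const result = await graphql({
    schema,
    source: '{ user(id: "1") { name } }',
    rootValue,
  });
  console.log(result.data); // { user: { name: "User 1" } }
}

main();
```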
-
GraphQL is, confusingly, a query language, a standard for HTTP APIs, and a schema tool all at once.
GraphQL
-
Use a swagger or OpenAPI library to generate a schema and you’re pretty much doing what most people are doing.
Generating RESTful schema
-
JSON-RPC is “level 0” of the Richardson Maturity Model – a model that describes the qualities of a REST design.
JSON-RPC sits at level 0.
You move up the maturity model by:
- using HTTP VERBs correctly
- organising your APIs into logical "resources", e.g. ("customer", "product", "catalogue")
- using correct HTTP response codes for interactions with your API
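A minimal sketch of those three rules using Express (the customer resource is invented):
```typescript
// Minimal REST-ish resource sketch using Express (invented example).
// Verbs map to operations, and response codes carry real meaning.
import express from "express";

const app = express();
app.use(express.json());

const customers = new Map<string, { name: string }>();

// GET reads a resource; 404 when it doesn't exist, not 200 + an error body.
app.get("/customers/:id", (req, res) => {
  const customer = customers.get(req.params.id);
  if (!customer) {
    res.status(404).end();
    return;
  }
  res.json(customer);
});

// POST creates a resource; 201 Created plus a Location header.
app.post("/customers", (req, res) => {
  const id = String(customers.size + 1);
  customers.set(id, req.body);
  res.status(201).location(`/customers/${id}`).json({ id });
});

app.listen(3000);
```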
-
REST is a good architectural style (and it is, a lot of the modern naysaying about REST is relatively uninformed and not too dissimilar to the treatment SOAP had before it)
REST is still a good architectural style
-
there was a push for standardisation of “SOAP” (simple object access protocol) APIs
Standardisation of SOAP brought a lot of good stuff, but people found XML cumbersome to read.
A lot of things being solved in SOAP had to subsequently be re-solved on top of JSON using emerging open-ish standards like Swagger (now OpenAPI) and JSON:API
-
The basics of HTTP are easy to grasp – there’s a mandatory “request line”
Mandatory HTTP request line:
- verb (GET, POST, PUT and HEAD most frequently)
- URL (web address)
- protocol version (HTTP/1.1)
Then, there's a bunch of optional request header fields.
Example HTTP request:
GET http://www.davidwhitney.co.uk/ HTTP/1.1
Host: www.davidwhitney.co.uk
Connection: keep-alive
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64…
Accept: text/html,application/xhtml+xml,application/xml;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: en-GB,en-US;q=0.9,en;q=0.8

Example response:
HTTP/1.1 200 OK
Cache-Control: public,max-age=1
Content-Type: text/html; charset=utf-8
Vary: Accept-Encoding
Server: Kestrel
X-Powered-By: ASP.NET
Date: Wed, 11 Dec 2019 21:52:23 GMT
Content-Length: 8479

<!DOCTYPE html>
<html lang="en">...
-
The web is an implementation of the design pattern REST – which stands for “Representational State Transfer”. You’ll hear people talk about REST a lot – it was originally defined by Roy Fielding in his PhD dissertation, but more importantly it was a description of the way HTTP/1.0 worked at the time
Origins of the REST design pattern
-
So the web is RESTful by default. REST describes the way HTTP works.
-
Most APIs you use, or build will be “REST-ish”.
- You'll be issuing the same kind of "HTTP requests" as the browser
- Mostly you'll get JSON responses (sometimes XML)
- You can describe these APIs as JSON-RPC or XML-RPC
-
Honestly for the most part it’s a matter of taste, and they’re all perfectly appropriate ways to build web applications.
Which model to choose:
- Server-rendered MVC - good for low-interactivity websites; richer web apps pay a complexity cost
- SPAs (React, Angular, Vue) offer high fidelity UX. The programming models work well for responsive UX
- Static sites - great for blogs, marketing microsites, CMS (anything where content is the most important). They scale well, basically cannot crash, and are cheap to run
-
static site generators became increasingly popular – normally allowing you to use your normal front-end web dev stack, but then generating all the files using a build tool to bundle and distribute to dumb web servers or CDNs
Examples of static site generators: Gatsby, Hugo, Jekyll, Wyam
-
MVVM is equally common in single page apps where there’s two way bindings between something that provides data (the model) and the UI (which the view model serves).
-
SPAs are incredibly common, popularised by client-side web frameworks like Angular, React and Vue.js.
SPAs:
- popularised by client-side web frameworks like Angular, React and Vue.js
- the real difference from an MVC app is that most of the work shifts to the client side
- there's client side MVC, MVVM (model-view-view-model) and FRP (functional reactive programming)
Angular - a client-side MVC framework following the same pattern, except it's running inside the user's web browser.
React - implementation of FRP. A little more flexible, but more concerned with state change events in data (often using some event store like Redux)
-
http://www.mycoolsite.com/Home/Index
With such a URL, an MVC framework would:
- find a "HomeController" file/module (depending on the programming language) inside the controllers directory
- call its "Index" function, which would return a model (some data) that gets rendered by a view (an HTML template from the views folder)
(All the different frameworks do this slightly differently, but the core idea stays the same – features grouped together by controllers, with functions for returning pages of data and handling input from the web)
-
when most people say “MVC” they’re really describing “Rails-style” MVC apps where your code is organised into a few different directories
Different directories of the MVC pattern:
- /controllers
- /models
- /views
-
most people stick to a handful of common patterns
Common patterns:
- The MVC App
- The Single Page App with an API
- Static Sites Hosted on a CDN or other dumb server
- Something else...
-
There are a tonne of general-purpose web servers out there
Realistically, you'll see a mixture of:
- Apache
- NGINX
- Microsoft IIS
- along with some development-stack-specific web servers (Node.js, ASP.NET Core for C#, http4k for Kotlin)
-
“web servers”, implement the “HTTP protocol” – a series of commands you can send to remote computers – that let you say “hey, computer, send me that document”
Web servers (simple explanation)
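A minimal sketch of that "send me that document" exchange, using Node's built-in http module:
```typescript
// Minimal web server sketch using Node's built-in http module:
// accept an HTTP request, hand back a document.
import http from "node:http";

const server = http.createServer((request, response) => {
  // "hey, computer, send me that document"
  if (request.method === "GET" && request.url === "/") {
    response.writeHead(200, { "Content-Type": "text/html" });
    response.end("<!DOCTYPE html><html><body>Hello!</body></html>");
  } else {
    response.writeHead(404);
    response.end();
  }
});

server.listen(8080); // now answering HTTP requests on port 8080
```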
-