- Apr 2021
-
datamechanics.co
-
With Spark 3.1, the Spark-on-Kubernetes project is now considered Generally Available and Production-Ready.
With Spark 3.1, Kubernetes becomes the right option to replace YARN
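A minimal sketch of submitting a Spark application to Kubernetes in cluster mode; the API server address and container image are placeholders, not taken from the article:
# Submit the bundled SparkPi example to a k8s cluster
spark-submit \
  --master k8s://https://<k8s-apiserver>:6443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=<your-spark-image> \
  local:///opt/spark/examples/jars/spark-examples_2.12-3.1.1.jar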
-
- Mar 2021
-
blog.usejournal.com
-
Simple … a single Linode VPS.
You might not need Kubernetes at all and may run perfectly well on a single Linode VPS.
Twitter thread: https://twitter.com/levelsio/status/1101581928489078784
-
-
openai.com
-
We use Prometheus to collect time-series metrics and Grafana for graphs, dashboards, and alerts.
How Prometheus and Grafana can be used to collect metrics from ML workloads running on K8s
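A minimal sketch of a Prometheus config that discovers Kubernetes nodes; the job name and scrape interval are illustrative assumptions, not OpenAI's actual setup:
# Write a minimal Prometheus config using Kubernetes service discovery
cat <<EOF > prometheus.yml
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: kubernetes-nodes   # discover and scrape every node
    kubernetes_sd_configs:
      - role: node
EOF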
-
large machine learning job spans many nodes and runs most efficiently when it has access to all of the hardware resources on each node. This allows GPUs to cross-communicate directly using NVLink, or GPUs to directly communicate with the NIC using GPUDirect. So for many of our workloads, a single pod occupies the entire node.
The way OpenAI runs large ML jobs on K8s
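A hedged sketch of a pod that claims every GPU on an 8-GPU node, so a single pod occupies the whole machine; the image name and GPU count are assumptions:
# Apply a whole-node training pod spec
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: training-job
spec:
  containers:
  - name: trainer
    image: <your-training-image>
    resources:
      limits:
        nvidia.com/gpu: 8   # claim all GPUs so no other pod fits on the node
EOF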
-
-
openai.com
-
We use Kubernetes mainly as a batch scheduling system and rely on our autoscaler to dynamically scale up and down our cluster — this lets us significantly reduce costs for idle nodes, while still providing low latency while iterating rapidly.
-
For high availability, we always have at least 2 masters, and set the --apiserver-count flag to the number of apiservers we’re running (otherwise Prometheus monitoring can get confused between instances).
Tip for high availability (see the sketch below):
- have at least 2 masters
- set the --apiserver-count flag to the number of running apiservers
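A sketch of the corresponding kube-apiserver flags; the etcd hostname is a placeholder and the remaining required flags are omitted:
# Run each apiserver instance with the count matching the deployment
kube-apiserver \
  --apiserver-count=2 \
  --etcd-servers=https://etcd-0:2379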
-
We’ve increased the max etcd size with the --quota-backend-bytes flag, and the autoscaler now has a sanity check not to take action if it would terminate more than 50% of the cluster.
With more than ~1k nodes, etcd can hit its hard storage limit and stop accepting writes
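The quota can be raised with the flag the quote mentions; the default is roughly 2 GiB, and the 8 GiB below is just an illustrative value:
# Raise etcd's backend storage quota to 8 GiB (8 * 1024^3 bytes)
etcd --quota-backend-bytes=8589934592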
-
Another helpful tweak was storing Kubernetes Events in a separate etcd cluster, so that spikes in Event creation wouldn’t affect performance of the main etcd instances.
Another trick, apart from tweaking the default settings of Fluentd & Datadog
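The kube-apiserver can route Events to a dedicated etcd cluster via --etcd-servers-overrides; the etcd hostnames below are placeholders:
# Keep Event churn away from the main etcd instances
kube-apiserver \
  --etcd-servers=https://etcd-main:2379 \
  --etcd-servers-overrides=/events#https://etcd-events:2379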
-
The root cause: the default setting for Fluentd’s and Datadog’s monitoring processes was to query the apiservers from every node in the cluster (for example, this issue which is now fixed). We simply changed these processes to be less aggressive with their polling, and load on the apiservers became stable again:
Default settings of Fluentd and Datadog might not be suited for running many nodes
-
We then moved the etcd directory for each node to the local temp disk, which is an SSD connected directly to the instance rather than a network-attached one. Switching to the local disk brought write latency to 200us, and etcd became healthy!
The fix for etcd using only about 10% of the available IOPS; the network-attached disk setup held up only until about 1k nodes (see the sketch below)
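A hedged sketch of relocating etcd's data directory onto the instance-local SSD; the paths and systemd unit name are assumptions, not from the post:
systemctl stop etcd
mv /var/lib/etcd /mnt/local-ssd/etcd      # move data onto the local SSD
ln -s /mnt/local-ssd/etcd /var/lib/etcd   # keep the original path working
systemctl start etcd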
-
- Dec 2020
-
www.pidramble.com
-
- Nov 2020
-
stackoverflow.com
-
Docker Swarm has lost. Kubernetes has won. My advice? Use docker-compose.yml for development only, stick to version: 2.4 and forget 3 exists :+1
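A minimal docker-compose.yml pinned to the 2.x schema, as the advice suggests; the single web service shown is an illustrative assumption:
# Write a development-only compose file pinned to version 2.4
cat <<EOF > docker-compose.yml
version: "2.4"
services:
  web:
    build: .
    ports:
      - "8000:8000"
EOF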
-
- Oct 2020
-
marketplace.visualstudio.com
-
www.spectrocloud.com
-
Kubernetes doesn’t have the ability to schedule and manage GPU resources
But that capability is provided via a device plugin (see the sketch below)
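A sketch of enabling GPU scheduling by installing NVIDIA's device plugin DaemonSet; the version tag below is illustrative and may not be current:
kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.9.0/nvidia-device-plugin.yml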
-
- May 2020
-
docs.gitlab.com
- Apr 2020
-
www.reddit.com
-
It's responsible for allocating and scheduling containers, providing them with abstracted functionality like internal networking and file storage, and then monitoring the health of all of these elements and stepping in to repair or adjust them as necessary. In short, it's all about abstracting how, when and where containers are run.
Kubernetes (simple explanation)
-
-
-
You’ll see pressure to push towards “Cloud neutral” solutions using Kubernetes in various places
Kubernetes may have the advantage of being cloud neutral, but you still pay the cost of a cloud migration:
- maintaining abstractions
- isolating yourself from useful vendor-specific features
-
Heroku? App Services? App Engine?
You can get yourself set up in production in minutes to a few hours
-
Kubernetes (often irritatingly abbreviated to k8s, along with its wonderful ecosystem of esoterically named additions like helm, and flux) requires a full-time ops team to operate, and even in “managed vendor mode” on EKS/AKS/GKE the learning curve is far steeper than the alternatives.
Kubernetes:
- requires a full-time ops team to operate
- the learning curve is far steeper than the alternatives
-
Azure App Services, Google App Engine and AWS Lambda will be several orders of magnitude more productive for you as a programmer. They’ll be easier to operate in production, and more explicable and supported.
Use the closest thing to a pure-managed platform you possibly can. It will be easier to operate in production, and more explicable and supported:
- Azure App Service
- Google App Engine
- AWS Lambda
-
With the popularisation of docker and containers, there’s a lot of hype gone into things that provide “almost platform like” abstractions over Infrastructure-as-a-Service. These are all very expensive and hard work.
Kubernetes isn't required unless you work on huge problems
-
- Mar 2020
-
pythonspeed.com
-
from Docker Compose on a single machine, to Heroku and similar systems, to something like Snakemake for computational pipelines.
Other alternatives to Kubernetes:
- Docker Compose on a single machine
- Heroku and similar systems
- Snakemake for computational pipelines
-
if what you care about is downtime, your first thought shouldn’t be “how do I reduce deployment downtime from 1 second to 1ms”, it should be “how can I ensure database schema changes don’t prevent rollback if I screw something up.”
Caring about downtime
-
The features Kubernetes provides for reliability (health checks, rolling deploys), can be implemented much more simply, or already built-in in many cases. For example, nginx can do health checks on worker processes, and you can use docker-autoheal or something similar to automatically restart those processes.
Kubernetes' health checks can be replaced with nginx on worker processes + docker-autoheal to automatically restart those processes
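A hedged sketch of the docker-autoheal side of this setup, using the community willfarrell/autoheal image; watching all containers rather than only labelled ones is an assumption:
# Restart any container whose Docker health check reports unhealthy
docker run -d \
  --name autoheal \
  --restart=always \
  -e AUTOHEAL_CONTAINER_LABEL=all \
  -v /var/run/docker.sock:/var/run/docker.sock \
  willfarrell/autoheal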
-
Scaling for many web applications is typically bottlenecked by the database, not the web workers.
-
Kubernetes might be useful if you need to scale a lot. But let’s consider some alternatives
Kubernetes alternatives:
- cloud VMs with up to 416 vCPUs and 8 TiB RAM
- scale many web apps with Heroku
-
Distributed applications are really hard to write correctly. Really. The more moving parts, the more these problems come into play. Distributed applications are hard to debug. You need whole new categories of instrumentation and logging to get understanding that isn't quite as good as what you'd get from the logs of a monolithic application.
Microservices remain a hard nut to crack.
They are fine as an organisational scaling technique: when you have 500 developers working on one live website, they let teams work independently. For example, each team of 5 developers can own one microservice
-
you need to spin up a complete K8s system just to test anything, via a VM or nested Docker containers.
You need a complete K8s system just to test anything; alternatively, you can use Telepresence to code locally against a remote Kubernetes cluster (see the sketch below)
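A sketch of the Telepresence workflow (v2-style commands; the service name and port are placeholders):
# Bridge your laptop into the remote cluster's network
telepresence connect
# Reroute the cluster's traffic for one service to your local process
telepresence intercept my-service --port 8080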
-
“Kubernetes is a large system with significant operational complexity. The assessment team found configuration and deployment of Kubernetes to be non-trivial, with certain components having confusing default settings, missing operational controls, and implicitly defined security controls.”
Deployment of Kubernetes is non-trivial
-
Before you can run a single application, you need the following highly-simplified architecture
Before running even the simplest Kubernetes app, you already need the multi-component architecture sketched in the article
-
the Kubernetes codebase has significant room for improvement. The codebase is large and complex, with large sections of code containing minimal documentation and numerous dependencies, including systems external to Kubernetes.
As of March 2020, the Kubernetes code base has more than 580,000 lines of Go code
-
Kubernetes has plenty of moving parts—concepts, subsystems, processes, machines, code—and that means plenty of problems.
Kubernetes might be not the best solution in a smaller team
-
- Feb 2020
-
github.com
-
- Jan 2020
-
console.cloud.google.com
-
Knative
-
-
knative.dev
-
Kubernetes
-
- Jul 2019
-
engineering.dollarshaveclub.com
- Jun 2019
- May 2019
-
kubernetes.io
-
Installing runtime
apt-get install -y docker.io
-
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
Install Docker container runtime first.
apt-get install -y docker.io
-
-
kubernetes.io
-
Joining your nodes
Install runtime.
sudo -i
apt-get update && apt-get upgrade -y
apt-get install -y docker.io
Install kubeadm, kubelet and kubectl.
https://kubernetes.io/docs/setup/independent/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
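Once the packages are installed, a worker joins the cluster with a single kubeadm join command; the master address, token, and CA-cert hash below are placeholders printed by kubeadm init:
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>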
-
- Apr 2019
- Mar 2019
-
testdriven.io
-
www.shapeblock.com
-
OpenShift vs Kubernetes
-
-
www.thedevelopersconference.com.br
-
CI/CD pipeline on Kubernetes using Jenkins and Spinnaker
Wow! Many topics from the LPI DevOps exam are covered in this talk. Keep an eye on topic 702: Container Management.
-
- Feb 2019
-
github.com
- Jan 2019
- Dec 2018
-
github.com
-
offlinehacker.github.io
-
rzetterberg.github.io
-
- Jan 2018
- Jul 2017
-
www.infoq.com
-
This figure shows the Inception-v3 model that Google introduced in 2015. It reaches 95% accuracy on the ImageNet dataset. However, the model contains 25 million parameters, and classifying a single image takes 5 billion addition or multiplication operations.
95% accuracy requires 25,000,000 parameters!
-