**kube-apiserver**
The API Server is the only control plane component that talks to the key-value store, both to read from it and to save Kubernetes cluster state information.
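As a sketch of that exclusive apiserver-to-etcd relationship, these are the real kube-apiserver flags that wire it to the key-value store; the addresses and paths below are illustrative placeholders, not values from these notes:

```shell
# The key-value store (etcd) endpoints and the client certs the API Server
# uses to reach it; all addresses/paths here are placeholders.
kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt \
  --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
```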
**kube-scheduler**
The scheduler obtains resource usage data for each worker node in the cluster from the key-value store, via the API Server.
The scheduler also receives from the API Server the new workload object's requirements, which are part of its configuration data.
The scheduling algorithm filters the nodes, and the outcome of the decision process is communicated back to the API Server, which then delegates the workload deployment to other control plane agents.
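A minimal Pod sketch of the "requirements" the scheduler reads via the API Server (names and values are illustrative): nodes without this much free CPU/memory are filtered out before one is picked.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod          # placeholder name
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:           # what the scheduler filters nodes against
        cpu: "500m"
        memory: "256Mi"
```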
Mike Kail, CTO and cofounder at CYBRIC: “Let’s say an application environment is your old-school lunchbox. The contents of the lunchbox were all assembled well before putting them into the lunchbox [but] there was no isolation between any of those contents. The Kubernetes system provides a lunchbox that allows for just-in-time expansion of the contents (scaling) and full isolation between every unique item in the lunchbox and the ability to remove any item without affecting any of the other contents (immutability).”
k8s pod
Deploying an ELK Stack log collection platform on Kubernetes (Part 1)
Is the Java crowd starting to shift its attention from Spring Cloud to k8s, or even k8s + Istio?
Tips on passing your CKA exam
test
k8s networking handbook
Kubernetes Pod Termination Lifecycle
Kubernetes RBAC 好文章
IPVS proxy mode
IP Virtual Server
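A sketch of switching kube-proxy to IPVS mode via its configuration object (values illustrative; the node needs the IPVS kernel modules loaded):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin; IPVS offers other schedulers such as lc and sh
```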
**Protection**
Vulnerability management tools will identify all known vulnerabilities in base images and packages and provide upgrade recommendations. When vulnerabilities can't be patched, or no patch is available, virtual patching and other runtime protections can be useful compensating controls. For Kubernetes components, this is another reason to consider managed Kubernetes offerings rather than rolling your own. All the major cloud providers' managed Kubernetes offerings lock down the kubelet component by default and are not susceptible to this exploit. For those self-managing Kubernetes clusters, tools like Prisma Cloud can identify insecure components to secure using our Kubernetes audits. Integrations with Open Policy Agent (OPA) can also prevent spinning up privileged containers and other violations of secure Kubernetes practices.
container security
The kubelet doesn't manage containers which were not created by Kubernetes.
**kubelet**
Synopsis: The kubelet is the primary "node agent" that runs on each node. It can register the node with the apiserver using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider.
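The three registration paths from the synopsis can be sketched with real kubelet flags (flag names are genuine; the values are placeholders):

```shell
# 1. register under the node's detected hostname
kubelet
# 2. override the hostname explicitly
kubelet --hostname-override=node-1
# 3. defer node naming/initialization to cloud-provider logic
kubelet --cloud-provider=external
```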
Seems like an interesting talk on k8s.
Listened to half of it; the "Builders and Operators" here refer to ops people, not the operator in the k8s controller sense. Will revisit it when I get the chance.
Following along by deploying a cluster myself with kubeadm might be worthwhile.
even if it is being drained of workload applications.
Q: Wouldn't that cause problems? Is there some event that gets triggered?
Pods that are part of a DaemonSet tolerate being run on an unschedulable Node.
Q: Marking a node as unschedulable doesn't evict pods that are already running anyway, right? A: Pods in a DaemonSet can still be added dynamically afterwards.
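The toleration that lets DaemonSet Pods land on cordoned (unschedulable) Nodes looks like the snippet below; the DaemonSet controller adds it to its Pods automatically, it is shown explicitly here only for illustration:

```yaml
tolerations:
- key: node.kubernetes.io/unschedulable
  operator: Exists
  effect: NoSchedule
```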
Path to credentials to authenticate itself to the API server.
Then why is it called kubeconfig rather than kube-credentials-path? See https://github.com/zecke/Kubernetes/blob/master/docs/user-guide/kubeconfig-file.md — the kubeconfig file is exactly where authentication info is stored.
Kubernetes keeps the object for the invalid Node and continues checking to see whether it becomes healthy. You, or a controller, must explicitly delete the Node object to stop that health checking.
Is there some mechanism for checking this abnormal state?
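The inspect-then-delete flow from the quote above, as kubectl commands ("node-1" is a placeholder name):

```shell
# List Nodes, inspect the unhealthy one's Conditions, then delete its
# object to stop Kubernetes from health-checking it.
kubectl get nodes
kubectl describe node node-1    # look at the Conditions section
kubectl delete node node-1      # removes the Node object; checking stops
```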
via 磊哥
Maybe good reading material on k8s.
discussion on why deployment.spec.selector is immutable
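A quick way to see the immutability for yourself ("web" and the labels are placeholders): creating a Deployment and then patching its selector is rejected by the API Server with an "Invalid value … field is immutable"-style error.

```shell
kubectl create deployment web --image=nginx:1.25
# Attempting to change spec.selector after creation fails:
kubectl patch deployment web --type merge \
  -p '{"spec":{"selector":{"matchLabels":{"app":"renamed"}}}}'
```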
Kubernetes Resource Management, QoS, Resource Quota
keynote:
resource quota related:

nice article, exactly what I am looking for
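A minimal ResourceQuota sketch for reference (namespace, name, and limits are illustrative): it caps aggregate requests/limits and object counts per namespace.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota      # placeholder name
  namespace: dev        # placeholder namespace
spec:
  hard:
    pods: "20"
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```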
create kubeconfig file using service account
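One way to build such a kubeconfig by hand (requires kubectl >= 1.24 for `kubectl create token`; all names, server address, and file paths are placeholders):

```shell
# Create the ServiceAccount and mint a short-lived token for it.
kubectl create serviceaccount demo-sa -n default
TOKEN=$(kubectl create token demo-sa -n default)

# Assemble a standalone kubeconfig around that token.
kubectl config set-cluster demo-cluster \
  --server=https://API_SERVER:6443 \
  --certificate-authority=ca.crt --embed-certs=true \
  --kubeconfig=sa.kubeconfig
kubectl config set-credentials demo-sa --token="$TOKEN" --kubeconfig=sa.kubeconfig
kubectl config set-context demo --cluster=demo-cluster --user=demo-sa \
  --kubeconfig=sa.kubeconfig
kubectl config use-context demo --kubeconfig=sa.kubeconfig
```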
Seems to be another k8s dashboard; the pages are comprehensive, but it feels fairly complex to use.
Worth keeping an eye on.
To me, abandoning all these live upgrades in order to have only k8s is like someone asking me to get rid of all error and exception handling and reboot the computer each time a small thing goes wrong.
The Function-as-a-Service offerings often have multiple fine-grained, updateable code modules (functions) running within the same VM, which comes pretty close to the Erlang model.
Then add a service mesh, which in some cases can do automatic retries at the network layer, and you start to recoup some of the supervisor-tree advantages a little more.
Really fun article though, talking about the digital matter that is code and how we handle it. A great reminder that there's much to explore, and some really great works we could be looking to.
**The orchestration war**
The orchestration war between Docker and Kubernetes was premised on Docker not settling for the status quo: not content to remain merely the "behind-the-scenes hero" providing application packaging and distribution, Docker wanted to move into the full "PaaS" business. A full PaaS needs more than applications; it also has to provide orchestration, cluster management, and load balancing. So the appearance of Docker Compose and Swarm was inevitable, and Kubernetes emerged in response to the challenge of Docker's push toward PaaS.
Rook turns distributed storage systems into self-managing, self-scaling, self-healing storage services. It automates the tasks of a storage administrator: deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management.
**Replication Controller (RC)**
A ReplicationController (RC) ensures that the user-defined number of Pod replicas stays constant: if there are too many Pods, the ReplicationController terminates the extras; if there are too few, it creates new ones.
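A minimal ReplicationController sketch (names and image are illustrative): the controller keeps exactly `replicas` Pods matching the selector running.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc          # placeholder name
spec:
  replicas: 3           # desired Pod count the RC maintains
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web        # must match the selector above
    spec:
      containers:
      - name: web
        image: nginx:1.25
```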