Installing k8s on CentOS 7.9

Published 2022-03-01 09:37:40 · Author: yexindonglai@163.com

Preface

Prepare at least three CentOS servers: one master node and two worker nodes. The CentOS version should be 7.5 or later; I am using 7.9 here. Beyond that, a few extra requirements apply:

  • At least 2 CPU cores and 2 GB of RAM per machine (a single core will not work; I have tried it)

I. Preparing the k8s environment

The machines that will run k8s must meet the following requirements:

  1. A Debian- or Red Hat-based Linux distribution, or a distribution that ships no package manager; these are the systems for which generic installation instructions are available
  2. At least 2 GB of RAM and 2 CPU cores per host
  3. The firewall is best turned off
  4. Hostnames, MAC addresses, and product_uuid values must not be duplicated across nodes

Next, configure and install the following items on every node.

1. Set a unique static IP on every machine

Open the NIC configuration file /etc/sysconfig/network-scripts/ifcfg-ens33 with vi:

  # Change BOOTPROTO to static
  BOOTPROTO=static
  # Add the IP, gateway, and DNS addresses; the gateway can be found with the command "netstat -rn"
  IPADDR=192.168.253.131
  GATEWAY=192.168.253.2
  DNS1=8.8.8.8
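After saving the file, the change can be applied and checked with something like the following (a minimal sketch; it assumes the interface is named ens33 and is managed by the legacy network service, so adjust both to your environment):

  # Restart the network service so the static IP takes effect
  systemctl restart network
  # Confirm the interface now carries the expected address
  ip addr show ens33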

2. Time synchronization

k8s requires the clocks of all cluster nodes to be precisely in sync, so we simply use chronyd to synchronize time over the network.

  # Start the chronyd service
  systemctl start chronyd
  # Enable it at boot
  systemctl enable chronyd
  # Check the current time
  date
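If you want to confirm that chronyd is actually reaching a time source, its client tool can list the servers it is syncing from (a small optional check, assuming the default chrony configuration):

  # Show the NTP sources chronyd is using and their reachability
  chronyc sources -v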

3. Reset the hostnames

Hostnames must be unique within a k8s cluster, so give each node a different name.

  # Master node
  hostnamectl set-hostname master
  # Worker node 1
  hostnamectl set-hostname node1
  # Worker node 2
  hostnamectl set-hostname node2

4. Set up hosts name mappings

Run the following command on every node to add a hosts entry pointing to the master node. Do not copy it verbatim; replace the IP with your own.

  # Run on all nodes
  echo "192.168.253.131 cluster-endpoint" >> /etc/hosts

The master hostname is needed during initialization, so add the following entries to hosts as well; the hostnames must match the ones set in step 3.

  # Run the following only on the master node
  echo "192.168.253.131 master" >> /etc/hosts
  echo "192.168.253.132 node1" >> /etc/hosts
  echo "192.168.253.133 node2" >> /etc/hosts

5. Disable SELinux

SELinux (Security-Enhanced Linux) is a security module that provides access-control security policies; in short, it is a feature that restricts users to the policies and rules set by the system administrator. Here we need to set SELinux to permissive mode (which effectively disables it).

  # Disable temporarily (reverts after reboot); "setenforce Permissive" has the same effect
  setenforce 0
  # Disable permanently
  sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
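To double-check the result (getenforce should report Permissive right away; the config file change takes effect after the next reboot):

  # Current runtime mode
  getenforce
  # Mode that will be used after a reboot
  grep ^SELINUX= /etc/selinux/config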

6. Disable the swap partition

The swap partition is virtual memory: when physical memory runs low, the system moves part of it out to a reserved area on disk so the running programs can keep going. Turning swap off improves k8s performance.

  # Turn off the swap partition temporarily (reverts after reboot)
  swapoff -a
  # Turn off the swap partition permanently
  sed -ri 's/.*swap.*/#&/' /etc/fstab
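A quick way to confirm swap is really off (the Swap line should show a total of 0):

  free -m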

7. Let iptables see bridged traffic

Pod traffic crosses a Linux bridge, and by default packets on a bridge are not inspected by iptables. Loading the br_netfilter module and enabling the sysctls below makes bridged IPv4 and IPv6 traffic pass through iptables, so that kube-proxy's rules apply to it.
Copy the following block into the command line and run it as one piece:

  cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
  br_netfilter
  EOF
  cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  EOF

Apply the configuration above:

  sudo sysctl --system
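To verify that the module and sysctls are in effect (a small check; if br_netfilter has not been loaded yet, modprobe br_netfilter loads it immediately instead of waiting for a reboot):

  # The module should appear in the loaded-module list
  lsmod | grep br_netfilter
  # Both values should print 1
  sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables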

8. Disable the iptables and firewalld services

k8s and Docker generate a large number of iptables rules at runtime; to keep the system's own rules from getting mixed up with them, simply turn the system services off.

  # Stop iptables; skip this if the service does not exist
  systemctl stop iptables
  systemctl disable iptables
  # Stop the firewall
  systemctl stop firewalld
  systemctl disable firewalld

9. Enable ipvs support

In Kubernetes a Service has two proxy modes, one based on iptables and one based on ipvs. Comparing the two, ipvs performs noticeably better, but using it requires loading the ipvs kernel modules manually.

1. Install ipset and ipvsadm
  yum install ipset ipvsadm -y
2. Write the modules that need to be loaded into a script file
  cat <<EOF> /etc/sysconfig/modules/ipvs.modules
  #!/bin/bash
  modprobe -- ip_vs
  modprobe -- ip_vs_rr
  modprobe -- ip_vs_wrr
  modprobe -- ip_vs_sh
  modprobe -- nf_conntrack_ipv4
  EOF
3. Make the script executable
  chmod +x /etc/sysconfig/modules/ipvs.modules
4. Run the script
  /bin/bash /etc/sysconfig/modules/ipvs.modules
5. Check that the modules loaded successfully
  lsmod | grep -e ip_vs -e nf_conntrack_ipv4

II. Installing Docker

First, we need to install the Docker runtime on every server.

1. Remove any existing Docker

  # List the installed docker packages
  yum list installed | grep docker
  # Remove the docker-related components
  yum -y remove docker*
  # Besides docker itself, also remove containerd.io, the container runtime component
  yum -y remove containerd.io.x86_64
  # Delete the docker data directory
  rm -rf /var/lib/docker

2. Use the Aliyun mirror repository inside China

  # Use the Aliyun mirror repository inside China -- much faster, recommended
  yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
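If the yum-config-manager command is not found, it is provided by the yum-utils package, which can be installed first (an assumption about a fresh minimal install):

  yum install -y yum-utils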

3. Install a specific Docker version

  yum install \
    docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6 -y

4. Use systemd instead of cgroupfs, and configure a registry mirror

By default Docker uses cgroupfs as its cgroup driver, while k8s recommends systemd instead. Add the following to /etc/docker/daemon.json (create the file manually if it does not exist):

  {
    "registry-mirrors": ["https://q2hy3fzi.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"]
  }

5. Start Docker

  # Option 1: just start docker
  systemctl start docker
  # Option 2: start docker and enable it at boot
  systemctl enable docker --now
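To confirm Docker picked up the daemon.json settings from the previous step (the cgroup driver should read systemd):

  # Restart Docker if it was already running when daemon.json was edited
  systemctl daemon-reload && systemctl restart docker
  # Inspect the active cgroup driver
  docker info | grep -i 'cgroup driver'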

III. Installing the three k8s components

Perform the following steps on all nodes.

1. Install kubelet, kubeadm, and kubectl

Downloading from the official k8s repositories is very slow from inside China, so installing from the Aliyun mirror is recommended here.

  # Configure the repo
  cat <<EOF > /etc/yum.repos.d/kubernetes.repo
  [kubernetes]
  name=Kubernetes
  baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
  enabled=1
  gpgcheck=0
  repo_gpgcheck=0
  gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  EOF

Notes

  • enabled=1: enables the repository
  • gpgcheck=0: whether to verify the GPG signatures of packages; 1 enables, 0 disables
  • repo_gpgcheck=0: whether to verify the signature and integrity of the repo metadata; 1 enables, 0 disables

2. Install the three components

--disableexcludes=kubernetes disables any package excludes defined for the kubernetes repo.

  yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

3. Add configuration

Add the following to the /etc/sysconfig/kubelet file:

  KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
  KUBE_PROXY_MODE="ipvs"

Notes

  • k8s also uses systemd in place of cgroupfs
  • k8s's default proxy mode is iptables; this switches it to ipvs, which is said to perform better. You can also leave this unset.

4. Start kubelet

  # Enable at boot and start immediately
  systemctl enable --now kubelet
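Note that at this point kubelet will keep restarting in a crash loop; it is waiting for kubeadm init or kubeadm join to hand it a configuration, so this is expected. Its state can be watched with:

  systemctl status kubelet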

IV. Bootstrapping the cluster with kubeadm

In theory, the following only needs to be done on the master node.

1. Download the images each machine needs

Before initializing the master node we need to pull a number of component images. They are hosted on registries that are difficult to reach from inside China, so we swap the official registry for an Aliyun mirror. Running the command below writes an images.sh script into the current directory; running that script pulls the required images into Docker. ==In theory this only needs to run on the master node, but to be on the safe side we run it on every node.==

  sudo tee ./images.sh <<-'EOF'
  #!/bin/bash
  images=(
  kube-apiserver:v1.20.9
  kube-proxy:v1.20.9
  kube-controller-manager:v1.20.9
  kube-scheduler:v1.20.9
  coredns:1.7.0
  etcd:3.4.13-0
  pause:3.2
  )
  for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
  done
  EOF

Make the script executable and run it:

  chmod +x ./images.sh && ./images.sh
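Once the script finishes, the pulled images can be listed to confirm they all arrived:

  docker images | grep registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images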

V. Preparing the master node

Note: the configuration and installation below belong to the master node only, so perform them only on the master, not on the worker nodes.

1. Initialize the master node

Use kubeadm init to quickly initialize a master node:

  kubeadm init \
    --apiserver-advertise-address=192.168.253.131 \
    --control-plane-endpoint=cluster-endpoint \
    --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
    --kubernetes-version v1.20.9 \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=192.168.0.0/16

Notes

  • --apiserver-advertise-address: the address the API server advertises; this must be the master node's IP
  • --control-plane-endpoint: the control-plane endpoint; use the master domain name added to the hosts file earlier
  • --image-repository: the image repository; a domestic mirror is noticeably faster
  • --kubernetes-version: the version number; keep it in line with the installed k8s version
  • --service-cidr: the Service network range
  • --pod-network-cidr: the Pod network range
  • --ignore-preflight-errors=all: ignore preflight check errors. Kubernetes requires at least 2 CPU cores and 2 GB of RAM; with only 1 core or only 1 GB of RAM the install cannot proceed. "all" skips every check, so even a single-core machine can install k8s. My cloud server has a single core and 2 GB of RAM, so I have to add this flag.

The installation takes a few minutes. Output like the following means it succeeded, but that is not the end of it: to actually use the cluster, keep going with the steps below.

  Your Kubernetes control-plane has initialized successfully!
  To start using your cluster, you need to run the following as a regular user:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  Alternatively, if you are the root user, you can run:
    export KUBECONFIG=/etc/kubernetes/admin.conf
  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/
  You can now join any number of control-plane nodes by copying certificate authorities
  and service account keys on each node and then running the following as root:
    kubeadm join cluster-endpoint:6443 --token ppwpeo.286k19gvjdlelen8 \
      --discovery-token-ca-cert-hash sha256:1e402bf817b1f8f2ade7aeb0a702c389903a96e72724517409793e7b4904ee72 \
      --control-plane
  Then you can join any number of worker nodes by running the following on each as root:
    kubeadm join cluster-endpoint:6443 --token ppwpeo.286k19gvjdlelen8 \
      --discovery-token-ca-cert-hash sha256:1e402bf817b1f8f2ade7aeb0a702c389903a96e72724517409793e7b4904ee72

2. Post-initialization

Continue with the configuration from the hints above. To use the cluster you still need to run the following commands.
==Remember: these are the commands copied from my successful init output in step 1; copy and run the ones from your own output.==

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
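From this point kubectl can talk to the cluster; a quick check (the master will show NotReady until the network plugin is installed in a later step):

  kubectl get nodes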

3. Join the worker nodes to the cluster (join token, ==run only on the worker nodes==)

To add worker nodes to the cluster, run the following command on each worker node; this is the cluster's join token.
==Remember: this is the command copied from my successful init output in step 1; copy and run the one from your own output.==

Run the following only on the worker nodes.

  kubeadm join cluster-endpoint:6443 --token ppwpeo.286k19gvjdlelen8 \
    --discovery-token-ca-cert-hash sha256:1e402bf817b1f8f2ade7aeb0a702c389903a96e72724517409793e7b4904ee72

After joining, run kubectl get nodes on the master node. Besides the master, the two worker nodes now show up, and all of them are in the NotReady state because no network plugin has been installed yet; that is exactly what the next step takes care of.

  [root@master ~]# kubectl get nodes
  NAME     STATUS     ROLES                  AGE    VERSION
  master   NotReady   control-plane,master   4h5m   v1.20.9
  node1    NotReady   <none>                 98m    v1.20.9
  node2    NotReady   <none>                 7s     v1.20.9

Note that the join token is only valid for 24 hours; after that a new one has to be generated. Run the following on the master node to create a new token:

  # This command must be run on the master node to generate a token
  kubeadm token create --print-join-command

4. Install the Pod network plugin (==master node only==)

Run the following only on the master node.

There are several network plugins to choose from; readers who can reach foreign sites can download a manifest from the official add-ons page and install it: https://kubernetes.io/docs/concepts/cluster-administration/addons/

Here we install the Calico plugin.

  # Download calico.yaml to the current directory
  wget https://docs.projectcalico.org/v3.10/manifests/calico.yaml
  # This is a foreign site and some readers may not be able to reach it, so the content of calico.yaml is included at the end of this article; copy it into a file if you need to
  # Apply the network plugin
  kubectl apply -f calico.yaml

On success, output like the following means the installation is done:

  [root@master ~]# kubectl apply -f calico.yaml
  secret/calico-etcd-secrets configured
  configmap/calico-config configured
  clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
  clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
  clusterrole.rbac.authorization.k8s.io/calico-node created
  clusterrolebinding.rbac.authorization.k8s.io/calico-node created
  daemonset.apps/calico-node created
  serviceaccount/calico-node unchanged
  deployment.apps/calico-kube-controllers created
  serviceaccount/calico-kube-controllers unchanged
  poddisruptionbudget.policy/calico-kube-controllers created

After a few more minutes, all of the nodes show as Ready:

  [root@master ~]# kubectl get nodes
  NAME     STATUS   ROLES                  AGE    VERSION
  master   Ready    control-plane,master   5h5m   v1.20.9
  node1    Ready    <none>                 158m   v1.20.9
  node2    Ready    <none>                 59m    v1.20.9

VI. Automatic recovery demo

One of the strongest points of a k8s cluster is that it recovers on its own: if the machines in the cluster go down for some reason, everything comes back up automatically after a reboot, keeping the cluster highly available. Let's test this by rebooting all three servers:

  reboot

A few minutes later the k8s nodes have recovered and are all in the Ready state again; that is the automatic recovery.

  [root@master ~]# kubectl get nodes
  NAME     STATUS   ROLES                  AGE   VERSION
  master   Ready    control-plane,master   15h   v1.20.9
  node1    Ready    <none>                 13h   v1.20.9
  node2    Ready    <none>                 11h   v1.20.9

VII. Creating an nginx

Just run the following commands (on the master node):

  # Deploy nginx: create a Deployment controller named nginx
  kubectl create deployment nginx --image=nginx:1.14-alpine
  # Expose the port
  kubectl expose deployment nginx --port=80 --type=NodePort

Right after running this, the pod is still being set up; ContainerCreating means the container is in the process of being created.

  [root@master ~]# kubectl get pod
  NAME                     READY   STATUS              RESTARTS   AGE
  nginx-65c4bffcb6-q64cf   0/1     ContainerCreating   0          2m47s

Wait a few more minutes and it is done; Running means the container is up.

  [root@master ~]# kubectl get pods
  NAME                     READY   STATUS    RESTARTS   AGE
  nginx-65c4bffcb6-q64cf   1/1     Running   0          7h44m

Next, check which port was mapped. The command output below shows the port is 30115.

  [root@master ~]# kubectl get service
  NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
  kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        27h
  nginx        NodePort    10.96.149.130   <none>        80:30115/TCP   7h43m

Since the cluster has three nodes, nginx is reachable through any of the three node IPs on that port:

  # All three of these URLs serve nginx
  http://192.168.253.131:30115/
  http://192.168.253.132:30115/
  http://192.168.253.133:30115/
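A quick check from the shell works too (the NodePort, 30115 here, will differ on your cluster):

  # A 200 OK response header means nginx is reachable through the NodePort
  curl -I http://192.168.253.131:30115/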

Now that it is deployed and reachable, which node did this nginx actually land on? There are two ways to find out.

Option 1: use the describe command

  kubectl describe pod nginx-65c4bffcb6-q64cf
  # The `Events` section at the very end shows the event log; the `#` lines are comments I added
  Events:
    Type    Reason     Age   From               Message
    ----    ------     ----  ----               -------
    # Pulling the nginx image
    Normal  Pulling    33s   kubelet            Pulling image "nginx:1.14-alpine"
    # default/nginx-65c4bffcb6-q64cf was successfully assigned to node2; default is the namespace, nginx-65c4bffcb6-q64cf is the pod
    Normal  Scheduled  51s   default-scheduler  Successfully assigned default/nginx-65c4bffcb6-q64cf to node2
    # The nginx image was pulled successfully; it took 25.96447344 seconds
    Normal  Pulled     2s    kubelet            Successfully pulled image "nginx:1.14-alpine" in 25.96447344s
    # The nginx container was created
    Normal  Created    50s   kubelet            Created container nginx
    # The nginx container was started
    Normal  Started    50s   kubelet            Started container nginx

Option 2: use the -o wide flag

  kubectl get pod -o wide -n default

The result is shown below; the NODE column gives the node name.

  NAME                     READY   STATUS    RESTARTS   AGE     IP              NODE    NOMINATED NODE   READINESS GATES
  nginx-65c4bffcb6-fdjhv   1/1     Running   0          4h53m   192.168.104.8   node2   <none>           <none>

The whole installation is fairly straightforward, but you cannot afford to get a single step wrong: one mistake early on will lead to all kinds of problems once the k8s cluster is running!

VIII. calico.yaml

This content was obtained from the download link: wget https://docs.projectcalico.org/v3.10/manifests/calico.yaml

  1. ---
  2. # Source: calico/templates/calico-config.yaml
  3. # This ConfigMap is used to configure a self-hosted Calico installation.
  4. kind: ConfigMap
  5. apiVersion: v1
  6. metadata:
  7. name: calico-config
  8. namespace: kube-system
  9. data:
  10. # Typha is disabled.
  11. typha_service_name: "none"
  12. # Configure the backend to use.
  13. calico_backend: "bird"
  14. # Configure the MTU to use
  15. veth_mtu: "1440"
  16. # The CNI network configuration to install on each node. The special
  17. # values in this config will be automatically populated.
  18. cni_network_config: |-
  19. {
  20. "name": "k8s-pod-network",
  21. "cniVersion": "0.3.1",
  22. "plugins": [
  23. {
  24. "type": "calico",
  25. "log_level": "info",
  26. "datastore_type": "kubernetes",
  27. "nodename": "__KUBERNETES_NODE_NAME__",
  28. "mtu": __CNI_MTU__,
  29. "ipam": {
  30. "type": "calico-ipam"
  31. },
  32. "policy": {
  33. "type": "k8s"
  34. },
  35. "kubernetes": {
  36. "kubeconfig": "__KUBECONFIG_FILEPATH__"
  37. }
  38. },
  39. {
  40. "type": "portmap",
  41. "snat": true,
  42. "capabilities": {"portMappings": true}
  43. }
  44. ]
  45. }
  46. ---
  47. # Source: calico/templates/kdd-crds.yaml
  48. apiVersion: apiextensions.k8s.io/v1beta1
  49. kind: CustomResourceDefinition
  50. metadata:
  51. name: felixconfigurations.crd.projectcalico.org
  52. spec:
  53. scope: Cluster
  54. group: crd.projectcalico.org
  55. version: v1
  56. names:
  57. kind: FelixConfiguration
  58. plural: felixconfigurations
  59. singular: felixconfiguration
  60. ---
  61. apiVersion: apiextensions.k8s.io/v1beta1
  62. kind: CustomResourceDefinition
  63. metadata:
  64. name: ipamblocks.crd.projectcalico.org
  65. spec:
  66. scope: Cluster
  67. group: crd.projectcalico.org
  68. version: v1
  69. names:
  70. kind: IPAMBlock
  71. plural: ipamblocks
  72. singular: ipamblock
  73. ---
  74. apiVersion: apiextensions.k8s.io/v1beta1
  75. kind: CustomResourceDefinition
  76. metadata:
  77. name: blockaffinities.crd.projectcalico.org
  78. spec:
  79. scope: Cluster
  80. group: crd.projectcalico.org
  81. version: v1
  82. names:
  83. kind: BlockAffinity
  84. plural: blockaffinities
  85. singular: blockaffinity
  86. ---
  87. apiVersion: apiextensions.k8s.io/v1beta1
  88. kind: CustomResourceDefinition
  89. metadata:
  90. name: ipamhandles.crd.projectcalico.org
  91. spec:
  92. scope: Cluster
  93. group: crd.projectcalico.org
  94. version: v1
  95. names:
  96. kind: IPAMHandle
  97. plural: ipamhandles
  98. singular: ipamhandle
  99. ---
  100. apiVersion: apiextensions.k8s.io/v1beta1
  101. kind: CustomResourceDefinition
  102. metadata:
  103. name: ipamconfigs.crd.projectcalico.org
  104. spec:
  105. scope: Cluster
  106. group: crd.projectcalico.org
  107. version: v1
  108. names:
  109. kind: IPAMConfig
  110. plural: ipamconfigs
  111. singular: ipamconfig
  112. ---
  113. apiVersion: apiextensions.k8s.io/v1beta1
  114. kind: CustomResourceDefinition
  115. metadata:
  116. name: bgppeers.crd.projectcalico.org
  117. spec:
  118. scope: Cluster
  119. group: crd.projectcalico.org
  120. version: v1
  121. names:
  122. kind: BGPPeer
  123. plural: bgppeers
  124. singular: bgppeer
  125. ---
  126. apiVersion: apiextensions.k8s.io/v1beta1
  127. kind: CustomResourceDefinition
  128. metadata:
  129. name: bgpconfigurations.crd.projectcalico.org
  130. spec:
  131. scope: Cluster
  132. group: crd.projectcalico.org
  133. version: v1
  134. names:
  135. kind: BGPConfiguration
  136. plural: bgpconfigurations
  137. singular: bgpconfiguration
  138. ---
  139. apiVersion: apiextensions.k8s.io/v1beta1
  140. kind: CustomResourceDefinition
  141. metadata:
  142. name: ippools.crd.projectcalico.org
  143. spec:
  144. scope: Cluster
  145. group: crd.projectcalico.org
  146. version: v1
  147. names:
  148. kind: IPPool
  149. plural: ippools
  150. singular: ippool
  151. ---
  152. apiVersion: apiextensions.k8s.io/v1beta1
  153. kind: CustomResourceDefinition
  154. metadata:
  155. name: hostendpoints.crd.projectcalico.org
  156. spec:
  157. scope: Cluster
  158. group: crd.projectcalico.org
  159. version: v1
  160. names:
  161. kind: HostEndpoint
  162. plural: hostendpoints
  163. singular: hostendpoint
  164. ---
  165. apiVersion: apiextensions.k8s.io/v1beta1
  166. kind: CustomResourceDefinition
  167. metadata:
  168. name: clusterinformations.crd.projectcalico.org
  169. spec:
  170. scope: Cluster
  171. group: crd.projectcalico.org
  172. version: v1
  173. names:
  174. kind: ClusterInformation
  175. plural: clusterinformations
  176. singular: clusterinformation
  177. ---
  178. apiVersion: apiextensions.k8s.io/v1beta1
  179. kind: CustomResourceDefinition
  180. metadata:
  181. name: globalnetworkpolicies.crd.projectcalico.org
  182. spec:
  183. scope: Cluster
  184. group: crd.projectcalico.org
  185. version: v1
  186. names:
  187. kind: GlobalNetworkPolicy
  188. plural: globalnetworkpolicies
  189. singular: globalnetworkpolicy
  190. ---
  191. apiVersion: apiextensions.k8s.io/v1beta1
  192. kind: CustomResourceDefinition
  193. metadata:
  194. name: globalnetworksets.crd.projectcalico.org
  195. spec:
  196. scope: Cluster
  197. group: crd.projectcalico.org
  198. version: v1
  199. names:
  200. kind: GlobalNetworkSet
  201. plural: globalnetworksets
  202. singular: globalnetworkset
  203. ---
  204. apiVersion: apiextensions.k8s.io/v1beta1
  205. kind: CustomResourceDefinition
  206. metadata:
  207. name: networkpolicies.crd.projectcalico.org
  208. spec:
  209. scope: Namespaced
  210. group: crd.projectcalico.org
  211. version: v1
  212. names:
  213. kind: NetworkPolicy
  214. plural: networkpolicies
  215. singular: networkpolicy
  216. ---
  217. apiVersion: apiextensions.k8s.io/v1beta1
  218. kind: CustomResourceDefinition
  219. metadata:
  220. name: networksets.crd.projectcalico.org
  221. spec:
  222. scope: Namespaced
  223. group: crd.projectcalico.org
  224. version: v1
  225. names:
  226. kind: NetworkSet
  227. plural: networksets
  228. singular: networkset
  229. ---
  230. # Source: calico/templates/rbac.yaml
  231. # Include a clusterrole for the kube-controllers component,
  232. # and bind it to the calico-kube-controllers serviceaccount.
  233. kind: ClusterRole
  234. apiVersion: rbac.authorization.k8s.io/v1
  235. metadata:
  236. name: calico-kube-controllers
  237. rules:
  238. # Nodes are watched to monitor for deletions.
  239. - apiGroups: [""]
  240. resources:
  241. - nodes
  242. verbs:
  243. - watch
  244. - list
  245. - get
  246. # Pods are queried to check for existence.
  247. - apiGroups: [""]
  248. resources:
  249. - pods
  250. verbs:
  251. - get
  252. # IPAM resources are manipulated when nodes are deleted.
  253. - apiGroups: ["crd.projectcalico.org"]
  254. resources:
  255. - ippools
  256. verbs:
  257. - list
  258. - apiGroups: ["crd.projectcalico.org"]
  259. resources:
  260. - blockaffinities
  261. - ipamblocks
  262. - ipamhandles
  263. verbs:
  264. - get
  265. - list
  266. - create
  267. - update
  268. - delete
  269. # Needs access to update clusterinformations.
  270. - apiGroups: ["crd.projectcalico.org"]
  271. resources:
  272. - clusterinformations
  273. verbs:
  274. - get
  275. - create
  276. - update
  277. ---
  278. kind: ClusterRoleBinding
  279. apiVersion: rbac.authorization.k8s.io/v1
  280. metadata:
  281. name: calico-kube-controllers
  282. roleRef:
  283. apiGroup: rbac.authorization.k8s.io
  284. kind: ClusterRole
  285. name: calico-kube-controllers
  286. subjects:
  287. - kind: ServiceAccount
  288. name: calico-kube-controllers
  289. namespace: kube-system
  290. ---
  291. # Include a clusterrole for the calico-node DaemonSet,
  292. # and bind it to the calico-node serviceaccount.
  293. kind: ClusterRole
  294. apiVersion: rbac.authorization.k8s.io/v1
  295. metadata:
  296. name: calico-node
  297. rules:
  298. # The CNI plugin needs to get pods, nodes, and namespaces.
  299. - apiGroups: [""]
  300. resources:
  301. - pods
  302. - nodes
  303. - namespaces
  304. verbs:
  305. - get
  306. - apiGroups: [""]
  307. resources:
  308. - endpoints
  309. - services
  310. verbs:
  311. # Used to discover service IPs for advertisement.
  312. - watch
  313. - list
  314. # Used to discover Typhas.
  315. - get
  316. - apiGroups: [""]
  317. resources:
  318. - nodes/status
  319. verbs:
  320. # Needed for clearing NodeNetworkUnavailable flag.
  321. - patch
  322. # Calico stores some configuration information in node annotations.
  323. - update
  324. # Watch for changes to Kubernetes NetworkPolicies.
  325. - apiGroups: ["networking.k8s.io"]
  326. resources:
  327. - networkpolicies
  328. verbs:
  329. - watch
  330. - list
  331. # Used by Calico for policy information.
  332. - apiGroups: [""]
  333. resources:
  334. - pods
  335. - namespaces
  336. - serviceaccounts
  337. verbs:
  338. - list
  339. - watch
  340. # The CNI plugin patches pods/status.
  341. - apiGroups: [""]
  342. resources:
  343. - pods/status
  344. verbs:
  345. - patch
  346. # Calico monitors various CRDs for config.
  347. - apiGroups: ["crd.projectcalico.org"]
  348. resources:
  349. - globalfelixconfigs
  350. - felixconfigurations
  351. - bgppeers
  352. - globalbgpconfigs
  353. - bgpconfigurations
  354. - ippools
  355. - ipamblocks
  356. - globalnetworkpolicies
  357. - globalnetworksets
  358. - networkpolicies
  359. - networksets
  360. - clusterinformations
  361. - hostendpoints
  362. - blockaffinities
  363. verbs:
  364. - get
  365. - list
  366. - watch
  367. # Calico must create and update some CRDs on startup.
  368. - apiGroups: ["crd.projectcalico.org"]
  369. resources:
  370. - ippools
  371. - felixconfigurations
  372. - clusterinformations
  373. verbs:
  374. - create
  375. - update
  376. # Calico stores some configuration information on the node.
  377. - apiGroups: [""]
  378. resources:
  379. - nodes
  380. verbs:
  381. - get
  382. - list
  383. - watch
  384. # These permissions are only requried for upgrade from v2.6, and can
  385. # be removed after upgrade or on fresh installations.
  386. - apiGroups: ["crd.projectcalico.org"]
  387. resources:
  388. - bgpconfigurations
  389. - bgppeers
  390. verbs:
  391. - create
  392. - update
  393. # These permissions are required for Calico CNI to perform IPAM allocations.
  394. - apiGroups: ["crd.projectcalico.org"]
  395. resources:
  396. - blockaffinities
  397. - ipamblocks
  398. - ipamhandles
  399. verbs:
  400. - get
  401. - list
  402. - create
  403. - update
  404. - delete
  405. - apiGroups: ["crd.projectcalico.org"]
  406. resources:
  407. - ipamconfigs
  408. verbs:
  409. - get
  410. # Block affinities must also be watchable by confd for route aggregation.
  411. - apiGroups: ["crd.projectcalico.org"]
  412. resources:
  413. - blockaffinities
  414. verbs:
  415. - watch
  416. # The Calico IPAM migration needs to get daemonsets. These permissions can be
  417. # removed if not upgrading from an installation using host-local IPAM.
  418. - apiGroups: ["apps"]
  419. resources:
  420. - daemonsets
  421. verbs:
  422. - get
  423. ---
  424. apiVersion: rbac.authorization.k8s.io/v1
  425. kind: ClusterRoleBinding
  426. metadata:
  427. name: calico-node
  428. roleRef:
  429. apiGroup: rbac.authorization.k8s.io
  430. kind: ClusterRole
  431. name: calico-node
  432. subjects:
  433. - kind: ServiceAccount
  434. name: calico-node
  435. namespace: kube-system
  436. ---
  437. # Source: calico/templates/calico-node.yaml
  438. # This manifest installs the calico-node container, as well
  439. # as the CNI plugins and network config on
  440. # each master and worker node in a Kubernetes cluster.
  441. kind: DaemonSet
  442. apiVersion: apps/v1
  443. metadata:
  444. name: calico-node
  445. namespace: kube-system
  446. labels:
  447. k8s-app: calico-node
  448. spec:
  449. selector:
  450. matchLabels:
  451. k8s-app: calico-node
  452. updateStrategy:
  453. type: RollingUpdate
  454. rollingUpdate:
  455. maxUnavailable: 1
  456. template:
  457. metadata:
  458. labels:
  459. k8s-app: calico-node
  460. annotations:
  461. # This, along with the CriticalAddonsOnly toleration below,
  462. # marks the pod as a critical add-on, ensuring it gets
  463. # priority scheduling and that its resources are reserved
  464. # if it ever gets evicted.
  465. scheduler.alpha.kubernetes.io/critical-pod: ''
  466. spec:
  467. nodeSelector:
  468. beta.kubernetes.io/os: linux
  469. hostNetwork: true
  470. tolerations:
  471. # Make sure calico-node gets scheduled on all nodes.
  472. - effect: NoSchedule
  473. operator: Exists
  474. # Mark the pod as a critical add-on for rescheduling.
  475. - key: CriticalAddonsOnly
  476. operator: Exists
  477. - effect: NoExecute
  478. operator: Exists
  479. serviceAccountName: calico-node
  480. # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
  481. # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
  482. terminationGracePeriodSeconds: 0
  483. priorityClassName: system-node-critical
  484. initContainers:
  485. # This container performs upgrade from host-local IPAM to calico-ipam.
  486. # It can be deleted if this is a fresh installation, or if you have already
  487. # upgraded to use calico-ipam.
  488. - name: upgrade-ipam
  489. image: calico/cni:v3.10.4
  490. command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
  491. env:
  492. - name: KUBERNETES_NODE_NAME
  493. valueFrom:
  494. fieldRef:
  495. fieldPath: spec.nodeName
  496. - name: CALICO_NETWORKING_BACKEND
  497. valueFrom:
  498. configMapKeyRef:
  499. name: calico-config
  500. key: calico_backend
  501. volumeMounts:
  502. - mountPath: /var/lib/cni/networks
  503. name: host-local-net-dir
  504. - mountPath: /host/opt/cni/bin
  505. name: cni-bin-dir
  506. # This container installs the CNI binaries
  507. # and CNI network config file on each node.
  508. - name: install-cni
  509. image: calico/cni:v3.10.4
  510. command: ["/install-cni.sh"]
  511. env:
  512. # Name of the CNI config file to create.
  513. - name: CNI_CONF_NAME
  514. value: "10-calico.conflist"
  515. # The CNI network config to install on each node.
  516. - name: CNI_NETWORK_CONFIG
  517. valueFrom:
  518. configMapKeyRef:
  519. name: calico-config
  520. key: cni_network_config
  521. # Set the hostname based on the k8s node name.
  522. - name: KUBERNETES_NODE_NAME
  523. valueFrom:
  524. fieldRef:
  525. fieldPath: spec.nodeName
  526. # CNI MTU Config variable
  527. - name: CNI_MTU
  528. valueFrom:
  529. configMapKeyRef:
  530. name: calico-config
  531. key: veth_mtu
  532. # Prevents the container from sleeping forever.
  533. - name: SLEEP
  534. value: "false"
  535. volumeMounts:
  536. - mountPath: /host/opt/cni/bin
  537. name: cni-bin-dir
  538. - mountPath: /host/etc/cni/net.d
  539. name: cni-net-dir
  540. # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
  541. # to communicate with Felix over the Policy Sync API.
  542. - name: flexvol-driver
  543. image: calico/pod2daemon-flexvol:v3.10.4
  544. volumeMounts:
  545. - name: flexvol-driver-host
  546. mountPath: /host/driver
  547. containers:
  548. # Runs calico-node container on each Kubernetes node. This
  549. # container programs network policy and routes on each
  550. # host.
  551. - name: calico-node
  552. image: calico/node:v3.10.4
  553. env:
  554. # Use Kubernetes API as the backing datastore.
  555. - name: DATASTORE_TYPE
  556. value: "kubernetes"
  557. # Wait for the datastore.
  558. - name: WAIT_FOR_DATASTORE
  559. value: "true"
  560. # Set based on the k8s node name.
  561. - name: NODENAME
  562. valueFrom:
  563. fieldRef:
  564. fieldPath: spec.nodeName
  565. # Choose the backend to use.
  566. - name: CALICO_NETWORKING_BACKEND
  567. valueFrom:
  568. configMapKeyRef:
  569. name: calico-config
  570. key: calico_backend
  571. # Cluster type to identify the deployment type
  572. - name: CLUSTER_TYPE
  573. value: "k8s,bgp"
  574. # Auto-detect the BGP IP address.
  575. - name: IP
  576. value: "autodetect"
  577. # Enable IPIP
  578. - name: CALICO_IPV4POOL_IPIP
  579. value: "Always"
  580. # Set MTU for tunnel device used if ipip is enabled
  581. - name: FELIX_IPINIPMTU
  582. valueFrom:
  583. configMapKeyRef:
  584. name: calico-config
  585. key: veth_mtu
  586. # The default IPv4 pool to create on startup if none exists. Pod IPs will be
  587. # chosen from this range. Changing this value after installation will have
  588. # no effect. This should fall within `--cluster-cidr`.
  589. - name: CALICO_IPV4POOL_CIDR
  590. value: "192.168.0.0/16"
  591. # Disable file logging so `kubectl logs` works.
  592. - name: CALICO_DISABLE_FILE_LOGGING
  593. value: "true"
  594. # Set Felix endpoint to host default action to ACCEPT.
  595. - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
  596. value: "ACCEPT"
  597. # Disable IPv6 on Kubernetes.
  598. - name: FELIX_IPV6SUPPORT
  599. value: "false"
  600. # Set Felix logging to "info"
  601. - name: FELIX_LOGSEVERITYSCREEN
  602. value: "info"
  603. - name: FELIX_HEALTHENABLED
  604. value: "true"
  605. securityContext:
  606. privileged: true
  607. resources:
  608. requests:
  609. cpu: 250m
  610. livenessProbe:
  611. exec:
  612. command:
  613. - /bin/calico-node
  614. - -felix-live
  615. - -bird-live
  616. periodSeconds: 10
  617. initialDelaySeconds: 10
  618. failureThreshold: 6
  619. readinessProbe:
  620. exec:
  621. command:
  622. - /bin/calico-node
  623. - -felix-ready
  624. - -bird-ready
  625. periodSeconds: 10
  626. volumeMounts:
  627. - mountPath: /lib/modules
  628. name: lib-modules
  629. readOnly: true
  630. - mountPath: /run/xtables.lock
  631. name: xtables-lock
  632. readOnly: false
  633. - mountPath: /var/run/calico
  634. name: var-run-calico
  635. readOnly: false
  636. - mountPath: /var/lib/calico
  637. name: var-lib-calico
  638. readOnly: false
  639. - name: policysync
  640. mountPath: /var/run/nodeagent
  641. volumes:
  642. # Used by calico-node.
  643. - name: lib-modules
  644. hostPath:
  645. path: /lib/modules
  646. - name: var-run-calico
  647. hostPath:
  648. path: /var/run/calico
  649. - name: var-lib-calico
  650. hostPath:
  651. path: /var/lib/calico
  652. - name: xtables-lock
  653. hostPath:
  654. path: /run/xtables.lock
  655. type: FileOrCreate
  656. # Used to install CNI.
  657. - name: cni-bin-dir
  658. hostPath:
  659. path: /opt/cni/bin
  660. - name: cni-net-dir
  661. hostPath:
  662. path: /etc/cni/net.d
  663. # Mount in the directory for host-local IPAM allocations. This is
  664. # used when upgrading from host-local to calico-ipam, and can be removed
  665. # if not using the upgrade-ipam init container.
  666. - name: host-local-net-dir
  667. hostPath:
  668. path: /var/lib/cni/networks
  669. # Used to create per-pod Unix Domain Sockets
  670. - name: policysync
  671. hostPath:
  672. type: DirectoryOrCreate
  673. path: /var/run/nodeagent
  674. # Used to install Flex Volume Driver
  675. - name: flexvol-driver-host
  676. hostPath:
  677. type: DirectoryOrCreate
  678. path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
  679. ---
  680. apiVersion: v1
  681. kind: ServiceAccount
  682. metadata:
  683. name: calico-node
  684. namespace: kube-system
  685. ---
  686. # Source: calico/templates/calico-kube-controllers.yaml
  687. # See https://github.com/projectcalico/kube-controllers
  688. apiVersion: apps/v1
  689. kind: Deployment
  690. metadata:
  691. name: calico-kube-controllers
  692. namespace: kube-system
  693. labels:
  694. k8s-app: calico-kube-controllers
  695. spec:
  696. # The controllers can only have a single active instance.
  697. replicas: 1
  698. selector:
  699. matchLabels:
  700. k8s-app: calico-kube-controllers
  701. strategy:
  702. type: Recreate
  703. template:
  704. metadata:
  705. name: calico-kube-controllers
  706. namespace: kube-system
  707. labels:
  708. k8s-app: calico-kube-controllers
  709. annotations:
  710. scheduler.alpha.kubernetes.io/critical-pod: ''
  711. spec:
  712. nodeSelector:
  713. beta.kubernetes.io/os: linux
  714. tolerations:
  715. # Mark the pod as a critical add-on for rescheduling.
  716. - key: CriticalAddonsOnly
  717. operator: Exists
  718. - key: node-role.kubernetes.io/master
  719. effect: NoSchedule
  720. serviceAccountName: calico-kube-controllers
  721. priorityClassName: system-cluster-critical
  722. containers:
  723. - name: calico-kube-controllers
  724. image: calico/kube-controllers:v3.10.4
  725. env:
  726. # Choose which controllers to run.
  727. - name: ENABLED_CONTROLLERS
  728. value: node
  729. - name: DATASTORE_TYPE
  730. value: kubernetes
  731. readinessProbe:
  732. exec:
  733. command:
  734. - /usr/bin/check-status
  735. - -r
  736. ---
  737. apiVersion: v1
  738. kind: ServiceAccount
  739. metadata:
  740. name: calico-kube-controllers
  741. namespace: kube-system
  742. ---
  743. # Source: calico/templates/calico-etcd-secrets.yaml
  744. ---
  745. # Source: calico/templates/calico-typha.yaml
  746. ---
  747. # Source: calico/templates/configure-canal.yaml
