With containers on a single machine, persisting data simply means mapping a container directory to the host. In Kubernetes, however, a Pod may be scheduled onto any node, so the data has to live on remote storage. Cloud platforms provide their own cloud-disk storage, but a self-built Kubernetes cluster cannot use those, so you need to run your own storage backend. NFS is one of the most common choices; glusterfs, ceph and others work as well.
For a guide on setting up the NFS server itself, see https://zahui.fan/posts/4b677f68/. The provisioner's source and container image are at https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.
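For reference, a minimal sketch of what the NFS server's export might look like; the /nfs path and the 10.0.0.0/24 subnet are assumptions matching the addresses used later in this post:

# /etc/exports on the NFS server (assumed export path and client subnet)
/nfs 10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash)

# reload the export table after editing
sudo exportfs -ra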
 
Install the NFS client tools: the NFS client must be installed on every worker node.
Ubuntu
sudo apt install nfs-common -y
 
 
CentOS
sudo yum install nfs-utils -y
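As a quick sanity check on each worker, you can list the server's exports (10.0.0.3 is the NFS server address used in the manifests below); the output should include /nfs:

showmount -e 10.0.0.3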
 
 
RBAC authorization: rbac.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
 
kubectl apply -f rbac.yml
 
kubectl get sa
NAME                     SECRETS   AGE
default                  1         2d
nfs-client-provisioner   1         5s
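Besides the ServiceAccount, you can confirm that the cluster-scoped objects from rbac.yml were created:

kubectl get clusterrole nfs-client-provisioner-runner
kubectl get clusterrolebinding run-nfs-client-provisioner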
 
Deploy the provisioner: deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/iuxt/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: iuxt/nfs
            - name: NFS_SERVER
              value: 10.0.0.3
            - name: NFS_PATH
              value: /nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.3
            path: /nfs
 
kubectl apply -f deployment.yml
 
kubectl get deploy
NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner                 1/1     1            1           10s
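If the Deployment does not become Ready, the provisioner's own logs usually point at the cause (wrong NFS_SERVER or NFS_PATH, NFS client missing on the node, and so on):

kubectl logs deploy/nfs-client-provisioner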
 
Create the StorageClass: storageclass.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: iuxt/nfs
mountOptions:
  - vers=4.1
parameters:
  archiveOnDelete: "false"
 
kubectl apply -f storageclass.yml
 
kubectl get sc
NAME                  PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   iuxt/nfs      Delete          Immediate           false                  15s
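Because the StorageClass is annotated as the cluster default, a PVC does not even have to name it explicitly. A minimal sketch (the claim name test-claim is hypothetical, not part of the manifests in this post):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim            # hypothetical name for illustration
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi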
 
Test NFS: nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-nginx
  template:
    metadata:
      labels:
        app: test-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
          - mountPath: /usr/share/nginx/html
            name: data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nfs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: test-nginx
  labels:
    app: test-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: test-nginx
 
kubectl apply -f nginx.yml
 
On the NFS server you should now see a matching directory. Create an index.html file inside it, then request the corresponding containers; they will serve the file as expected.
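Roughly, the test looks like this (the subdirectory name below is hypothetical; the provisioner normally creates one named after the namespace, PVC and PV):

# On the NFS server
ls /nfs/
echo "hello from nfs" > /nfs/default-nfs-pvc-xxxxxxxx/index.html   # hypothetical directory name

# From the cluster, request the Service with a throwaway Pod
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -s http://test-nginx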
Common issues: PVC stuck in Pending

kubectl describe pvc
  Normal  ExternalProvisioning  8s (x2 over 13s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "qgg-nfs-storage" or manually created by system administrator
 
If you are using the image quay.io/external_storage/nfs-client-provisioner:latest: this is an old image and it fails on Kubernetes 1.20 and later. Starting with Kubernetes v1.20 the metadata.selfLink field is removed by default, but some applications, such as nfs-client-provisioner, still depend on it. To keep running them you need to re-enable the field by adding RemoveSelfLink=false to the apiserver's --feature-gates startup parameter. Specifically, if you installed Kubernetes with kubeadm, edit /etc/kubernetes/manifests/kube-apiserver.yaml and add the line - --feature-gates=RemoveSelfLink=false to its startup arguments.
 
Modify the configuration on all master nodes:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.0.0.11:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.0.0.11
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --feature-gates=RemoveSelfLink=false
    image: k8s.gcr.io/kube-apiserver:v1.21.10
    ...
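The kubelet restarts the static Pod automatically once the manifest is saved; one way to confirm the flag is active on each master:

kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep feature-gates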
 
Alternatively, switch the image to k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 (an accelerated mirror for users in China is registry.cn-hangzhou.aliyuncs.com/iuxt/nfs-subdir-external-provisioner:v4.0.2). Note that on Kubernetes v1.24 and later the RemoveSelfLink feature gate can no longer be disabled, so switching to the newer image is the only option there.
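If the provisioner is already running with the old image, one way to switch it in place (assuming the Deployment and container names used in this post) is:

kubectl set image deployment/nfs-client-provisioner \
  nfs-client-provisioner=registry.cn-hangzhou.aliyuncs.com/iuxt/nfs-subdir-external-provisioner:v4.0.2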