MountVolume.SetUp failed for PVC volume, mount failed: exit status 32

We are trying to set up the vSphere cloud provider for dynamic provisioning on a Kubernetes cluster, following the VMware support link below. There were no problems with Kubernetes v1.8; the issue appeared after deploying a new Kubernetes 1.12 cluster. https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/existing.html

Kubernetes version: v1.12.7

Environment:
- Cluster: master1, node1, node2
- Virtualization: VMware ESXi, 6.5.0, 7388607; each virtual machine node is placed on a different ESXi host, disk.EnableUUID is true
- Network: Cisco APIC, Version 3.2(4e)

Problem definition: The StorageClass, PV and PVC are created and bound successfully, and the VMDK disk is added to the virtual machine, but the node cannot mount this disk into the pod's path. There are two different log messages and I am not sure which one is relevant: the first one says _"cloud provider not initialized"_ (but if that were true, how was the VMDK created and attached to the virtual machine?), and the second one says _"mount failed because ...vmdk not found"_, yet I can browse the mentioned VMDK in the datastore from vCenter.

[root@kubemaster ~]# kubectl get pods pvpod
NAME    READY   STATUS              RESTARTS   AGE
pvpod   0/1     ContainerCreating   0          17h

kubectl describe pod pvpod


[root@kubemaster vcp]#
[root@kubemaster vcp]# kubectl describe pod pvpod
Name:               pvpod
Namespace:          default
Priority:           0
PriorityClassName:  
Node:               kubenode2/10.1.1.12
Start Time:         Tue, 14 May 2019 12:52:18 +0300
Labels:             
Annotations:        opflex.cisco.com/computed-endpoint-group: {"policy-space":"Kubernetes","name":"kubernetes|kube-default"}
                    opflex.cisco.com/computed-security-group: []
Status:             Pending
IP:
Containers:
  test-container:
    Container ID:
    Image:          gcr.io/google_containers/test-webserver
    Image ID:
    Port:           
    Host Port:      
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    
    Mounts:
      /test-vmdk from test-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-pvf97 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  test-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvcsc001
    ReadOnly:   false
  default-token-pvf97:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-pvf97
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age    From                           Message
  ----     ------       ----   ----                           -------
  Normal   Scheduled    8m22s  default-scheduler              Successfully assigned default/pvpod to kubenode2
  Warning  FailedMount  8m21s  kubelet, kubenode2  MountVolume.SetUp failed for volume "pvc-eb27986d-762d-11e9-aff9-005056b068e3" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/f63bb16e-762d-11e9-aff9-005056b068e3/volumes/kubernetes.io~vsphere-volume/pvc-eb27986d-762d-11e9-aff9-005056b068e3 --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[KUBEDATASTORE] kubevols/kubernetes-dynamic-pvc-eb27986d-762d-11e9-aff9-005056b068e3.vmdk /var/lib/kubelet/pods/f63bb16e-762d-11e9-aff9-005056b068e3/volumes/kubernetes.io~vsphere-volume/pvc-eb27986d-762d-11e9-aff9-005056b068e3
Output: Running scope as unit run-87284.scope.
mount: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[KUBEDATASTORE] kubevols/kubernetes-dynamic-pvc-eb27986d-762d-11e9-aff9-005056b068e3.vmdk does not exist
  Warning  FailedMount  8m21s  kubelet, kubenode2  MountVolume.SetUp failed for volume "pvc-eb27986d-762d-11e9-aff9-005056b068e3" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/f63bb16e-762d-11e9-aff9-005056b068e3/volumes/kubernetes.io~vsphere-volume/pvc-eb27986d-762d-11e9-aff9-005056b068e3 --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[KUBEDATASTORE] kubevols/kubernetes-dynamic-pvc-eb27986d-762d-11e9-aff9-005056b068e3.vmdk /var/lib/kubelet/pods/f63bb16e-762d-11e9-aff9-005056b068e3/volumes/kubernetes.io~vsphere-volume/pvc-eb27986d-762d-11e9-aff9-005056b068e3
Output: Running scope as unit run-87288.scope.
mount: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[KUBEDATASTORE] kubevols/kubernetes-dynamic-pvc-eb27986d-762d-11e9-aff9-005056b068e3.vmdk does not exist
  Warning  FailedMount  8m20s  kubelet, kubenode2  MountVolume.SetUp failed for volume "pvc-eb27986d-762d-11e9-aff9-005056b068e3" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/f63bb16e-762d-11e9-aff9-005056b068e3/volumes/kubernetes.io~vsphere-volume/pvc-eb27986d-762d-11e9-aff9-005056b068e3 --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[KUBEDATASTORE] kubevols/kubernetes-dynamic-pvc-eb27986d-762d-11e9-aff9-005056b068e3.vmdk /var/lib/kubelet/pods/f63bb16e-762d-11e9-aff9-005056b068e3/volumes/kubernetes.io~vsphere-volume/pvc-eb27986d-762d-11e9-aff9-005056b068e3
Output: Running scope as unit run-87290.scope.
mount: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[KUBEDATASTORE] kubevols/kubernetes-dynamic-pvc-eb27986d-762d-11e9-aff9-005056b068e3.vmdk does not exist
  Warning  FailedMount  8m18s  kubelet, kubenode2  MountVolume.SetUp failed for volume "pvc-eb27986d-762d-11e9-aff9-005056b068e3" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/f63bb16e-762d-11e9-aff9-005056b068e3/volumes/kubernetes.io~vsphere-volume/pvc-eb27986d-762d-11e9-aff9-005056b068e3 --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[KUBEDATASTORE] kubevols/kubernetes-dynamic-pvc-eb27986d-762d-11e9-aff9-005056b068e3.vmdk /var/lib/kubelet/pods/f63bb16e-762d-11e9-aff9-005056b068e3/volumes/kubernetes.io~vsphere-volume/pvc-eb27986d-762d-11e9-aff9-005056b068e3
Output: Running scope as unit run-87329.scope.
mount: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[KUBEDATASTORE] kubevols/kubernetes-dynamic-pvc-eb27986d-762d-11e9-aff9-005056b068e3.vmdk does not exist
  Normal   SuccessfulAttachVolume  8m17s  attachdetach-controller        AttachVolume.Attach succeeded for volume "pvc-eb27986d-762d-11e9-aff9-005056b068e3"
  Warning  FailedMount             8m14s  kubelet, kubenode2  MountVolume.SetUp failed for volume "pvc-eb27986d-762d-11e9-aff9-005056b068e3" : mount failed: exit status 32
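Since AttachVolume.Attach succeeds but the bind mount of the VMDK path fails, a quick check directly on kubenode2 is whether the disk is visible to the guest at all and whether the kubelet's vsphere-volume mounts directory contains the expected path (just a diagnostic sketch; the paths are taken from the error above):

# On kubenode2: is the newly attached disk visible to the guest OS?
lsblk
# Does the kubelet's vsphere-volume mount point for this datastore exist?
ls -l "/var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[KUBEDATASTORE] kubevols/"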
  

/var/log/messages on the node

May  8 16:05:32 kubenode1 kubelet: E0508 16:05:32.730158   53933 vsphere_volume_util.go:187] Cloud provider not initialized properly
May  8 16:05:32 kubenode1 systemd: Started Kubernetes transient mount for /var/lib/kubelet/pods/b52b1433-718a-11e9-a3ef-005056b068e3/volumes/kubernetes.io~vsphere-volume/pvc-e3705fec-7189-11e9-a3ef-005056b068e3.
May  8 16:05:32 kubenode1 kubelet: E0508 16:05:32.744639   53933 mount_linux.go:152] Mount failed: exit status 32
May  8 16:05:32 kubenode1 kubelet: Mounting command: systemd-run
May  8 16:05:32 kubenode1 kubelet: Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/b52b1433-718a-11e9-a3ef-005056b068e3/volumes/kubernetes.io~vsphere-volume/pvc-e3705fec-7189-11e9-a3ef-005056b068e3 --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[KUBEDATASTORE] kubevols/kubernetes-dynamic-pvc-e3705fec-7189-11e9-a3ef-005056b068e3.vmdk /var/lib/kubelet/pods/b52b1433-718a-11e9-a3ef-005056b068e3/volumes/kubernetes.io~vsphere-volume/pvc-e3705fec-7189-11e9-a3ef-005056b068e3
May  8 16:05:32 kubenode1 kubelet: Output: Running scope as unit run-57611.scope.
May  8 16:05:32 kubenode1 kubelet: mount: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[KUBEDATASTORE] kubevols/kubernetes-dynamic-pvc-e3705fec-7189-11e9-a3ef-005056b068e3.vmdk does not exist
May  8 16:05:32 kubenode1 kubelet: E0508 16:05:32.744865   53933 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/vsphere-volume/b52b1433-718a-11e9-a3ef-005056b068e3-pvc-e3705fec-7189-11e9-a3ef-005056b068e3\" (\"b52b1433-718a-11e9-a3ef-005056b068e3\")" failed. No retries permitted until 2019-05-08 16:07:34.74479901 +0300 +03 m=+619.028655111 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"pvc-e3705fec-7189-11e9-a3ef-005056b068e3\" (UniqueName: \"kubernetes.io/vsphere-volume/b52b1433-718a-11e9-a3ef-005056b068e3-pvc-e3705fec-7189-11e9-a3ef-005056b068e3\") pod \"pvpod\" (UID: \"b52b1433-718a-11e9-a3ef-005056b068e3\") : mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/b52b1433-718a-11e9-a3ef-005056b068e3/volumes/kubernetes.io~vsphere-volume/pvc-e3705fec-7189-11e9-a3ef-005056b068e3 --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[KUBEDATASTORE] kubevols/kubernetes-dynamic-pvc-e3705fec-7189-11e9-a3ef-005056b068e3.vmdk /var/lib/kubelet/pods/b52b1433-718a-11e9-a3ef-005056b068e3/volumes/kubernetes.io~vsphere-volume/pvc-e3705fec-7189-11e9-a3ef-005056b068e3\nOutput: Running scope as unit run-57611.scope.\nmount: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[KUBEDATASTORE] kubevols/kubernetes-dynamic-pvc-e3705fec-7189-11e9-a3ef-005056b068e3.vmdk does not exist\n\n"
May  8 16:05:32 kubenode1 kubelet: E0508 16:05:32.830478   53933 vsphere_volume_util.go:187] Cloud provider not initialized properly
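The "Cloud provider not initialized properly" message suggests the kubelet on the node may not be running with the vSphere cloud-provider flags at all; one way to verify that (a sketch of my own, not from the original thread) is to inspect the running kubelet's arguments and its systemd drop-ins:

# Which cloud-provider flags was the running kubelet actually started with?
ps -ef | grep '[k]ubelet' | tr ' ' '\n' | grep -E -- '--cloud-provider|--cloud-config'
# Show the effective unit file plus all drop-ins (10-kubeadm.conf etc.)
systemctl cat kubelet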

PV, PVC and SC YAML files

Sc-fast.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  datastore: KUBEDATASTORE
  diskformat: thin
  fstype: ext3

pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvcsc001
  annotations:
    volume.beta.kubernetes.io/storage-class: fast
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pvpod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - name: test-volume
      mountPath: /test-vmdk
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvcsc001

Output of the applied YAML files

[root@kubemaster vcp]# kubectl create -f vsphere-volume-sc-fast.yaml
storageclass.storage.k8s.io/fast created
[root@kubemaster vcp]#
[root@kubemaster vcp]# kubectl describe storageclass fast
Name:                  fast
IsDefaultClass:        No
Annotations:           
Provisioner:           kubernetes.io/vsphere-volume
Parameters:            datastore=KUBEDATASTORE,diskformat=thin,fstype=ext3
AllowVolumeExpansion:  
MountOptions:          
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                

[root@kubemaster vcp]# kubectl create -f vsphere-volume-pvcsc.yaml
persistentvolumeclaim/pvcsc001 created
[root@kubemaster vcp]#
[root@kubemaster vcp]# kubectl describe pvc pvcsc001
Name:          pvcsc001
Namespace:     default
StorageClass:  fast
Status:        Bound
Volume:        pvc-4aa82312-72ec-11e9-8b74-005056b068e3
Labels:        
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-class: fast
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      2Gi
Access Modes:  RWO
Events:
  Type    Reason                 Age  From                         Message
  Normal  ProvisioningSucceeded  13s  persistentvolume-controller  Successfully provisioned volume pvc-4aa82312-72ec-11e9-8b74-005056b068e3 using kubernetes.io/vsphere-volume
Mounted By:    <none>

[root@kubemaster vcp]#
[root@kubemaster vcp]# kubectl describe pv pvc-4aa82312-72ec-11e9-8b74-005056b068e3
Name:            pvc-4aa82312-72ec-11e9-8b74-005056b068e3
Labels:          
Annotations:     kubernetes.io/createdby: vsphere-volume-dynamic-provisioner
                 pv.kubernetes.io/bound-by-controller: yes
                 pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    fast
Status:          Bound
Claim:           default/pvcsc001
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        2Gi
Node Affinity:   
Message:         
Source:
    Type:               vSphereVolume (a Persistent Disk resource in vSphere)
    VolumePath:         [KUBEDATASTORE] kubevols/kubernetes-dynamic-pvc-4aa82312-72ec-11e9-8b74-005056b068e3.vmdk
    FSType:             ext3
    StoragePolicyName:  
Events:          
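Since the PV's VolumePath points at [KUBEDATASTORE] kubevols/..., the same path can also be double-checked from the CLI with govc instead of the vCenter datastore browser (a sketch; it assumes govc is installed and GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD are already exported):

# List the kubevols folder on the datastore referenced by the PV's VolumePath
govc datastore.ls -ds KUBEDATASTORE kubevols/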

[root@kubemaster vcp]#
[root@kubemaster vcp]# kubectl create -f vsphere-volume-pvcscpod.yaml
pod/pvpod created
[root@kubemaster vcp]#
[root@kubemaster vcp]#  kubectl get pod pvpod
NAME    READY   STATUS              RESTARTS   AGE
pvpod   0/1     ContainerCreating   0          49s

First progress

First, when I applied "vsphere-volume-pvcsc.yaml" the volume was not provisioned successfully: the output of "kubectl describe pvc" said the VM was not found. I then applied the method from https://github.com/kubernetes/kubernetes/issues/65933 to match the UUIDs (even though the providerID and the product serial number were already the same), after which the volume was created successfully and attached to the virtual machine. But the pod was still in the ContainerCreating state.

cat /sys/class/dmi/id/product_serial | sed -e 's/^VMware-//' -e 's/-/ /' | awk '{print toupper($1$2$3$4 "-" $5$6 "-" $7$8 "-" $9$10 "-" $11$12$13$14$15$16)}'
kubectl patch node kubemaster -p '{"spec":{"providerID":"VSphere://4230456E-D8B2-AED1-E270-740256CBD273"}}'
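To confirm that the patched value actually took effect, the providerID recorded on the node objects can be listed for comparison with the UUID above (a verification sketch of my own, not from the linked issue):

# providerID currently recorded on each node object
kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID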

Second progress

We reached out to @shahbour in #77663 and confirmed that this is not a cloud provider issue, even though the "Cloud provider not initialized properly" message is printed at the container creation stage.

1 Answer

That comment was about issue https://github.com/kubernetes/kubernetes/issues/77663; I focused again on the cloud provider configuration, following @divyenpatel's comment.

Reconfiguring the kubelet and 10-kubeadm.conf files as shown below, followed by systemctl daemon-reload && systemctl restart kubelet, resolved the issue.

/etc/sysconfig/kubelet — I added the --node-ip parameter because uncommenting KUBELET_EXTRA_ARGS in 10-kubeadm.conf was causing the node status to go NotReady.

**<on the master node>**
KUBELET_EXTRA_ARGS=--cloud-provider=vsphere --cloud-config=/etc/kubernetes/pki/vsphere.conf --node-ip 10.1.1.10

**<on the worker nodes>**
KUBELET_EXTRA_ARGS=--cloud-provider=vsphere --node-ip 10.1.1.11

/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf — I also added the cloud-provider parameters to KUBELET_KUBECONFIG_ARGS. Note that the vsphere.conf path is not needed on the worker nodes; it is for the master only.

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--cloud-provider=vsphere --cloud-config=/etc/kubernetes/pki/vsphere.conf --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
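For completeness, the --cloud-config file referenced above (/etc/kubernetes/pki/vsphere.conf) follows the in-tree vSphere Cloud Provider format described in the VMware documentation linked at the top. A minimal sketch with placeholder values only (none of these values are from my real environment, adjust to your vCenter):

# /etc/kubernetes/pki/vsphere.conf (placeholder values)
[Global]
user = "k8s-svc@vsphere.local"
password = "changeme"
port = "443"
insecure-flag = "1"

[VirtualCenter "vcenter.example.local"]
datacenters = "Datacenter1"

[Workspace]
server = "vcenter.example.local"
datacenter = "Datacenter1"
default-datastore = "KUBEDATASTORE"
resourcepool-path = "Cluster1/Resources"
folder = "kubernetes"

[Disk]
scsicontrollertype = pvscsi

[Network]
public-network = "VM Network"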

I don't know why the log message below started appearing on the nodes even though the problem is resolved; I will open another issue about it.

 kubelet: W0520 10:41:04.360268   37697 vsphere.go:577] Failed to patch IP as MAC address "02:42:af:cc:*:*" does not belong to a VMware platform
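That MAC prefix (02:42:...) is the locally administered range Docker uses for its docker0/veth interfaces, so the warning probably refers to a container interface rather than the VM's vmxnet adapter. Listing the node's interfaces with their MACs should confirm which one it is (just a guess plus a small diagnostic sketch):

# Print every interface on the node together with its MAC address
for ifc in /sys/class/net/*; do
  printf '%-16s %s\n' "$(basename "$ifc")" "$(cat "$ifc/address")"
done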

Thank you very much.
