
FAQ


This topic summarizes frequently asked questions (FAQ) about KubeSphere Virtualization (KSV).

What do I do if an OpenSSL error is reported when I install KSV?

Description

When you install KSV, an error is reported in OpenSSL, as shown in the following code block.

error: openssl: /usr/lib/x86_64-linux-gnu/libssl.so.1.1: version `OPENSSL_1_1_1' not found (required by openssl)
openssl: /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1: version `OPENSSL_1_1_1' not found (required by openssl)
: exit status 1

This error is reported because the system's OpenSSL packages are outdated.

Solution

You can run the apt update and apt upgrade commands to update the system packages. Then, reinstall KSV.
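
For example, on a Debian or Ubuntu system, the update can be run as follows (a minimal sketch; package names and prompts depend on your environment):

    # Refresh the package index and upgrade installed packages, including the OpenSSL libraries
    apt update && apt upgrade -y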

What do I do if a disk error is reported when I install KSV?

Description

KSV fails to install because a disk error is reported.

Solution

  1. Run the following script to clear the disk used for Ceph storage:

    DISK="/dev/sdX" \# Zap the disk to a fresh, usable state (zap-all is important, b/c MBR has to be clean) sgdisk --zap-all $DISK \# Wipe a large portion of the beginning of the disk to remove more LVM metadata that may be present dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync \# SSDs may be better cleaned with blkdiscard instead of dd blkdiscard $DISK \# Inform the OS of partition table changes partprobe $DISK
  2. Reinstall KSV.

What do I do if CoreDNS fails when I install KSV?

Description

When you install KSV, CoreDNS fails and its status is not Running.

Solution

  1. Run kubectl get pod -n kube-system to query the status of CoreDNS. If the status of CoreDNS is not Running, modify the configuration of CoreDNS.

  2. Run kubectl edit cm -n kube-system coredns to modify the configuration of CoreDNS.

  3. Change forward . /etc/resolv.conf to forward . 8.8.8.8 (see the Corefile sketch after these steps). Then, save and exit.

  4. Run kubectl rollout restart deploy -n kube-system coredns to restart CoreDNS.
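
A simplified sketch of the Corefile after the change; your configuration may contain additional plugins, and only the forward line needs to be edited:

    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        # Forward non-cluster queries to a public DNS server instead of /etc/resolv.conf
        forward . 8.8.8.8
        cache 30
        loop
        reload
    }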

What do I do if the installation of KSV remains stuck at MinIO?

Description

KSV fails to install and the installation remains stuck at MinIO. This issue can occur if CoreDNS fails or the storage is abnormal. Check the status of CoreDNS and the storage to troubleshoot. Note that the storage must be an unformatted or unpartitioned device.

Solution

  1. Run ksv logs to query logs. If the installation fails due to an error in MinIO, the following result is returned:

    1. check the storage configuration and storage server
    2. make sure the DNS address in /etc/resolv.conf is available
    3. execute 'kubectl logs -n kubesphere-system -l job-name=minio-make-bucket-job' to watch logs
    4. execute 'helm -n kubesphere-system uninstall ks-minio && kubectl -n kubesphere-system delete job minio-make-bucket-job'
    5. Restart the installer pod in kubesphere-system namespace
  2. Query the status of CoreDNS. If CoreDNS fails, troubleshoot the error as described in What do I do if CoreDNS fails when I install KSV?

  3. Query the status of the storage. If the storage is abnormal, perform the following steps to troubleshoot:

  4. Run kubectl get pod -n rook-ceph to query the status of the Ceph component.

  5. Check whether a pod named in the format of rook-ceph-osd-x-xxxxx exists (see the check sketch after these steps). If such a pod exists, check whether the status of the pod is Running.

  6. If no such pod exists, check whether an unformatted server disk is reserved for Ceph storage.

  7. If such a disk is reserved, run the script provided in Precautions for installation to clear the disk. Then, restart the server.
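
The check in steps 4 and 5 can be performed as follows, using only the command and pod name format described above:

    # List the Ceph pods and look for an OSD pod (name format rook-ceph-osd-x-xxxxx)
    kubectl get pod -n rook-ceph | grep rook-ceph-osd
    # The STATUS column of any OSD pod found should show Running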

Why is a message returned indicating that my server does not support virtualization when I install KSV?

Description

When you install KSV, the system returns the following information, indicating that virtualization is not supported by the server:

Some server nodes do not support virtualization.

This error message can occur on some ARM64 servers developed by Chinese vendors even though these servers do support virtualization. The error is returned because emulation is enabled during KSV installation.

Solution

  1. Run kubectl edit cc -n kubesphere-system ks-installer to edit the cluster configuration. Change the value of useEmulation in spec.virtualization to false (see the sketch after these steps).

  2. Save and exit. Query logs to verify that the setting takes effect.
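
A minimal sketch of the relevant part of the ks-installer configuration after the change (other fields omitted):

    spec:
      virtualization:
        # Disable emulation so that hardware virtualization is detected directly
        useEmulation: false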

How do I reset a user password?

  1. Log in to the server node on which KSV is installed.

  2. Run the following command to reset a user password:

    kubectl patch users <username> -p '{"spec":{"password":"<new password>"}}' --type='merge' && kubectl annotate users <username> iam.kubesphere.io/password-encrypted-

    NOTE

    • Replace <username> with the actual username and <new password> with the new password.

    • We recommend a password that contains both digits and letters and is 6 to 64 characters long.
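
    For example, with placeholder values (substitute your own username and password):

    # user1 and Passw0rd123 are placeholders, not defaults
    kubectl patch users user1 -p '{"spec":{"password":"Passw0rd123"}}' --type='merge' && kubectl annotate users user1 iam.kubesphere.io/password-encrypted-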

Why does my VM get stuck in the Pending state?

Pending indicates that the system is waiting for resources required for VM creation. You can check whether the disks corresponding to the VMs were created. Disk creation failures are often caused by an excessively large disk size. In this case, you can delete the VMs and reduce the disk size when you re-create them.

Why does my VM get stuck in the Unschedulable state?

Unschedulable indicates that resource scheduling failed. The possible causes and solutions are as follows:

• Possible cause: The CPU or memory size is too large, leading to VM scheduling failure.
  Solution: Delete the VMs and reduce the CPU or memory size when you re-create them.

• Possible cause: Some nodes do not support virtualization.
  Solution: Log in to any server node as user root and run the grep -E '(svm|vmx)' /proc/cpuinfo command (see the check below). Virtualization is not supported if no command output is returned.
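
A quick variant of this check on an x86 server node (the svm and vmx CPU flags indicate hardware virtualization support):

    # Count the CPU flag lines that advertise hardware virtualization; 0 means unsupported
    grep -Ec '(svm|vmx)' /proc/cpuinfo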

Which formats of image files can I upload to KSV?

KSV allows you to upload image files in IMG, QCOW2, XZ, TAR, GZ, ISO, and RAW formats. Uploading image files using HTTP is also supported if server nodes can access the Internet.

If an image file is excessively large, can I use other methods to upload the file?

KSV supports uploading an image from local storage or a URL. If the image file is excessively large, you can use FTP to upload it.

Solution

  1. Log in to a server node as user root.

  2. Run the following command to obtain the username and password used to log in to the FTP server:

    ksv ftp-server
  3. Run the following command to connect to any server node in the cluster:

    ftp <node IP address>

    NOTE

    Replace <node IP address> with the IP address of your server node.

  4. Log in to the FTP server using the username and password obtained in step 2.

  5. Run the following command to upload your local image file to the FTP server:

    put <local image path>

    NOTE

    Replace <local image path> with your own local image file path.

    For example:

    put /pitrix/img/ubuntu_16.04_LTS_server-cloudimg-amd64.img
  6. On the Upload Image page of the KSV web console, upload the image file from the URL http://minio.kubesphere-system.svc:9000/qingcloud-images/ubuntu_16.04_LTS_server-cloudimg-amd64.img.

    NOTE

    Replace ubuntu_16.04_LTS_server-cloudimg-amd64.img with the name of the image file to be uploaded.

Why is a mounted disk still unavailable?

When you dynamically mount a separately created disk for the first time, you must restart the VM before the disk takes effect. Subsequent hot swaps do not require a restart.

Can KSV support external storage devices?

Yes, you can connect KSV to external storage. The external storage server must support the Container Storage Interface (CSI). If CSI is supported, we can perform the adaptation and testing based on the external CSI.

How do I set up a unified endpoint for Internet access?

Scenario

In a multi-node cluster, only one server node can access the Internet, and VMs on other server nodes must access the Internet through this server node.

How it works

The server node that can access the Internet must have two NICs, for example, NIC A and NIC B, where NIC A connects only to the intranet. By running the following commands, you can translate private intranet IP addresses arriving through NIC A to the IP address of NIC B so that the traffic can access the Internet.

Solution

  1. Log in to the server node that can access the Internet as user root.

  2. Run the following command:

    iptables -t nat -A POSTROUTING -s <intranet cidr block> -o <nic b name> -j MASQUERADE

    Replace <intranet CIDR block> and <NIC B name> with your own CIDR block and NIC name, for example:

    iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -o eno2 -j MASQUERADE
  3. Log in to each of the other server nodes that need to access the Internet as user root, and run the following command:

    ip r add default via <private ip address>

    Replace <private IP address> with the private IP address of the node that can access the Internet, for example:

    ip r add default via 192.168.0.1
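
To verify the setup, checks like the following can be used (a sketch based on standard iptables and ip commands; output formats vary by distribution):

    # On the Internet-facing node, confirm that the MASQUERADE rule is present
    iptables -t nat -L POSTROUTING -n -v

    # On the other nodes, confirm that the default route points to the Internet-facing node
    ip r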
