
Install KSV in multi-node mode


This topic describes how to install KubeSphere Virtualization (KSV) in multi-node mode.

Precautions

Before you install KSV, carefully read the following precautions and make sure the conditions are met:

  • Before you install KSV, we recommend that you run the following script to clear the disk used for Ceph storage. Replace the device name in the script with the actual one. After the disk is cleared, restart the server.

    DISK="/dev/sdX"
    # Zap the disk to a fresh, usable state (zap-all is important, b/c MBR has to be clean)
    sgdisk --zap-all $DISK
    # Wipe a large portion of the beginning of the disk to remove more LVM metadata that may be present
    dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync
    # SSDs may be better cleaned with blkdiscard instead of dd
    blkdiscard $DISK
    # Inform the OS of partition table changes
    partprobe $DISK
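
    After the script finishes, you can optionally confirm that no file system signatures remain on the disk. A minimal check, assuming the same DISK variable as above:

    # Should show no FSTYPE for the disk
    lsblk -f "$DISK"
    # wipefs without options only reports remaining signatures; it does not erase anything
    wipefs "$DISK"
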
  • If conditions allow, or if KSV is to be installed in a production environment, we recommend that you use SSDs to store etcd data. By default, the data directory of etcd is /var/lib/etcd.

  • If the size of the system disk is small or KSV is to be installed in a production environment, we recommend that you mount a disk of 100 GB or larger to the directory /var/lib/rancher, as shown in the sketch below.
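
    A minimal sketch for mounting a dedicated disk to /var/lib/rancher, assuming /dev/sdb is an empty disk reserved for this purpose (the same approach applies to /var/lib/etcd):

    # Create a file system on the dedicated disk (this destroys any existing data)
    mkfs.ext4 /dev/sdb
    # Mount the disk to the target directory
    mkdir -p /var/lib/rancher
    mount /dev/sdb /var/lib/rancher
    # Persist the mount across reboots
    echo '/dev/sdb /var/lib/rancher ext4 defaults 0 0' >> /etc/fstab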

  • Make sure you have three server nodes available.

  • Each server node on which KSV is installed must run Linux with a kernel version of 4.0 or later. We recommend that you use one of the following OSs: Ubuntu 18.04, Ubuntu 20.04, CentOS 7.9, Unity Operating System (UOS), Kylin V10, or EulerOS. Unknown issues may occur if you use an OS other than the preceding ones. More OSs will be supported in later versions.

    Query the OS version:

    cat /etc/issue
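
    Query the kernel version:

    uname -r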
  • Make sure each server node meets the following conditions:

    Hardware       Minimum    Recommended
    CPU            4 cores    8 cores
    Memory         8 GB       16 GB
    System disk    100 GB     100 GB

    Query the number of CPU cores of your server:

    cat /proc/cpuinfo | grep "processor" | sort | uniq | wc -l

    Query the memory size:

    cat /proc/meminfo | grep MemTotal

    Query the available disk size:

    df -hl
  • In multi-node mode, make sure at least three server nodes run as worker nodes. A control node can run as a worker node at the same time. Each worker node must have at least one disk that is unformatted and unpartitioned, or one unformatted partition. The minimum size of the disk or partition is 100 GB and the recommended size is 200 GB.

    Query the disk partition of a worker node:

    lsblk -f

    For example, the following command output indicates that vdb meets the requirements:

    NAME                    FSTYPE      LABEL UUID                                   MOUNTPOINT
    vda
    └─vda1                  LVM2_member       eSO50t-GkUV-YKTH-WsGq-hNJY-eKNf-3i07IB
      ├─ubuntu--vg-root     ext4              c2366f76-6e21-4f10-a8f3-6776212e2fe4   /
      └─ubuntu--vg-swap_1   swap              9492a3dc-ad75-47cd-9596-678e8cf17ff9   [SWAP]
    vdb
  • All server nodes must support virtualization. If a server node does not support virtualization, an error is reported and KSV cannot be installed.

    Query whether a server node supports virtualization. If the command output is empty, the server node does not support virtualization.

    • x86 architecture
    grep -E '(svm|vmx)' /proc/cpuinfo
    • ARM64 architecture
    ls /dev/kvm
  • The time on all server nodes must be synchronized.
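
    On systemd-based OSs, you can query the time synchronization status; the output should report the clock as synchronized (for example, System clock synchronized: yes). If it is not, enable an NTP service such as chrony.

    timedatectl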

Procedure

  1. Log in to a server node as user root.

  2. Query the architecture of the server node:

    uname -m
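
    In the command output, x86_64 indicates the x86 architecture, and aarch64 indicates the ARM64 architecture.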
  3. Download the installation package based on the architecture of the server node:

    • x86 architecture

    • ARM64 architecture

  4. Decompress the installation package:

    tar -zxvf kubesphere-virtualization-<package name>.tar.gz

    NOTE

    In the preceding command, replace <package name> with the name of the installation package you downloaded.

    • x86 architecture: Replace <package name> with x86_64-v1.6.1.

    • ARM64 architecture: Replace <package name> with arm64-v1.6.1.
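
    For example, for the x86 architecture:

    tar -zxvf kubesphere-virtualization-x86_64-v1.6.1.tar.gz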

  5. Go to the directory generated after the installation package is decompressed:

    cd kubesphere-virtualization-<file path>

    NOTE

    In the preceding command, replace <file path> with the name of the generated directory.

    • x86 architecture: Replace <file path> with x86_64.

    • ARM64 architecture: Replace <file path> with arm64.
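
    For example, for the x86 architecture:

    cd kubesphere-virtualization-x86_64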

  6. Run the following command to edit config-sample.yaml:

    vi config-sample.yaml
  7. Modify the hosts and roleGroups specifications in config-sample.yaml:

    Sample code:

    hosts:
    - {name: node1, address: 172.31.50.23, internalAddress: 172.31.50.23, user: yop, password: "zhu1241jie"}
    - {name: node2, address: 172.31.50.24, internalAddress: 172.31.50.24, user: yop, password: "zhu1241jie"}
    - {name: node3, address: 172.31.50.25, internalAddress: 172.31.50.25, user: yop, password: "zhu1241jie"}
    roleGroups:
      etcd:
      - node1
      - node2
      - node3
      control-plane:
      - node1
      - node2
      - node3
      worker:
      - node1
      - node2
      - node3
      registry:
      - node1

    The following table describes the parameters.

    Parameter         Description
    hosts             A list of server nodes.
    name              The name of the node.
    address           The node IP address for SSH login.
    internalAddress   The internal IP address of the node.
    user              The username for SSH login. Specify root or a user that has sudo permissions. If you leave this parameter empty, root is used by default.
    password          The password for SSH login.
    roleGroups        A list of node roles.
    etcd              The nodes on which etcd is installed. In most cases, etcd is installed on the control nodes.
    control-plane     The control nodes. You can configure multiple control nodes in a cluster.
    worker            The worker nodes, on which VMs are created. In multi-node mode, a cluster must have at least three worker nodes. A control node can run as a worker node at the same time.
    registry          The node on which the image registry is deployed. In most cases, this is the first node in the cluster.
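
    Optionally, verify that each node in hosts is reachable over SSH with the configured username before you run the installer. A minimal sketch using the sample addresses and username above (replace them with your own values; enter each password when prompted):

    for ip in 172.31.50.23 172.31.50.24 172.31.50.25; do
      ssh yop@"$ip" hostname
    done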
  8. Install KSV:

    ./install.sh -m --ratio <overcommit ratio>

    NOTE

    • --ratio is an optional parameter that specifies the overcommit ratio of the KSV cluster. Set <overcommit ratio> based on your business requirements. The value must be an integer ranging from 1 to 10. If you do not specify this parameter, the default value 2 is used.

    • The overcommit ratio determines the maximum number of VMs that can be created on KSV: Maximum number of VMs = Number of CPU cores x Overcommit ratio. For example, with 8 CPU cores and an overcommit ratio of 2, up to 16 VMs can be created.

    • You can configure the overcommit ratio only when you install KSV. You cannot modify it when you upgrade KSV.

    • The installation may take a long time. Wait until all processes are complete.

    If the following command output appears, KSV is installed:

    #####################################################
    ###     Welcome to KubeSphere Virtualization!    ###
    #####################################################
    Console: http://172.31.50.59:30880
    Username: admin
    Password: P@88w0rd
    NOTE: Please change the default password of the admin user after login.
    #####################################################
    https://kubesphere.cloud/ksv/            2022-12-01 14:03:45
    #####################################################
  9. Obtain the URL of the KSV web console and the username and password of the system administrator from the Console, Username, and Password fields in the command output.

    NOTE

    By default, KSV provides the following users:

    • System administrator: username admin and password P@88w0rd

    • Project administrator: username project-default-admin and password 123456

    • Project operator: username project-default-operator and password 123456

  10. (Optional) After KSV is installed, run the following command to query the installation logs:

    ksv logs
