Upgrade KSV

This topic describes how to upgrade KubeSphere Virtualization (KSV). KSV uses the open-source tool KubeKey to create a private registry during the upgrade. For more information about KubeKey, visit the KubeKey repository on GitHub.

Prerequisites

  • Make sure the current version of KSV is v1.5.0. If your KSV version is earlier than v1.5.0, upgrade it to v1.5.0 before you perform the following steps.

  • Upgrading KSV to v1.6.1 replaces the CNI with Kube-OVN, during which pod networks are rebuilt. To prevent data loss, make sure you have backed up all critical data, such as network configurations, users, and VMs (a version check and backup are sketched after this list).
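
    A quick version check and a coarse backup are sketched below. This is only a sketch: it assumes the ks-installer image tag reflects the installed KSV version and that users and VMs are exposed as the user and vm resource types used elsewhere in this guide; adjust the resource names to your environment.

    # Check the installed version via the ks-installer image tag (assumption: the tag matches the KSV version)
    kubectl -n kubesphere-system get deploy ks-installer -o jsonpath='{.spec.template.spec.containers[0].image}'
    # Export users and VM definitions as a coarse backup (assumption: these resource types exist in the cluster)
    kubectl get user -o yaml > users-backup.yaml
    kubectl get vm -A -o yaml > vms-backup.yaml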

Step 1: Push an image to a local registry

  1. Transfer the installation package of KSV v1.6.1 to a node on which you want to perform upgrade, and log in to the node.

  2. Decompress the package and go to the generated directory. Replace <package name> with the actual name of the installation package and <directory> with the directory generated after decompression.

    tar -zxvf <package name>
    cd <directory>
  3. Make the kk binary executable:

    sudo chmod +x kk
  4. Edit the configuration file config-sample.yaml:

    vi config-sample.yaml
  5. In spec:hosts, specify the server nodes on which KSV is to be upgraded.

    Sample code:

    hosts:
    - {name: node1, address: 172.31.50.23, internalAddress: 172.31.50.23, user: yop, password: "zhu1241jie"}
    - {name: node2, address: 172.31.50.24, internalAddress: 172.31.50.24, user: yop, password: "zhu1241jie"}
    - {name: node3, address: 172.31.50.25, internalAddress: 172.31.50.25, user: yop, password: "zhu1241jie"}
    roleGroups:
      etcd:
      - node1
      - node2
      - node3
      control-plane:
      - node1
      - node2
      - node3
      worker:
      - node1
      - node2
      - node3
      registry:
      - node1

    The following list describes the parameters.

    • hosts: A list of server nodes.
    • name: The name of the node.
    • address: The node IP address for SSH login.
    • internalAddress: The internal IP address of the node.
    • user: The username for SSH login. Specify root or a user that has permission to run sudo commands. If you leave this parameter empty, root is used by default.
    • password: The password for SSH login.
    • roleGroups: A list of node roles.
    • etcd: The node on which etcd is installed. In most cases, etcd is installed on a control node.
    • control-plane: The control node. You can configure multiple control nodes in the cluster.
    • worker: The worker node. You can create VMs on worker nodes. In multi-node mode, a cluster must have at least three worker nodes. A control node can also run as a worker node.
    • registry: The node on which the registry is deployed. In most cases, this is the first node in the cluster.
  6. In spec:registry:privateRegistry, configure the registry as follows, and save the file.

    registry:
      privateRegistry: "dockerhub.kubekey.local:5000"
      auths:
        "dockerhub.kubekey.local:5000":
          skipTLSVerify: true
      namespaceOverride: ""
      registryMirrors: []
      insecureRegistries: []
  7. Run the following command to push the installation image to the registry:

    bin/kk artifact images push -f config-sample.yaml -a kubekey-artifact.tar.gz

    If the image push fails because kubekey-artifact.tar.gz cannot be unarchived, run the following command to clear bin/kubekey and retry the push:

    rm -rf bin/kubekey/*

    If output similar to the following appears, the images are pushed successfully:

     _  __ _______      __
    | |/ // ____\ \    / /
    | ' /| (___  \ \  / /
    |  <  \___ \  \ \/ /
    | . \ ____) |  \  /
    |_|\_\_____/    \/

    19:49:26 CST [GreetingsModule] Greetings
    19:49:26 CST message: [node3] Greetings, KubeKey!
    19:49:26 CST message: [node1] Greetings, KubeKey!
    19:49:27 CST message: [node2] Greetings, KubeKey!
    19:49:27 CST success: [node3]
    19:49:27 CST success: [node1]
    19:49:27 CST success: [node2]
    19:49:27 CST [UnArchiveArtifactModule] Check the KubeKey artifact md5 value
    19:49:27 CST success: [LocalHost]
    19:51:16 CST success: [LocalHost]
    19:51:16 CST [ChownWorkerModule] Chown ./kubekey dir
    19:51:16 CST success: [LocalHost]
    19:51:16 CST Pipeline[ArtifactImagesPushPipeline] execute successfully
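
    To confirm that the images landed in the local registry, you can query the registry's catalog API. This is only a sketch: it assumes the node can resolve dockerhub.kubekey.local and that the registry uses a self-signed certificate (hence -k).

    # List repositories in the private registry (Docker Registry HTTP API v2)
    curl -k https://dockerhub.kubekey.local:5000/v2/_catalog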

Step 2: Delete Calico

  1. Delete the network resources on Kubernetes:

    kubectl delete -f /etc/kubernetes/network-plugin.yaml
  2. Verify that Calico is deleted:

    kubectl get pod -n kube-system

    If no pod related to Calico is returned, Calico is deleted:

    NAME                         READY   STATUS    RESTARTS   AGE
    coredns-7448499f4d-974cv     1/1     Running   0          11h
    kube-multus-ds-amd64-9cclk   1/1     Running   0          11h
    kube-multus-ds-amd64-f5ksq   1/1     Running   0          11h
    kube-multus-ds-amd64-wxb87   1/1     Running   0          11h
  3. Clear Calico-related configuration:

    rm -f /etc/cni/net.d/10-calico.conflist /etc/cni/net.d/calico-kubeconfig
    rm -f /opt/cni/bin/calico /opt/cni/bin/calico-ipam
    rm -rf /etc/cni/net.d/00-multus.conf

    NOTE

    If KSV is installed in multi-node mode, run the preceding commands on each node to clear the Calico configuration (a loop over the remaining nodes is sketched below).
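
    If KSV runs on multiple nodes, a minimal loop to clear the configuration on the remaining nodes from the current node might look like the following. The node addresses and SSH user come from the sample config-sample.yaml above and are assumptions; replace them with your own values.

    # Assumption: SSH access to each remaining node as the user defined in config-sample.yaml
    for node in 172.31.50.24 172.31.50.25; do
      ssh yop@$node "sudo rm -f /etc/cni/net.d/10-calico.conflist /etc/cni/net.d/calico-kubeconfig /opt/cni/bin/calico /opt/cni/bin/calico-ipam && sudo rm -rf /etc/cni/net.d/00-multus.conf"
    done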

Step 3: Install Kube-OVN

  1. Download the installation script of Kube-OVN:

    wget https://raw.githubusercontent.com/kubeovn/kube-ovn/release-1.10/dist/images/install.sh
  2. Edit the file install.sh:

    vim install.sh

    Modify REGISTRY and VERSION based on the following settings:

    REGISTRY="dockerhub.kubekey.local:5000/kubeovn"
    VERSION="v1.10.7"
  3. Run the following script to install Kube-OVN:

    bash install.sh

  4. Run the following command to restart Multus:

    kubectl -n kube-system rollout restart ds kube-multus-ds-amd64
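
    Before moving on, you can check that the Kube-OVN components are running in kube-system. The pod name patterns below (kube-ovn, ovn-central, ovs-ovn) are the usual Kube-OVN workloads and are an assumption about this release.

    # Verify that the Kube-OVN and OVN/OVS pods are Running
    kubectl -n kube-system get pod | grep -E 'kube-ovn|ovn-central|ovs-ovn'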

Step 4: Modify the ks-installer image

  1. Modify the ks-installer image:

    kubectl edit deploy ks-installer -n kubesphere-system

    Modify spec:containers:image based on the following settings:

    spec:
      containers:
      - image: dockerhub.kubekey.local:5000/kubespheredev/ksv-installer:v1.6.1
        imagePullPolicy: Always
        name: ks-installer
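
    If you prefer a non-interactive change, kubectl set image achieves the same result; this is an equivalent sketch that assumes the container is named ks-installer, as shown in the snippet above.

    # Non-interactive alternative to kubectl edit
    kubectl -n kubesphere-system set image deploy/ks-installer \
      ks-installer=dockerhub.kubekey.local:5000/kubespheredev/ksv-installer:v1.6.1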

    NOTE

    The upgrade may require a long period of time. Wait until all processes are complete.

  2. After KSV is upgraded, run the following command to query the upgrade logs:

    ksv logs

    If output similar to the following appears, KSV is upgraded successfully:

    ****
    Waiting for all tasks to be completed ...
    task openpitrix status is successful (1/5)
    task network status is successful (2/5)
    task multicluster status is successful (3/5)
    task virtualization status is successful (4/5)
    task monitoring status is successful (5/5)
    ****
    Collecting installation results ...
    #####################################################
    ###   Welcome to KubeSphere Virtualization!       ###
    #####################################################
    Console: http://172.31.50.23:30880
    Username: admin
    Password: P@88wOrd
  3. In the command output, obtain the Console, Username, and Password values to log in to the KSV web console.

    NOTE

    Depending on your network environment, you may need to configure traffic forwarding rules and open port 30880 in your firewall (a firewalld example follows).
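
    For example, on a node that uses firewalld, opening the console port might look like the following; firewalld is an assumption here, so use the equivalent commands for your firewall.

    # Open the KSV console port (assumption: firewalld is in use)
    sudo firewall-cmd --permanent --add-port=30880/tcp
    sudo firewall-cmd --reload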

  4. On the VMs page, view the status of each VM. In normal cases, each VM is in the KsVmUpgradeProcess state.
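
    If you prefer the command line, a rough check of VM status is sketched below; it assumes KSV exposes VMs through the KubeVirt-style vm resource, which may differ in your environment.

    # List VMs across all namespaces and inspect their status
    kubectl get vm -A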

Step 5: Create subnet mappings

After KSV is upgraded, you must create subnet mappings to ensure network connectivity. If you used physical subnets for IP pools in v1.5.0, configure physical subnets in the new version. If you used virtual subnets for IP pools in v1.5.0, configure virtual subnets in the new version. Perform the following steps as needed:

Configure virtual subnets

In this example, the virtual subnet vxnet-w6vks5m0: 172.50.10.0/24 is used in KSV v1.5.0. This subnet does not fall within the IP range of any available VPC, so you must configure an IP range for VPCs.

  1. Configure an IP range for VPCs on the node on which KSV v1.6.1 is installed.

    kubectl edit ksv ksv-config -n kubesphere-virtualization-system

    Modify spec:configuration:cidrBlocks based on the following settings, and save the file.

    spec:
      configuration:
        accessMode: ReadWriteMany
        cidrBlocks:
        - 10.10.0.0/16
        - 10.20.0.0/16
        - 10.30.0.0/16
        - 10.40.0.0/16
        - 10.50.0.0/16
        - 10.60.0.0/16
        - 10.70.0.0/16
        - 10.80.0.0/16
        - 10.90.0.0/16
        - 172.16.0.0/16
        - 172.17.0.0/16
        - 172.18.0.0/16
        - 172.19.0.0/16
        - 172.20.0.0/16
        - 192.168.0.0/16
        - 172.50.0.0/16
  2. Restart ksv-apiserver to apply the new configuration:

    kubectl rollout restart deploy ksv-apiserver -n kubesphere-virtualization-system
  3. In the KSV web console, go to the list of VPCs, and click Create in the upper-right corner.

  4. In the dialog box that appears, configure parameters for the VPC.

    • IP Range: Set the value to 172.50.0.0/16, which you configured in the previous steps.

    • Default Virtual Subnet: The third octet of the virtual subnet's IP range must be the same as in v1.5.0. In this example, use 172.50.10.0/24.

    NOTE

    For more information, see Create a VPC.

  5. Associate the IP pool with the virtual subnets in Kube-OVN:

    # Delete the webhook configuration
    kubectl delete validatingwebhookconfigurations network.kubesphere.io
    # Modify the IP pool mapping (record the ID of the IP pool in v1.5.0, or press F12 on the VMs page to query it)
    kubectl edit ippool.network.kubesphere.io -n kubesphere-system vxnet-w6vks5m0
    # Add the following annotation to associate the IP pool with the new virtual subnet
    subnet.network.kubesphere.io: net-yqmrrfta
    # Restart ksv-controller-manager
    kubectl -n kubesphere-virtualization-system rollout restart deploy ksv-controller-manager
  6. Return to the KSV web console and verify that the virtual subnets in v1.5.0 are mapped to those in Kube-OVN. In normal cases, the IP addresses of the VMs are the same as those in v1.5.0.
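
    A command-line cross-check of the mapping is sketched below; it assumes the Kube-OVN Subnet CRD group kubeovn.io and uses the example resource names from this guide.

    # Confirm the annotation on the IP pool (example name from this guide)
    kubectl -n kubesphere-system get ippool.network.kubesphere.io vxnet-w6vks5m0 -o yaml | grep subnet.network.kubesphere.io
    # List the subnets managed by Kube-OVN
    kubectl get subnets.kubeovn.io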

Configure physical subnets

If you use physical subnets in KSV v1.5.0, you can perform the following steps to create physical subnets after KSV is upgraded:

  1. In the KSV web console, go to the list of physical subnets, and click Create in the upper-right corner.

  2. In the dialog box that appears, configure parameters for the physical subnet.

    NOTE

    The IP range of the physical subnets in the new version must be the same as in v1.5.0.

  3. Associate the IP pool with the physical subnets in Kube-OVN:

    # Delete the webhook configuration
    kubectl delete validatingwebhookconfigurations network.kubesphere.io
    # Modify the IP pool mapping (record the ID of the IP pool in v1.5.0, or press F12 on the VMs page to query it)
    kubectl edit ippool.network.kubesphere.io -n kubesphere-system vxnet-3i9y4tq3
    # Add the following annotation to associate the IP pool with the new physical subnet
    subnet.network.kubesphere.io: net-ox3vg6fa
    # Restart ksv-controller-manager
    kubectl -n kubesphere-virtualization-system rollout restart deploy ksv-controller-manager
  4. Return to the KSV web console and verify that the physical subnets in v1.5.0 are mapped to those in Kube-OVN. In normal cases, the IP addresses of the VMs are the same as those in v1.5.0.

Step 6: Modify the user kind

  1. Generate the user configuration for v1.5.0:

    kubectl get user -o yaml > new_user.yaml
  2. Create a script (user.sh) to replace the user kind with KsvUser and delete redundant fields:

    #!/bin/bash
    sed -i "s/User/KsvUser/g" new_user.yaml
    sed -i "/creationTimestamp/d" new_user.yaml
    sed -i "/finalizers:/d" new_user.yaml
    sed -i "/finalizers.kubesphere.io/d" new_user.yaml
    sed -i "/generation:/d" new_user.yaml
    sed -i "/resourceVersion:/d" new_user.yaml
    sed -i "/uid:/d" new_user.yaml
    sed -i "/lastTransitionTime:/d" new_user.yaml
    sed -i "/lastLoginTime:/d" new_user.yaml
    sed -i "/state:/d" new_user.yaml
    sed -i "/status:/d" new_user.yaml
  3. Run the script and apply the new configuration:

    # Run the script
    sh user.sh
    # Apply the new user configuration
    kubectl apply -f new_user.yaml
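
    As a quick sanity check after applying the file, you can list the migrated objects. The ksvusers resource name is an assumption based on the KsvUser kind used above; adjust it if the CRD is served under a different name.

    # List migrated users (assumption: KsvUser is served as ksvusers)
    kubectl get ksvusers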

Step 7: Migrate the activation code

After you complete the preceding steps, if you activated KSV v1.5.0 and the license has not expired, perform the following steps to query the original activation code and use it to activate KSV v1.6.1:

  1. Query the original activation code:

    kubectl -n kubesphere-system get secrets ks-license -ojsonpath='{.data.license}'
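
    Kubernetes stores Secret data base64-encoded, so the jsonpath query above returns the encoded value. If you need the decoded activation code, you can pipe the output through base64; this reflects general Kubernetes Secret behavior rather than a KSV-specific requirement.

    # Decode the license value stored in the Secret
    kubectl -n kubesphere-system get secrets ks-license -ojsonpath='{.data.license}' | base64 -d
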
  2. Copy the returned activation code to the KSV web console and activate KSV v1.6.1. For information about how to activate the license, see Activate the license.
