
Install KSV on KubeSphere

Describes how to install KSV on KubeSphere.

In KubeSphere Virtualization (KSV) v1.6.0 and later, you can deploy KSV and KubeSphere in hybrid mode, which lets you manage container and VM workloads at the same time. This topic describes how to install KSV on KubeSphere.

In this example, we use the open source tool KubeKey to install KubeSphere and KSV. For more information about KubeKey, visit KubeKey on GitHub.

Prerequisites

Before you install KSV, make sure the following conditions are met:

  • Before you install KSV, we recommend that you run the following script to clear the disk used for Ceph storage. Replace the device name in the script with the actual one. After the disk is cleared, restart the server.

    DISK="/dev/sdX" \# Zap the disk to a fresh, usable state (zap-all is important, b/c MBR has to be clean) sgdisk --zap-all $DISK \# Wipe a large portion of the beginning of the disk to remove more LVM metadata that may be present dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync \# SSDs may be better cleaned with blkdiscard instead of dd blkdiscard $DISK \# Inform the OS of partition table changes partprobe $DISK
  • If conditions allow, or if KSV is to be installed in a production environment, we recommend that you use SSDs for etcd. By default, the data directory of etcd is /var/lib/etcd.

  • If the system disk is small, or if KSV is to be installed in a production environment, we recommend that you mount a disk of 100 GB or larger to the directory /var/lib/rancher.
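
    A minimal sketch of formatting and mounting such a disk, assuming the new, empty disk is /dev/vdc (a hypothetical device name; replace it with your actual one):

    # WARNING: mkfs destroys all data on the target disk
    sudo mkfs.ext4 /dev/vdc
    sudo mkdir -p /var/lib/rancher
    sudo mount /dev/vdc /var/lib/rancher
    # Persist the mount across reboots
    echo '/dev/vdc /var/lib/rancher ext4 defaults 0 0' | sudo tee -a /etc/fstab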

  • Each server node on which KSV is installed must run Linux with a kernel version of 4.0 or later. We recommend that you use one of the following OSs: Ubuntu 18.04, Ubuntu 20.04, CentOS 7.9, Unity Operating System (UOS), Kylin V10, and EulerOS. Unknown issues may occur on other OSs. More OSs will be supported in later versions.

    Query the OS version:

    cat /etc/issue
  • Make sure your server node meets the following conditions:

    Hardware       Minimum     Recommended
    CPU cores      4 cores     8 cores
    Memory         8 GB        16 GB
    System disk    100 GB      200 GB

    Query the number of CPU cores of your server:

    grep -c "processor" /proc/cpuinfo

    Query the memory size:

    cat /proc/meminfo | grep MemTotal

    Query the available disk size:

    df -hl
  • The server node must have at least one disk that is unformatted and unpartitioned, or one unformatted partition. The minimum size of the disk or partition is 100 GB and the recommended size is 200 GB.

    Query the disk partition of the server node:

    lsblk -f

    For example, the following command output indicates that vdb meets the requirements:

    NAME                  FSTYPE      LABEL UUID                                   MOUNTPOINT
    vda
    └─vda1                LVM2_member       eSO50t-GkUV-YKTH-WsGq-hNJY-eKNf-3i07IB
      ├─ubuntu--vg-root   ext4              c2366f76-6e21-4f10-a8f3-6776212e2fe4   /
      └─ubuntu--vg-swap_1 swap              9492a3dc-ad75-47cd-9596-678e8cf17ff9   [SWAP]
    vdb
  • The server node must support virtualization. If a server node does not support virtualization, an error is reported and KSV cannot be installed.

    Query whether a server node supports virtualization. If the command output is empty, the server node does not support virtualization.

    • x86 architecture
    grep -E '(svm|vmx)' /proc/cpuinfo
    • ARM64 architecture
    ls /dev/kvm
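
    On x86 nodes, you can also print the number of CPU threads that expose virtualization flags; a result of 0 means virtualization is unsupported or disabled in the BIOS:

    grep -Ec '(svm|vmx)' /proc/cpuinfo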

Install dependencies

The socat, conntrack, ebtables, and ipset dependencies are required on your server node. If you have already installed them, skip this step.

If your server node runs on Ubuntu, run the following command to install the dependencies:

sudo apt install socat conntrack ebtables ipset -y

If your server node runs another OS, replace apt with the corresponding package manager.
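
For example, on CentOS the commands would typically be as follows (on CentOS 7, the conntrack command-line tool is provided by the conntrack-tools package):

sudo yum install socat conntrack-tools ebtables ipset -y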

Deploy a Kubernetes cluster

  1. Download KubeKey:

    curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.5 sh -
  2. Grant execute permissions to the kk binary file:

    sudo chmod +x kk
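
    Optionally, verify that the binary works:

    ./kk version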
  3. Create a cluster configuration file. By default, KubeKey generates the file config-sample.yaml in the current directory:

    ./kk create config
  4. Edit file config-sample.yaml:

    vi config-sample.yaml

    Sample code:

    apiVersion: kubekey.kubesphere.io/v1alpha2
    kind: Cluster
    metadata:
      name: sample
    spec:
      hosts:
      - {name: node1, address: 172.16.0.2, internalAddress: 172.16.0.2, user: ubuntu, password: "Qcloud@123"}
      - {name: node2, address: 172.16.0.3, internalAddress: 172.16.0.3, user: ubuntu, password: "Qcloud@123"}
      roleGroups:
        etcd:
        - node1
        control-plane:
        - node1
        worker:
        - node1
        - node2
      controlPlaneEndpoint:
        ## Internal loadbalancer for apiservers
        # internalLoadbalancer: haproxy
        domain: lb.kubesphere.local
        address: ""
        port: 6443
      kubernetes:
        version: v1.23.10
        clusterName: cluster.local
        autoRenewCerts: true
        containerManager: docker
      etcd:
        type: kubekey
      network:
        plugin: calico
        kubePodsCIDR: 10.233.64.0/18
        kubeServiceCIDR: 10.233.0.0/18
        ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
        multusCNI:
          enabled: false
  5. In spec:hosts of file config-sample.yaml, configure your server node.

    Parameter          Description
    name               The custom name of the node.
    address            The node IP address for SSH login.
    internalAddress    The internal IP address of the node.
    user               The username for SSH login. Specify root or a user that has permissions to run sudo commands. If you leave this parameter empty, root is used by default.
    password           The password for SSH login.
  6. In spec:network:plugin of file config-sample.yaml, set the value to kubeovn:

    Sample code:

    network:
      plugin: kubeovn
  7. In spec:network:multusCNI:enabled of file config-sample.yaml, set the value to true:

    Sample code:

    multusCNI:
      enabled: true
  8. Create the Kubernetes cluster:

    ./kk create cluster -f config-sample.yaml
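
    When KubeKey finishes, you can verify the cluster from the control plane node (assuming kubectl there has access to the new cluster):

    kubectl get nodes -o wide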

Deploy KubeSphere and KSV in hybrid mode

  1. Run the following command to create an installer:
    cat <<EOF | kubectl create -f -
    ---
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: clusterconfigurations.installer.kubesphere.io
    spec:
      group: installer.kubesphere.io
      versions:
      - name: v1alpha1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            properties:
              spec:
                type: object
                x-kubernetes-preserve-unknown-fields: true
              status:
                type: object
                x-kubernetes-preserve-unknown-fields: true
      scope: Namespaced
      names:
        plural: clusterconfigurations
        singular: clusterconfiguration
        kind: ClusterConfiguration
        shortNames:
        - cc
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: kubesphere-system
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: ks-installer
      namespace: kubesphere-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: ks-installer
    rules:
    # The original manifest grants '*' on '*' separately per API group;
    # the groups are merged into one equivalent rule here for readability.
    - apiGroups:
      - ""
      - apps
      - extensions
      - batch
      - rbac.authorization.k8s.io
      - apiregistration.k8s.io
      - apiextensions.k8s.io
      - tenant.kubesphere.io
      - certificates.k8s.io
      - devops.kubesphere.io
      - monitoring.coreos.com
      - logging.kubesphere.io
      - jaegertracing.io
      - storage.k8s.io
      - admissionregistration.k8s.io
      - policy
      - autoscaling
      - networking.istio.io
      - config.istio.io
      - iam.kubesphere.io
      - notification.kubesphere.io
      - auditing.kubesphere.io
      - events.kubesphere.io
      - core.kubefed.io
      - installer.kubesphere.io
      - storage.kubesphere.io
      - security.istio.io
      - monitoring.kiali.io
      - kiali.io
      - networking.k8s.io
      - kubeedge.kubesphere.io
      - types.kubefed.io
      - scheduling.k8s.io
      - kubevirt.io
      - cdi.kubevirt.io
      - network.kubesphere.io
      - virtualization.kubesphere.io
      - snapshot.storage.k8s.io
      resources:
      - '*'
      verbs:
      - '*'
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: ks-installer
    subjects:
    - kind: ServiceAccount
      name: ks-installer
      namespace: kubesphere-system
    roleRef:
      kind: ClusterRole
      name: ks-installer
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ks-installer
      namespace: kubesphere-system
      labels:
        app: ks-install
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: ks-install
      template:
        metadata:
          labels:
            app: ks-install
        spec:
          serviceAccountName: ks-installer
          containers:
          - name: installer
            image: kubespheredev/ksv-installer:{ks_product_ver}
            imagePullPolicy: "Always"
            resources:
              limits:
                cpu: "1"
                memory: 1Gi
              requests:
                cpu: 20m
                memory: 100Mi
            volumeMounts:
            - mountPath: /etc/localtime
              name: host-time
          volumes:
          - hostPath:
              path: /etc/localtime
              type: ""
            name: host-time
    EOF
  2. Run the following command to create a cluster configuration file:

    cat <<EOF | kubectl create -f -
    ---
    apiVersion: installer.kubesphere.io/v1alpha1
    kind: ClusterConfiguration
    metadata:
      name: ks-installer
      namespace: kubesphere-system
      labels:
        version: v3.3.1
    spec:
      persistence:
        storageClass: ""
      authentication:
        jwtSecret: ""
      etcd:
        monitoring: false
        endpointIps: localhost
        port: 2379
        tlsEnable: true
      common:
        core:
          console:
            enabled: true
            enableMultiLogin: true
            port: 30880
            type: NodePort
        redis:
          enabled: false
          enableHA: false
          volumeSize: 2Gi
        openldap:
          enabled: false
          volumeSize: 2Gi
        minio:
          volumeSize: 20Gi
        monitoring:
          endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
          GPUMonitoring:
            enabled: false
        gpu:
          kinds:
          - resourceName: "nvidia.com/gpu"
            resourceType: "GPU"
            default: true
        es:
          logMaxAge: 7
          elkPrefix: logstash
          basicAuth:
            enabled: false
            username: ""
            password: ""
          externalElasticsearchHost: ""
          externalElasticsearchPort: ""
      alerting:
        enabled: false
      auditing:
        enabled: false
      devops:
        enabled: false
        jenkinsMemoryLim: 8Gi
        jenkinsMemoryReq: 4Gi
        jenkinsVolumeSize: 8Gi
      events:
        enabled: false
      logging:
        enabled: false
        logsidecar:
          enabled: true
          replicas: 2
      metrics_server:
        enabled: false
      monitoring:
        storageClass: ""
        node_exporter:
          port: 9100
        gpu:
          nvidia_dcgm_exporter:
            enabled: false
      multicluster:
        clusterRole: none
      network:
        networkpolicy:
          enabled: false
        ippool:
          type: none
        topology:
          type: none
      openpitrix:
        store:
          enabled: false
      servicemesh:
        enabled: false
        istio:
          components:
            ingressGateways:
            - name: istio-ingressgateway
              enabled: false
            cni:
              enabled: false
      edgeruntime:
        enabled: false
        kubeedge:
          enabled: false
          cloudCore:
            cloudHub:
              advertiseAddress:
              - ""
            service:
              cloudhubNodePort: "30000"
              cloudhubQuicNodePort: "30001"
              cloudhubHttpsNodePort: "30002"
              cloudstreamNodePort: "30003"
              tunnelNodePort: "30004"
          iptables-manager:
            enabled: true
            mode: "external"
      gatekeeper:
        enabled: false
      virtualization:
        enabled: true
        expressNetworkMTU: 1300
        useEmulation: false
        cpuAllocationRatio: 1
        console:
          port: 30890
          type: NodePort
      terminal:
        timeout: 600
    EOF
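
    Optionally, confirm that the ks-installer pod has started before you query its logs (the exact pod name suffix varies):

    kubectl get pods -n kubesphere-system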
  3. Run the following command to query deployment logs:

    kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

    The deployment may take 10 to 30 minutes, depending on your hardware and network environment. If the following command output appears, the deployment is successful:

    #####################################################
    ###           Welcome to KubeSphere!             ###
    #####################################################
    Console: http://192.168.0.2:30880
    Account: admin
    Password: P@88w0rd
    NOTES:
    1. After you log into the console, please check the
       monitoring status of service components in
       the "Cluster Management". If any service is not
       ready, please wait patiently until all components
       are up and running.
    2. Please change the default password after login.
    #####################################################
    https://kubesphere.io             20xx-xx-xx xx:xx:xx
    #####################################################
  4. Obtain the console address, account, and password from the Console, Account, and Password fields in the command output, and use a browser to log in to the KubeSphere web console.

    NOTE

    Depending on your network environment, you may need to configure traffic forwarding rules and enable port 30880 in the firewall.
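
    Optionally, you can also verify from the command line that all components are up; the deployment is complete when all pods are in the Running or Completed state:

    kubectl get pods --all-namespaces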
