Enabling KSV in KubeSphere
This section describes how to enable KSV in KubeSphere.
KubeSphere Cloud Native Virtualization (KSV) v1.6.0 and later supports converged deployment with the KubeSphere container platform, so that container and virtualization workloads can coexist and be managed together. This section describes how to enable KSV in KubeSphere.
The installation uses the open-source tool KubeKey. For more information about KubeKey, see the KubeKey repository on GitHub.
Prerequisites
Before installation, read and follow these notes:
It is recommended to clean the disks reserved for Ceph with the following script before installation. Replace the device name in the script with the actual one before running it, and restart the server after the cleanup is complete.
DISK="/dev/sdX" \# Zap the disk to a fresh, usable state (zap-all is important, b/c MBR has to be clean) sgdisk --zap-all $DISK \# Wipe a large portion of the beginning of the disk to remove more LVM metadata that may be present dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync \# SSDs may be better cleaned with blkdiscard instead of dd blkdiscard $DISK \# Inform the OS of partition table changes partprobe $DISK
If conditions allow, or if the environment is a production environment, it is recommended to use SSD disks for etcd. The default etcd data directory is /var/lib/etcd.
If the system disk is small, or if the environment is a production environment, it is recommended to mount a disk of 100 GB or larger at the /var/lib/rancher directory.
The server nodes must run a Linux operating system with a kernel version above 4. Ubuntu 18.04, Ubuntu 20.04, CentOS 7.9, CentOS 8.5, UnionTech UOS, Kylin V10, or Huawei EulerOS is recommended. Other operating systems have not been fully tested and may have unknown issues; more operating systems will be supported in the future.
Check the operating system version:
cat /etc/issue
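Because the prerequisite above also requires a kernel version above 4, it is worth checking the running kernel as well:
uname -r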
The hardware configuration of the server nodes must meet the following requirements:
Hardware       Minimum        Recommended
CPU            4 cores        8 cores
Memory         8 GB           16 GB
System disk    100 GB         200 GB
Check the number of CPU cores:
cat /proc/cpuinfo | grep "processor" | sort | uniq | wc -l
Check the memory size:
cat /proc/meminfo | grep MemTotal
Check the available disk space:
df -hl
Each server node must have at least one unformatted and unpartitioned disk, or one unformatted partition. The minimum size of this disk or partition is 100 GB, and 200 GB is recommended.
Check the disk partitions of the server node:
lsblk -f
For example, the following output shows that the vdb disk is a device that meets the requirements:
NAME                  FSTYPE      LABEL UUID                                   MOUNTPOINT
vda
└─vda1                LVM2_member       >eSO50t-GkUV-YKTH-WsGq-hNJY-eKNf-3i07IB
  ├─ubuntu--vg-root   ext4              c2366f76-6e21-4f10-a8f3-6776212e2fe4   /
  └─ubuntu--vg-swap_1 swap              9492a3dc-ad75-47cd-9596-678e8cf17ff9   [SWAP]
vdb
The server nodes must support virtualization. If a server node does not support virtualization, the installation will report an error and KSV cannot be installed.
Check whether a server node supports virtualization (no output means virtualization is not supported):
- x86 architecture:
grep -E '(svm|vmx)' /proc/cpuinfo
- ARM64 architecture:
ls /dev/kvm
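Optionally, on x86 hosts you can also confirm that the KVM kernel modules are loaded (a supplementary check only; on some systems KVM is built into the kernel, so an empty result here is not by itself conclusive):
# Expect kvm plus kvm_intel or kvm_amd when the modules are loaded
lsmod | grep kvm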
Install Dependencies
You need to install socat, conntrack, ebtables, and ipset on the cluster nodes. If these dependencies already exist on the cluster nodes, you can skip this step.
On Ubuntu, run the following command to install the dependencies on the servers:
sudo apt install socat conntrack ebtables ipset -y
If the cluster nodes run another operating system, replace apt with the package manager of that operating system.
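For example, on CentOS a roughly equivalent command would be the following (a sketch only; exact package names, such as conntrack-tools, may vary by distribution and repository):
sudo yum install -y socat conntrack-tools ebtables ipset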
Deploy a Kubernetes Cluster
Run the following command to download the cluster deployment tool KubeKey:
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.5 sh -
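If the server has slow or limited access to GitHub, KubeKey's download script supports selecting a mirror through the KKZONE environment variable (as described in the KubeKey documentation); for example:
# Use the China mirror for the KubeKey download
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.5 sh -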
Run the following command to make the KubeKey binary kk executable:
sudo chmod +x kk
Run the following command to create a cluster configuration file:
./kk create config
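By default this generates config-sample.yaml in the current directory. If you want to pin the Kubernetes version at generation time, kk also accepts a --with-kubernetes flag; for example (a sketch based on KubeKey's documented usage):
./kk create config --with-kubernetes v1.23.10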
Run the following command to edit the installation configuration file config-sample.yaml:
vi config-sample.yaml
The following is an example configuration:
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 172.16.0.2, internalAddress: 172.16.0.2, user: ubuntu, password: "Qcloud@123"}
  - {name: node2, address: 172.16.0.3, internalAddress: 172.16.0.3, user: ubuntu, password: "Qcloud@123"}
  roleGroups:
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.10
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
Set the information of each server under the spec:hosts parameter in the config-sample.yaml configuration file.
Parameter          Description
name               A user-defined name of the server.
address            The SSH login IP address of the server.
internalAddress    The IP address of the server on the internal subnet.
user               The SSH login username of the server. The user must be root or another user with permission to run sudo commands. This parameter can be omitted when using root.
password           The SSH login password of the server.
Set spec:network:plugin in the config-sample.yaml configuration file to kubeovn:
Example configuration:
network:
  plugin: kubeovn
Set spec:network:multusCNI:enabled in the config-sample.yaml configuration file to true:
Example configuration:
multusCNI:
  enabled: true
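After the two changes above, the network section of config-sample.yaml should look roughly like this (CIDR values taken from the sample configuration earlier):
network:
  plugin: kubeovn
  kubePodsCIDR: 10.233.64.0/18
  kubeServiceCIDR: 10.233.0.0/18
  multusCNI:
    enabled: true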
Run the following command to create the Kubernetes cluster:
./kk create cluster -f config-sample.yaml
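When KubeKey finishes, you can optionally verify that all nodes have joined the cluster and are Ready before continuing:
kubectl get nodes -o wide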
Converged Deployment of KubeSphere and KSV
Run the following command to create the installer:
cat <<EOF | kubectl create -f -
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: clusterconfigurations.installer.kubesphere.io
spec:
  group: installer.kubesphere.io
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            x-kubernetes-preserve-unknown-fields: true
          status:
            type: object
            x-kubernetes-preserve-unknown-fields: true
  scope: Namespaced
  names:
    plural: clusterconfigurations
    singular: clusterconfiguration
    kind: ClusterConfiguration
    shortNames:
    - cc
---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ks-installer
rules:
- {apiGroups: [""], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["apps"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["extensions"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["batch"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["rbac.authorization.k8s.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["apiregistration.k8s.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["apiextensions.k8s.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["tenant.kubesphere.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["certificates.k8s.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["devops.kubesphere.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["monitoring.coreos.com"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["logging.kubesphere.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["jaegertracing.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["storage.k8s.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["admissionregistration.k8s.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["policy"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["autoscaling"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["networking.istio.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["config.istio.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["iam.kubesphere.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["notification.kubesphere.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["auditing.kubesphere.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["events.kubesphere.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["core.kubefed.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["installer.kubesphere.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["storage.kubesphere.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["security.istio.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["monitoring.kiali.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["kiali.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["networking.k8s.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["kubeedge.kubesphere.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["types.kubefed.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["scheduling.k8s.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["kubevirt.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["cdi.kubevirt.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["network.kubesphere.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["virtualization.kubesphere.io"], resources: ["*"], verbs: ["*"]}
- {apiGroups: ["snapshot.storage.k8s.io"], resources: ["*"], verbs: ["*"]}
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
- kind: ServiceAccount
  name: ks-installer
  namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-install
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-install
  template:
    metadata:
      labels:
        app: ks-install
    spec:
      serviceAccountName: ks-installer
      containers:
      - name: installer
        image: kubespheredev/ksv-installer:{ks_product_ver}
        imagePullPolicy: "Always"
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 20m
            memory: 100Mi
        volumeMounts:
        - mountPath: /etc/localtime
          name: host-time
      volumes:
      - hostPath:
          path: /etc/localtime
          type: ""
        name: host-time
EOF
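Before continuing, you can optionally confirm that the ks-installer Deployment has started (the pod name is generated, so filter by its label):
kubectl get pods -n kubesphere-system -l app=ks-install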
Run the following command to create the ClusterConfiguration resource:
cat <<EOF | kubectl create -f -
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enabled: true
        enableMultiLogin: true
        port: 30880
        type: NodePort
    redis:
      enabled: false
      enableHA: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
  auditing:
    enabled: false
  devops:
    enabled: false
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  events:
    enabled: false
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
          - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
      iptables-manager:
        enabled: true
        mode: "external"
  gatekeeper:
    enabled: false
  virtualization:
    enabled: true
    expressNetworkMTU: 1300
    useEmulation: false
    cpuAllocationRatio: 1
    console:
      port: 30890
      type: NodePort
  terminal:
    timeout: 600
EOF
Run the following command to view the deployment logs:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
Depending on your hardware and network environment, the installation may take 10 to 30 minutes. The following output indicates that the installation is successful:
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.0.2:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             20xx-xx-xx xx:xx:xx
#####################################################
Obtain the IP address of the KubeSphere web console, the administrator username, and the administrator password from the Console, Account, and Password fields in the output above, and log in to the KubeSphere web console with a web browser.
Note: Depending on your network environment, you may need to configure traffic forwarding rules and allow port 30880 in the firewall.
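For example, on a node protected by firewalld, opening the console port might look like this (a sketch only; adapt it to whichever firewall your environment uses):
# Allow the KubeSphere console NodePort and apply the change
sudo firewall-cmd --permanent --add-port=30880/tcp
sudo firewall-cmd --reload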