Setting Up a Single-Node Kubernetes Cluster with VirtualBox + CentOS

Software Versions

Windows 11 Home, Chinese edition (Insider Preview)

VirtualBox 6.1.26 r145957 (Qt5.6.2)

CentOS 8.5.2111-x86_64


VM Specifications

CPU: 2

Memory: 4 GB

Disk: 20 GB


Installing the OS

Step 1. Use the arrow keys to select "Install CentOS Linux 8" and press Enter to start the installation wizard.

Step 2. Select "English" for the language and "English (United States)" for the keyboard, then click "Continue".

Step 3. Configure the following items, click "Begin Installation", and wait for the installation to complete.
Installation Destination: sda / Automatic
Software Selection: Minimal / Standard
Time & Date: Asia/Shanghai timezone

Step 4. Click "Reboot System" to restart the VM and finish the installation.


System Configuration

Network Configuration

Step 1. Run the following command to check the NIC status.
# nmcli device status
DEVICE  TYPE        STATE           CONNECTION
enp0s3  ethernet    disconnected    --
lo      loopback    unmanaged       --
Step 2. Run the following command to bring the network device up.
# nmcli connection up enp0s3
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/2)
Step 3. Run the following command to let the device connect automatically at boot.
# nmcli connection modify enp0s3 connection.autoconnect yes
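
To confirm the device now comes up, you can re-run the status check; the state should read "connected" (illustrative output):

# nmcli device status
DEVICE  TYPE        STATE       CONNECTION
enp0s3  ethernet    connected   enp0s3
lo      loopback    unmanaged   --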

Installing Docker

Step 1. Run the following command to download the docker-ce repo file.
curl https://download.docker.com/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
Step 2. Run the following command to install the containerd.io dependency.
yum install https://download.docker.com/linux/fedora/30/x86_64/stable/Packages/containerd.io-1.2.6-3.3.fc30.x86_64.rpm
Step 3. Run the following command to install docker-ce.
yum -y install docker-ce
Step 4. Run the following command to start the Docker service at boot.
systemctl enable docker
Step 5. Create a directory to serve as Docker's data directory.
mkdir /opt/docker
Step 6. Create "/etc/docker/daemon.json" to change Docker's data directory.
{
    "data-root": "/opt/docker"
}
Step 7. Run the following command to restart the Docker service.
systemctl restart docker
Step 8. Run the following command to check Docker's data directory (the default is /var/lib/docker).
docker info

The output is as follows:

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.7.1-docker)
  scan: Docker Scan (Docker Inc., v0.12.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.12
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runtime.v1.linux runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc version: v1.0.2-0-g52b36a2
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.18.0-348.el8.x86_64
 Operating System: CentOS Linux 8
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.649GiB
 Name: k8s-01
 ID: HWE5:KXKC:3HZ2:LTAG:7L4S:A4LL:OZK7:UYRQ:HIH6:MDPK:Q56P:6GL3
 Docker Root Dir: /opt/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

As shown, "Docker Root Dir" is now "/opt/docker".
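
As an optional sanity check, you can run a throwaway container to confirm the daemon is healthy with the new data root:

docker run --rm hello-world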


Installing Kubernetes

Step 1. Create "/etc/yum.repos.d/kubernetes.repo" to configure the package repository (using the Aliyun mirror).
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
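
To verify the repo is picked up, you can list the repositories (illustrative):

# yum repolist | grep -i kubernetes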
Step 2. Run the following command to turn swap off.
swapoff -a
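
You can confirm swap is now off; the Swap line in free should be all zeros (illustrative output):

# free -h | grep -i swap
Swap:            0B          0B          0B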
Step 3. Edit "/etc/fstab" and comment out the swap entry so the setting survives reboots.
#
# /etc/fstab
# Created by anaconda on Thu Dec 30 17:03:32 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl-root     /                       xfs     defaults        0 0
UUID=745157b8-dbe9-49c7-a3e5-66a0a18ca1f6 /boot                   xfs     defaults        0 0
#/dev/mapper/cl-swap     none                    swap    defaults        0 0
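
If you prefer to script this edit, a sed one-liner along the following lines should comment out the swap entry shown above (a sketch; verify the file afterwards):

sed -ri.bak 's@^(/dev/mapper/cl-swap\s)@#\1@' /etc/fstab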
Step 4. Add the following to "/etc/docker/daemon.json" so Docker uses systemd as its cgroup driver.
{
    "data-root": "/opt/docker",
    "exec-opts": [
        "native.cgroupdriver=systemd"
    ]
}
Step 5. Run the following command to restart the Docker service.
systemctl restart docker
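
You can confirm the driver change took effect (illustrative output):

# docker info | grep -i 'cgroup driver'
 Cgroup Driver: systemd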
Step 6. Run the following command to install the kubectl, kubelet, and kubeadm tools.
yum install kubectl kubelet kubeadm
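
The kubeadm pre-flight check below warns that the kubelet service is not enabled, so it is worth enabling it now:

systemctl enable kubelet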
Step 7. Run the following commands to open ports 6443 and 10250.
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload
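
To verify the ports are open (illustrative output):

# firewall-cmd --list-ports
6443/tcp 10250/tcp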
Step 8. Run the following command to initialize the Kubernetes control plane.

Note:

  • apiserver-advertise-address: the local IP address on which the API server is exposed
  • image-repository: the image registry; the default, k8s.gcr.io, is unreachable from mainland China, so the Aliyun mirror is specified here
# kubeadm init --kubernetes-version=1.23.1 \
> --apiserver-advertise-address=192.168.3.13 \
> --image-repository registry.aliyuncs.com/google_containers \
> --service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16

On success, the output is as follows:

[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING FileExisting-tc]: tc not found in system path
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 192.168.3.13]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.3.13 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.3.13 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.502328 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: a4ulub.41r2btv5x8wzmoi4
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.3.13:6443 --token a4ulub.41r2btv5x8wzmoi4 \
        --discovery-token-ca-cert-hash sha256:44551f8925148b25cf5e34593712f20eee980866ee1f2ebdb3df959495277d09
Step 9. Following the instructions in the output, run the commands below to finish setting up access.
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Step 10. Run the following command to enable tab completion for kubectl.
source <(kubectl completion bash)
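
To make completion persist across sessions, you can append it to your shell profile (a common approach from the kubectl documentation):

echo 'source <(kubectl completion bash)' >> ~/.bashrc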
Step 11. Run the following command to check node status.
kubectl get nodes

The output is as follows:

NAME         STATUS     ROLES                  AGE    VERSION
k8s-master   NotReady   control-plane,master   167m   v1.23.1
Step 12. Run the following command to check pod status.
kubectl get pod --all-namespaces

The output is as follows:

NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d8c4cb4d-ch74q              0/1     Pending   0          166m
kube-system   coredns-6d8c4cb4d-npkzb              0/1     Pending   0          166m
kube-system   etcd-k8s-master                      1/1     Running   0          166m
kube-system   kube-apiserver-k8s-master            1/1     Running   0          166m
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          166m
kube-system   kube-proxy-42rjg                     1/1     Running   0          166m
kube-system   kube-scheduler-k8s-master            1/1     Running   0          166m

As the output shows, the k8s-master node is "NotReady" because the "coredns-xxx" pods have not started yet; a pod network needs to be installed first.

Step 13. Run the following command to install the Calico network.
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

The output is as follows:

configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
Step 14. Re-run the node and pod status commands.
# kubectl get nodes
NAME         STATUS   ROLES                  AGE    VERSION
k8s-master   Ready    control-plane,master   3h2m   v1.23.1
# kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-647d84984b-pmdc4   0/1     ContainerCreating   0          5m17s
kube-system   calico-node-fwvbt                          1/1     Running             0          5m18s
kube-system   coredns-6d8c4cb4d-ch74q                    1/1     Running             0          3h2m
kube-system   coredns-6d8c4cb4d-npkzb                    1/1     Running             0          3h2m
kube-system   etcd-k8s-master                            1/1     Running             0          3h2m
kube-system   kube-apiserver-k8s-master                  1/1     Running             0          3h2m
kube-system   kube-controller-manager-k8s-master         1/1     Running             0          3h2m
kube-system   kube-proxy-42rjg                           1/1     Running             0          3h2m
kube-system   kube-scheduler-k8s-master                  1/1     Running             0          3h2m

The node and all pods are now running normally.


Installing the Dashboard

Step 1. Download the recommended.yaml manifest.
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml

In my case the download kept failing with the following error:

# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
--2022-01-03 20:19:18--  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 0.0.0.0, ::
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|0.0.0.0|:443... failed: Connection refused.
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|::|:443... failed: Connection refused.

After trying various workarounds without success, I ended up downloading the file directly from the source:

https://github.com/kubernetes/dashboard/blob/master/aio/deploy/recommended.yaml

Since the tags were unreachable, I used the manifest from the current master branch. Its contents are:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.4.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.7
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
Step 2. Modify the configuration so the dashboard is reachable from outside the cluster.

In the Service definition, set "spec.type" to "NodePort" and add "nodePort: 30443" under "spec.ports" so the dashboard can be accessed externally.

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443
  selector:
    k8s-app: kubernetes-dashboard
Step 3. Run the following command to install the dashboard.
kubectl apply -f recommended.yaml
Step 4. Check pod status; the dashboard pods should be Running.
# kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS    RESTARTS      AGE
kube-system            calico-kube-controllers-647d84984b-pmdc4     1/1     Running   1 (18h ago)   19h
kube-system            calico-node-fwvbt                            1/1     Running   1 (18h ago)   19h
kube-system            coredns-6d8c4cb4d-ch74q                      1/1     Running   1 (18h ago)   21h
kube-system            coredns-6d8c4cb4d-npkzb                      1/1     Running   1 (18h ago)   21h
kube-system            etcd-k8s-master                              1/1     Running   1 (18h ago)   21h
kube-system            kube-apiserver-k8s-master                    1/1     Running   1 (18h ago)   21h
kube-system            kube-controller-manager-k8s-master           1/1     Running   1 (18h ago)   21h
kube-system            kube-proxy-42rjg                             1/1     Running   1 (18h ago)   21h
kube-system            kube-scheduler-k8s-master                    1/1     Running   1 (18h ago)   21h
kubernetes-dashboard   dashboard-metrics-scraper-799d786dbf-gp8ts   1/1     Running   0             33m
kubernetes-dashboard   kubernetes-dashboard-7577b7545-z6gnx         1/1     Running   0             33m
Step 5. Access the dashboard through port 30443.
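
For example, from the host browse to https://192.168.3.13:30443 (the VM's IP from the kubeadm init step). The dashboard serves a self-signed certificate, so a quick connectivity check from a shell needs curl's insecure flag (illustrative):

curl -k https://192.168.3.13:30443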


Creating an admin User

Step 1. Create a file named "admin-user.yaml" with the following content.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
Step 2. Run the following command to create the admin user.
kubectl apply -f admin-user.yaml
Step 3. Create a file named "admin-user-role-binding.yaml" with the following content.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
Step 4. Run the following command to bind the cluster-admin role to the admin user.
kubectl apply -f admin-user-role-binding.yaml
Step 5. Run the following command to retrieve the admin user's token.
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

The output is shown below; log in to the dashboard with the token it contains.

Name:         admin-user-token-jg796
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 1d3d0732-200e-41cf-bd8c-3bcc67f358c3

Type:  kubernetes.io/service-account-token

Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImJrYkYzVm9teGU3aEFfOVJ6SjR3d016OTdvQTZsWUtFNllnT3MwbFhXa28ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWpnNzk2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxZDNkMDczMi0yMDBlLTQxY2YtYmQ4Yy0zYmNjNjdmMzU4YzMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.kxtzUlQ6YD6iewEFuazPxptYsieBYo07EYyPzXI0Kl7m81nVo5ZH4HShAt0M_gnj1ZLON8i20ZUhuqLrNQVqyRfMs6JlZdOq7ljTlvNAoiy_MhGmsMDDIQBCCtQOASunrKqNKd61HxwBFfk9ThLXfPnPs2cW3sAAqYEK92IWZbV_VnuYX7YgJVTw7H9SCgYVmRel6Gx-Ws5U097uhL3-QifHBgxHBeCH4KDZ0wwKsaYBXPmKMpA_uClj_eZ7oqEmKlVjOioVnoY8BDIYsNhrVg6NsSHDVbaMbvuusDkvDq3-chJpBAoYxc645FKZaRftLdIa2ja6apF8z2W8fg7xwQ
ca.crt:     1099 bytes

FAQ

Starting the VM fails with "The VM session was closed before any attempt to power it on".

Solution:

Open Task Manager and end the task named "VirtualBox Headless Frontend".
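
Equivalently, from a Windows command prompt you can kill the process directly (assuming the headless frontend runs as VBoxHeadless.exe):

taskkill /F /IM VBoxHeadless.exe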


