
Install K8s, BigIP, Gateway API Test Environment (1)


Prepare the Environment

  • K8s controller node: vxlan-k8s
  • K8s pod node (two): vxlan-test-1 vxlan-test-2
  • BigIP: vxlan-bigip

K8s cluster network topology


Adjust the Linux Environment (all CentOS nodes)

Run the following steps on every CentOS node:

  • Upgrade packages
# this also upgrades systemd
sudo yum -y update
  • Install tools
sudo yum install -y nc git wget etcd
  • Check whether the API server port is already in use (a connection refused means it is free):
sudo nc -v 127.0.0.1 6443
  • Disable swap
# disable temporarily
free -m && sudo swapoff -a && free -m
# no reboot is needed after this

# disable permanently
sudo sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab && cat /etc/fstab
# a reboot is needed after this change
  • Configure SELinux
# temporarily set to permissive mode
sudo setenforce 0
# no reboot is needed after this

# permanently set to permissive mode
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# a reboot is needed after this change
  • Disable the firewall
sudo systemctl stop firewalld.service
sudo systemctl disable firewalld.service
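Optionally, the firewall state can be double-checked. A minimal verification (expected output is "inactive" / "not running" once firewalld has been stopped and disabled):

# confirm firewalld is no longer running
sudo systemctl is-active firewalld
sudo firewall-cmd --state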

Install the Container Runtime (all CentOS nodes)

Pre-installation preparation

Forwarding IPv4 and letting iptables see bridged traffic

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system

# Verify that the br_netfilter and overlay modules are loaded:
lsmod | grep br_netfilter
lsmod | grep overlay

# Verify that net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables and net.ipv4.ip_forward are set to 1 in your sysctl config:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

There are several options for the container runtime:

  • containerd
  • CRI-O
  • Docker Engine
  • Mirantis Container Runtime

Here I choose Docker, which I am most familiar with.

Install Docker Engine

Basically, just follow the Install Docker Engine on CentOS documentation.

# The centos-extras repository must be enabled. 
# The overlay2 storage driver is recommended.


# Uninstall old versions
# Images, containers, volumes, and networks stored in /var/lib/docker/ aren’t automatically removed when you uninstall Docker.
 sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
                  
# Set up the repository                  
 sudo yum install -y yum-utils
 sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
    
# Install the latest version
 sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Start Docker
 sudo systemctl start docker
# Verify that Docker Engine installation is successful by running the hello-world image.
 sudo docker run hello-world

Configure the Cgroup Driver for the Container Runtime

Cgroups are used to manage the resource (CPU/memory) usage of processes (pods). There are two main cgroup drivers:

cgroupfs:

  1. The cgroupfs driver is the default cgroup driver in the kubelet.
  2. The cgroupfs driver is not recommended when systemd is the init system because systemd expects a single cgroup manager on the system.
  3. If you use cgroup v2, use the systemd cgroup driver instead of cgroupfs.

systemd:

  1. If you use systemd as the init system together with the cgroupfs driver, the system ends up with two different cgroup managers (cgroupfs for the kubelet and container runtime, systemd for everything else), and the node can become unstable under resource pressure.
  2. If you configure systemd as the cgroup driver for the kubelet, you must also configure systemd as the cgroup driver for the container runtime (our container runtime here is Docker).

Choose the cgroup driver that matches the operating system's init process. The following command shows what the init process (PID 1) is:

[root@vxlan-k8s ~]# sudo stat /proc/1/exe
  File: '/proc/1/exe' -> '/usr/lib/systemd/systemd'
  Size: 0         	Blocks: 0          IO Block: 1024   symbolic link
Device: 3h/3d	Inode: 9537        Links: 1
Access: (0777/lrwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:system_r:init_t:s0
Access: 2023-02-18 22:31:57.149735874 -0800
Modify: 2023-02-11 22:19:31.529999998 -0800
Change: 2023-02-11 22:19:31.529999998 -0800

# If the system you're on gives /sbin/init as a result
sudo stat /proc/1/exe
  File: '/proc/1/exe' -> '/sbin/init'
stat /sbin/init
  File: ‘/sbin/init’ -> ‘/lib/systemd/systemd’

As can be seen above, the CentOS init process is systemd. The cgroupfs driver is not recommended when systemd is the init system, because systemd expects a single cgroup manager on the system.

Both the kubelet and the container runtime (Docker) need to use cgroups, and they must use the same cgroup driver.

To interface with control groups, the kubelet and the container runtime need to use a cgroup driver. It's critical that the kubelet and the container runtime use the same cgroup driver and are configured the same.

Configure Docker to use systemd

Docker's cgroup driver can be configured in /etc/docker/daemon.json:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

After changing the configuration, run sudo systemctl restart docker.

# Verify by running the hello-world image.
 sudo docker run hello-world

If an error occurs, upgrade systemd or run yum update -y.
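A quick way to confirm that Docker actually picked up the systemd driver (a minimal check; the expected output assumes the daemon.json above has been applied and Docker restarted):

# should print: systemd
docker info --format '{{ .CgroupDriver }}'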

Configure the kubelet to use systemd

Add the systemd setting to the kubeadm configuration file, as follows.

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
...
cgroupDriver: systemd

The concrete procedure is covered in the kubeadm sections below.

Install the CRI Adapter (cri-dockerd) (all CentOS nodes)

This adapter provides a shim for Docker Engine that lets you control Docker via the Kubernetes Container Runtime Interface.

The cri-dockerd GitHub repository documentation describes cri-dockerd as an adapter that exposes the Container Runtime Interface on top of Docker, not a container runtime in its own right.

Because my container runtime here is Docker, and the kubelet removed its built-in Docker CRI support in version 1.24, a Docker CRI adapter has to be installed manually first.

Note: Docker Engine does not implement the CRI which is a requirement for a container runtime to work with Kubernetes. For that reason, an additional service cri-dockerd has to be installed. cri-dockerd is a project based on the legacy built-in Docker Engine support that was removed from the kubelet in version 1.24.

Installing the Docker CRI adapter just follows the README.md of that GitHub repo:

# download the source code
git clone https://github.com/Mirantis/cri-dockerd.git

# install
# Run these commands as root
###Install GO###
wget https://storage.googleapis.com/golang/getgo/installer_linux
chmod +x ./installer_linux
./installer_linux
source ~/.bash_profile

cd cri-dockerd
mkdir bin
go build -o bin/cri-dockerd
mkdir -p /usr/local/bin
install -o root -g root -m 0755 bin/cri-dockerd /usr/local/bin/cri-dockerd
cp -a packaging/systemd/* /etc/systemd/system
sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
systemctl daemon-reload
systemctl enable cri-docker.service
systemctl enable --now cri-docker.socket

# (on each node) check the service after starting it
[root@vxlan-k8s ~]# systemctl status cri-docker.service
● cri-docker.service - CRI Interface for Docker Application Container Engine
   Loaded: loaded (/etc/systemd/system/cri-docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2023-02-12 01:11:00 PST; 1 weeks 0 days ago
     Docs: https://docs.mirantis.com
 Main PID: 22415 (cri-dockerd)
    Tasks: 12
   Memory: 24.2M
   CGroup: /system.slice/cri-docker.service
           └─22415 /usr/local/bin/cri-dockerd --container-runtime-endpoint fd://

Feb 12 03:09:29 vxlan-k8s.pdsea.f5net.com cri-dockerd[22415]: time="2023-02-12T03:09:29-08:00" level=info msg="Stop pulling image docker.io/flannel/fl...v1.1.2"
Feb 12 03:09:41 vxlan-k8s.pdsea.f5net.com cri-dockerd[22415]: time="2023-02-12T03:09:41-08:00" level=info msg="Pulling image docker.io/flannel/flannel...57.7kB"
Feb 12 03:09:48 vxlan-k8s.pdsea.f5net.com cri-dockerd[22415]: time="2023-02-12T03:09:48-08:00" level=info msg="Stop pulling image docker.io/flannel/fl...0.21.1"
Feb 12 03:09:56 vxlan-k8s.pdsea.f5net.com cri-dockerd[22415]: time="2023-02-12T03:09:56-08:00" level=info msg="Will attempt to re-write config file /v...t.com]"
Feb 12 03:09:56 vxlan-k8s.pdsea.f5net.com cri-dockerd[22415]: time="2023-02-12T03:09:56-08:00" level=info msg="Will attempt to re-write config file /v...t.com]"
Feb 12 03:09:56 vxlan-k8s.pdsea.f5net.com cri-dockerd[22415]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[st...
Feb 12 03:09:56 vxlan-k8s.pdsea.f5net.com cri-dockerd[22415]: delegateAdd: netconf sent to delegate plugin:
Feb 12 03:09:56 vxlan-k8s.pdsea.f5net.com cri-dockerd[22415]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"10...ridge"}
Feb 12 03:09:56 vxlan-k8s.pdsea.f5net.com cri-dockerd[22415]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[st...
Feb 12 03:09:56 vxlan-k8s.pdsea.f5net.com cri-dockerd[22415]: delegateAdd: netconf sent to delegate plugin:

The README.md also mentions that cri-dockerd's default container network interface is cni; the default is used here.

The default network plugin for cri-dockerd is set to cni on Linux. To change this, --network-plugin=${plugin} can be passed in as a command line argument if invoked manually, or configured in the systemd unit file.
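To see whether the running unit overrides that default, a small check like the following can be used (a sketch; it assumes the unit file was installed under /etc/systemd/system as in the steps above):

# look for an explicit --network-plugin flag in the loaded unit; no match means the default (cni) is in use
systemctl cat cri-docker.service | grep -i 'network-plugin' || echo "no explicit --network-plugin flag; default (cni) in use"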

After installing cri-dockerd, the following reference table shows that cri-dockerd listens on the socket unix:///var/run/cri-dockerd.sock; this will be used later when configuring kubeadm.

# Linux Runtime                       Path to Unix domain socket
containerd                            unix:///var/run/containerd/containerd.sock
CRI-O                                 unix:///var/run/crio/crio.sock
Docker Engine (using cri-dockerd)     unix:///var/run/cri-dockerd.sock
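A quick sanity check that the adapter is actually up and the socket exists (a minimal sketch):

# both the socket and the service should be active
systemctl is-active cri-docker.socket cri-docker.service
ls -l /var/run/cri-dockerd.sock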

Install kubeadm, kubelet and kubectl (all CentOS nodes)

  • kubeadm: the command to bootstrap the cluster.
  • kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
  • kubectl: the command line util to talk to your cluster.

kubeadm does not install or manage kubelet or kubectl for you, so pay attention to the versions you install. The k8s API server installed by kubeadm is compatible with kubelets of a lower version; for example, a 1.8.0 k8s API server works with a 1.7.0 kubelet.

The installation commands on Linux are as follows:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

sudo systemctl enable --now kubelet
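Given the version-skew note above, it is worth confirming which versions were actually installed; a minimal check:

# confirm the installed versions are compatible with each other
kubeadm version -o short
kubelet --version
kubectl version --client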

Run kubeadm (k8s master node)

Here a kubeadm configuration file is used to install the cluster. The configuration file created is as follows:

# [root@vxlan-k8s ~]# cat kubeadm-config.yaml
kind: InitConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
localAPIEndpoint:
  advertiseAddress: "10.10.110.225"
  bindPort: 6443
nodeRegistration:
  # https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
  criSocket: "unix:///var/run/cri-dockerd.sock"
---
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.26.0
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "10.200.0.0/16"
  dnsDomain: "cluster.local"
clusterName: "pzhang-cluster"
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd

All of the configuration above is documented in the kubeadm-config API.

Key settings:

  • criSocket: "unix:///var/run/cri-dockerd.sock" points at the socket the cri-dockerd adapter listens on. If it is omitted and kubeadm finds more than one socket on the host, it may fail with: Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
  • The k8s API server listens on IP advertiseAddress: "10.10.110.225", port bindPort: 6443.
  • Pod subnets in this cluster are carved out of podSubnet: "10.200.0.0/16", e.g. 10.200.1.0/24.
  • The service subnet in this cluster is serviceSubnet: "10.96.0.0/16".
  • The container runtime's cgroup driver was configured as systemd earlier, so here cgroupDriver: systemd.

After creating the kubeadm-config.yaml configuration file, run the following command to create the k8s cluster master node:

# For test only: kubeadm init --dry-run --cri-socket=unix:///var/run/cri-dockerd.sock
kubeadm init --config kubeadm-config.yaml

After the command above finishes, save the join instructions from its output to a file; worker nodes are added later with the kubeadm join command shown there:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.110.225:6443 --token 4xxxxxx4ihkxxxxoy \
	--discovery-token-ca-cert-hash sha256:cxxxxxx21e41b5xxxxx028bb8bcxxxxxxxx3d93ffd

Configure admin.conf as described in the output above, and the kubectl command can then be used.
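If the join command from that output was not saved, or the token has expired (tokens expire after 24 hours by default), a fresh one can be generated on the master node:

# prints a complete "kubeadm join ..." command with a new token
kubeadm token create --print-join-command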

Configure and Install Flannel (k8s master node)

The k8s cluster needs a Pod network add-on; here I chose Flannel.

Download the Flannel installation file and modify it:

wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

The modified Flannel manifest is as follows:

#[root@vxlan-k8s ~]# cat kube-flannel.yml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.200.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens192
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: docker.io/flannel/flannel:v0.21.1
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: docker.io/flannel/flannel:v0.21.1
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock

Define the network subnet managed by Flannel:

  net-conf.json: |
    {
      "Network": "10.200.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

Flannel uses a VXLAN overlay network.
Here "Network": "10.200.0.0/16" is chosen as the network range managed by Flannel. Note that this setting must match the pod subnet defined when the cluster was deployed with kubeadm (podSubnet, shown below). A /16 is used because Flannel, in its default automated deployment, carves each node a /24 subnet out of this /16.

kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.26.0
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "10.200.0.0/16"
  dnsDomain: "cluster.local"
clusterName: "pzhang-cluster"
  • When Flannel starts, the interface that the pod subnet traffic goes over can be specified, as follows:
  • The interface Flannel uses at runtime is set with - --iface=ens192. If different k8s worker nodes use different interfaces, a regular expression can be used instead; for example, if one node uses enp0s8 and another uses enp0s9, a configuration like --iface-regex=[enp0s8|enp0s9] works. A quick way to list the candidate interfaces is sketched below.
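To decide what to pass to --iface (or what to match with --iface-regex), list the interfaces and their addresses on each node; a minimal sketch:

# list interface names and their IPv4 addresses on this node
ip -o -4 addr show | awk '{print $2, $4}'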

For more modifications to the Flannel network, see Configuration.
For example, the following changes how pod subnets are allocated.

SubnetLen (integer): The size of the subnet allocated to each host. Defaults to 24 (i.e. /24) unless Network was configured to be smaller than a /22 in which case it is two less than the network.

{
	"Network": "10.0.0.0/8",
	"SubnetLen": 20, // 控制 pod 上可分配的 IP 数量 https://stackoverflow.com/questions/47882274/flannel-config-use-more-than-255-nodes
	"SubnetMin": "10.10.0.0",
	"SubnetMax": "10.99.0.0",
	"Backend": {
		"Type": "udp",
		"Port": 7890
	}
}
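As a worked example (using this cluster's 10.200.0.0/16 for illustration): with the default SubnetLen of 24, each host gets a /24, i.e. about 254 usable pod IPs, and the /16 network has room for 2^(24-16) = 256 host subnets; setting SubnetLen to 20 instead would give each host roughly 4094 pod IPs but leave room for only 16 hosts.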

With all of the above configured, the goal of setting up a k8s cluster test environment is basically achieved.
Install Flannel by running kubectl apply -f kube-flannel.yml.
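After applying the manifest, a quick check that Flannel came up cleanly (a minimal sketch; the subnet.env file is where the Flannel DaemonSet records the node's allocation):

# the kube-flannel DaemonSet pod on every node should be Running
kubectl -n kube-flannel get pods -o wide
# on each node, Flannel records the allocated subnet here
cat /run/flannel/subnet.env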

Add Cluster Worker Nodes (all worker nodes)

Adding worker nodes with kubeadm is simple; the log printed when kubeadm init finished already contains the command for adding a node:

kubeadm join 10.10.110.225:6443 --token 4xxxxxx4ihkxxxxoy \
	--discovery-token-ca-cert-hash sha256:cxxxxxx21e41b5xxxxx028bb8bcxxxxxxxx3d93ffd

Run the command above on each node. Flannel will then carve each node a subnet out of the previously defined 10.200.0.0/16, for example:

  • master: 10.200.0.0/24 (the master node carries a taint by default, so it is not used for deploying pods)
  • worker node1: 10.200.1.0/24
  • worker node2: 10.200.2.0/24
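Once the nodes have joined, the allocation can be confirmed from the master; a minimal check (the podCIDR field holds each node's /24):

# all nodes should show STATUS Ready
kubectl get nodes -o wide
# print each node's assigned pod subnet
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'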

Test the k8s Cluster (master node)

With all of the configuration above completed, the k8s + Flannel cluster is essentially built.
At this point, a Deployment can be created to test network connectivity.
On the k8s master node, create dep.yaml and run kubectl apply -f dep.yaml:

# [root@vxlan-k8s test]# cat dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostnames
spec:
  selector:
    matchLabels:
      app: hostnames
  replicas: 3
  template:
    metadata:
      labels:
        app: hostnames
    spec:
      containers:
      - name: hostnames
        image: k8s.gcr.io/serve_hostname
        ports:
        - containerPort: 9376
          protocol: TCP

Check the Deployment:

[root@vxlan-k8s test]# kubectl get deployment -o wide
NAME               READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES                      SELECTOR
hostnames          3/3     3            3           12d    hostnames    k8s.gcr.io/serve_hostname   app=hostnames

Check the pods created by the Deployment:

[root@vxlan-k8s test]# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS       AGE    IP           NODE                           NOMINATED NODE   READINESS GATES
hostnames-75ccd46585-kpcjl          1/1     Running   0              12d    10.200.2.2   vxlan-test-2.pdsea.f5net.com   <none>           <none>
hostnames-75ccd46585-qws8x          1/1     Running   0              12d    10.200.1.2   vxlan-test-1.pdsea.f5net.com   <none>           <none>
hostnames-75ccd46585-x9z2m          1/1     Running   0              12d    10.200.2.3   vxlan-test-2.pdsea.f5net.com   <none>           <none>

Here you can see the pod IPs: the Deployment's pods were scheduled onto worker nodes with different subnets.

[root@vxlan-k8s ~]# kubectl run -i --tty busybox --image=busybox --restart=Never -- ping 10.200.2.2
If you don't see a command prompt, try pressing enter.
64 bytes from 10.200.2.2: seq=1 ttl=62 time=1.204 ms
64 bytes from 10.200.2.2: seq=2 ttl=62 time=1.363 ms
64 bytes from 10.200.2.2: seq=3 ttl=62 time=1.240 ms
64 bytes from 10.200.2.2: seq=4 ttl=62 time=1.241 ms

[root@vxlan-k8s ~]# kubectl delete pod busybox
pod "busybox" deleted
[root@vxlan-k8s ~]# kubectl run -i --tty busybox --image=busybox --restart=Never -- ping 10.200.1.2
If you don't see a command prompt, try pressing enter.
64 bytes from 10.200.1.2: seq=1 ttl=64 time=0.072 ms
64 bytes from 10.200.1.2: seq=2 ttl=64 time=0.118 ms
64 bytes from 10.200.1.2: seq=3 ttl=64 time=0.082 ms
64 bytes from 10.200.1.2: seq=4 ttl=64 time=0.084 ms
^C
--- 10.200.1.2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.072/0.127/0.282 ms
[root@vxlan-k8s ~]# kubectl delete pod busybox
pod "busybox" deleted
[root@vxlan-k8s ~]#  curl 10.200.2.2:9376
hostnames-75ccd46585-kpcjl

Add BigIP to the k8s Cluster

Install K8s, BigIP, Gateway API Test Environment (2)
