Kubernetes 1.11.3, container orchestration tool

@[TOC]

Initialization

Set the hostname

hostnamectl  set-hostname docker01

Configure DNS

echo "nameserver 114.114.114.114" >> /etc/resolv.conf

Disable the firewall

systemctl disable firewalld && systemctl stop firewalld && systemctl status firewalld

Disable swap

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
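
You can confirm that swap is really off, for example (swapon -s prints nothing when no swap is active):

swapon -s
free -h | grep -i swap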

Disable SELinux

[ -f /etc/selinux/config ] && sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
[ -f /etc/sysconfig/selinux ] && sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/sysconfig/selinux
[ -x /usr/sbin/setenforce ] && /usr/sbin/setenforce 0
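
You can confirm the result with getenforce; it should report Permissive right after setenforce 0, and Disabled after a reboot with SELINUX=disabled:

getenforce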

Tune kernel parameters

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF

sysctl -p /etc/sysctl.d/k8s.conf

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables

swapoff -a
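
On some CentOS 7 hosts, sysctl reports that the net.bridge.* keys do not exist until the br_netfilter module has been loaded. If that happens, load the module and re-apply the settings (a small sketch, assuming systemd's modules-load.d is available):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl -p /etc/sysctl.d/k8s.conf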

Install the Kubernetes RPMs

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Download and install the specified version (download the RPMs so they can also be used offline):
mkdir k8srpm && cd k8srpm
yum install -y yum-utils
yumdownloader kubectl-1.11.3 kubelet-1.11.3 kubeadm-1.11.3 kubernetes-cni

yum install ./*.rpm -y

systemctl enable kubelet.service
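
You can confirm that the installed versions are what you expect, for example:

kubeadm version
kubelet --version
kubectl version --client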

About the kubelet cgroup driver

In this version, when using Docker, kubeadm automatically detects the cgroup driver for the kubelet and writes it to /var/lib/kubelet/kubeadm-flags.env at runtime.

So there is no need to configure the kubelet cgroup driver manually.
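
If you want to see which cgroup driver Docker itself is using (the value kubeadm will detect), one way is:

docker info 2>/dev/null | grep -i 'cgroup driver'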

Prepare the images

docker pull k8s.gcr.io/kube-apiserver-amd64:v1.11.3
docker pull k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
docker pull k8s.gcr.io/kube-proxy-amd64:v1.11.3
docker pull k8s.gcr.io/kube-scheduler-amd64:v1.11.3
docker pull k8s.gcr.io/pause:3.1
docker pull k8s.gcr.io/coredns:1.1.3
docker pull quay.io/calico/node:v3.1.3
docker pull quay.io/calico/cni:v3.1.3
docker pull k8s.gcr.io/etcd-amd64:3.2.18
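
Before running kubeadm init you can confirm that every image listed above is present locally; a small sketch:

for img in \
    k8s.gcr.io/kube-apiserver-amd64:v1.11.3 \
    k8s.gcr.io/kube-controller-manager-amd64:v1.11.3 \
    k8s.gcr.io/kube-proxy-amd64:v1.11.3 \
    k8s.gcr.io/kube-scheduler-amd64:v1.11.3 \
    k8s.gcr.io/pause:3.1 \
    k8s.gcr.io/coredns:1.1.3 \
    k8s.gcr.io/etcd-amd64:3.2.18 \
    quay.io/calico/node:v3.1.3 \
    quay.io/calico/cni:v3.1.3; do
    # report any image that has not been pulled yet
    docker image inspect "$img" >/dev/null 2>&1 || echo "missing: $img"
done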

Running kube-proxy in IPVS mode

  1. For Kubernetes v1.10 and later, the SupportIPVSProxyMode feature gate defaults to true. For versions before v1.10, you need to pass --feature-gates=SupportIPVSProxyMode=true.
  2. Set proxy-mode=ipvs.
  3. Install the required kernel modules and packages (a note on keeping the modules loaded across reboots follows this list):
    # install
    yum install ipvsadm -y
    # load the modules
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4
    # check
    cut -f1 -d " " /proc/modules | grep -e ip_vs -e nf_conntrack_ipv4
    # expected output
    ip_vs_sh
    ip_vs_wrr
    ip_vs_rr
    nf_conntrack_ipv4
    ip_vs
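
modprobe only loads the modules for the current boot. To have them loaded again after a reboot, one option (a sketch, assuming systemd's modules-load.d) is:

cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF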

More on IPVS

kubernetes系列之六:安装kubernets v1.10.0和ipvs mode kube-proxy

Kubeadm init

Using kubeadm init with a configuration file

kubeadm init can be driven by a configuration file instead of command-line flags, and some of the more advanced features may only be available as configuration file options. The file is passed with the --config option.

In Kubernetes 1.11 and later, the default configuration can be printed with the kubeadm config print-default command. If you have an old v1alpha1 configuration, it is recommended to migrate it to v1alpha2 with the kubeadm config migrate command, because v1alpha1 will be removed in Kubernetes 1.12.
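
For example, to dump the defaults into a file and, if needed, migrate an old v1alpha1 file (file names here are just placeholders):

kubeadm config print-default > kubeadm-init.conf
kubeadm config migrate --old-config old-v1alpha1.yaml --new-config kubeadm-init.conf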

# cat kubeadm-init.conf
apiVersion: kubeadm.k8s.io/v1alpha2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
imageRepository: k8s.gcr.io
kind: MasterConfiguration
api:
  advertiseAddress: 192.168.1.241
  bindPort: 6443
  controlPlaneEndpoint: ""
kubeProxy:
  config:
    mode: "ipvs"
kubeletConfiguration:
  baseConfig:
    clusterDNS:
    - 10.96.0.10
    clusterDomain: cluster.local
kubernetesVersion: v1.11.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.42.0.0/16"
  serviceSubnet: 10.96.0.0/12

No etcd section is configured here; kubeadm init installs etcd automatically.
By default, kubeadm runs a single-member etcd cluster in a static pod managed by the kubelet on the master node. This is not a high-availability setup: the etcd cluster contains only one member and cannot survive that member becoming unavailable.

The two main changes from the defaults above are:

  1. enable ipvs mode for kubeProxy
  2. set the pod subnet to 10.42.0.0/16
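
With this configuration file in place, you can also pre-pull the control-plane images before running init (the same command the init output mentions below):

kubeadm config images pull --config kubeadm-init.conf
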
# kubeadm init --config kubeadm-init.conf  
[init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
I0920 15:57:53.117081 30624 kernel_validator.go:81] Validating kernel version
I0920 15:57:53.117279 30624 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.241]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.1.241 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 49.001944 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node localhost.localdomain as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node localhost.localdomain as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation
[bootstraptoken] using token: wadx1n.9gfusllys13qk1yj
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.1.241:6443 --token se4mgd.ukmg2hxxg42gt65t --discovery-token-ca-cert-hash sha256:0a737675e1e37aa4025077b27ced8053fe84c363df11c506bfb512b88408697e

Initialization complete.

Tearing down a kubeadm install

To undo what kubeadm did, you should first drain the node and make sure the node is empty before shutting it down.

Talking to the master with the appropriate credentials, run:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

Then, on the node being removed, reset all state installed by kubeadm:

kubeadm reset
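
Note that kubeadm reset does not flush iptables or IPVS rules. If you want a completely clean slate, you can clear them manually as well (be careful on a host that carries other firewall rules):

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear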

If you wish to start over, simply run kubeadm init or kubeadm join again with the appropriate arguments.

Verifying the cluster

  1. As prompted by the init output, copy the admin kubeconfig:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # kubectl -n kube-system get po -o wide
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
    coredns-78fcdf6894-2p6gp 0/1 Pending 0 2m <none> <none> <none>
    coredns-78fcdf6894-wnz4p 0/1 Pending 0 2m <none> <none> <none>
    etcd-test01 1/1 Running 0 1m 192.168.1.241 test01 <none>
    kube-apiserver-test01 1/1 Running 0 1m 192.168.1.241 test01 <none>
    kube-controller-manager-test01 1/1 Running 0 1m 192.168.1.241 test01 <none>
    kube-proxy-8wjp8 1/1 Running 0 2m 192.168.1.241 test01 <none>
    kube-scheduler-test01 1/1 Running 0 1m 192.168.1.241 test01 <none>

Because no pod network has been deployed yet, coredns stays in the Pending state.

  2. Verify that the kubelet cgroup-driver matches Docker's cgroup-driver:

    # cat /var/lib/kubelet/kubeadm-flags.env
    KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni
  3. Verify that kube-proxy is running in IPVS mode:

# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 192.168.1.241:6443 Masq 1 0 0
TCP 10.96.0.10:53 rr
UDP 10.96.0.10:53 rr
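
At this point the node itself will typically still show NotReady, because no pod network has been deployed yet; you can check with:

kubectl get nodes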

Installing the Calico network

Install a pod network add-on

For more on using Calico, see the Calico on Kubernetes quickstart, Installing Calico for policy and networking, and other related resources.

Calico's default pod network is 192.168.0.0/16, but the kubeadm init above specified networking.podSubnet: "10.42.0.0/16",
so calico.yaml needs to be edited accordingly.

# apply the RBAC policy
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
# change CALICO_IPV4POOL_CIDR
wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
sed -i 's/192.168.0.0/10.42.0.0/g' calico.yaml
grep 0.0 calico.yaml

kubectl apply -f calico.yaml
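
After applying the manifest, you can watch the Calico pods start, for example (label name assumed from the upstream manifest):

kubectl -n kube-system get pods -l k8s-app=calico-node -w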

Check the pod status

# kubectl  -n kube-system get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
calico-node-sfxvc 2/2 Running 0 32s 192.168.1.241 test01 <none>
coredns-78fcdf6894-2p6gp 1/1 Running 0 28m 10.42.0.2 test01 <none>
coredns-78fcdf6894-wnz4p 1/1 Running 0 28m 10.42.0.3 test01 <none>
etcd-test01 1/1 Running 0 27m 192.168.1.241 test01 <none>
kube-apiserver-test01 1/1 Running 0 27m 192.168.1.241 test01 <none>
kube-controller-manager-test01 1/1 Running 0 27m 192.168.1.241 test01 <none>
kube-proxy-8wjp8 1/1 Running 0 28m 192.168.1.241 test01 <none>
kube-scheduler-test01 1/1 Running 0 27m 192.168.1.241 test01 <none>

A calico-node pod has been added, and coredns is now running.

Check the IPVS table:

# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 192.168.1.241:6443 Masq 1 4 0
TCP 10.96.0.10:53 rr
-> 10.42.0.2:53 Masq 1 0 0
-> 10.42.0.3:53 Masq 1 0 0
TCP 10.99.127.226:5473 rr
UDP 10.96.0.10:53 rr
-> 10.42.0.2:53 Masq 1 0 0
-> 10.42.0.3:53 Masq 1 0 0

Removing the scheduling restriction on the master

By default, for security reasons, your cluster will not schedule pods on the master. If you want to be able to schedule pods on the master, for example for a single-machine Kubernetes cluster used for development, run:

# kubectl taint nodes --all node-role.kubernetes.io/master-
node/test01 untainted

This removes the node-role.kubernetes.io/master taint from any node that has it, including the master node, which means the scheduler will then be able to schedule pods anywhere.
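
If you later want to restore the default behaviour and keep ordinary workloads off the master again, you can re-add the taint (node name is just an example):

kubectl taint nodes test01 node-role.kubernetes.io/master=:NoSchedule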

Deploying a test service

cloudnativelabs/whats-my-ip is a service that returns the hostname and IP of the pod that handled each request.

kubectl run myip --image=cloudnativelabs/whats-my-ip --replicas=3 --port=8080

Create a Service of type NodePort:

kubectl expose deployment myip --port=8080 --target-port=8080 --type=NodePort

Start a container with curl in it for testing:

kubectl run tools --image=byrnedo/alpine-curl --replicas=1 --command -- sleep 99999999

Check the pod and service status:

# kubectl  get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
myip-5fc5cf6476-cfwjn 1/1 Running 0 15m 10.42.0.7 test01 <none>
myip-5fc5cf6476-g55f7 1/1 Running 0 15m 10.42.0.8 test01 <none>
myip-5fc5cf6476-wvxvt 1/1 Running 0 15m 10.42.0.6 test01 <none>
tools-7c7dcbc894-74m6z 1/1 Running 0 1m 10.42.0.10 test01 <none>

[root@test01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h
myip NodePort 10.101.143.54 <none> 8080:30319/TCP 3m
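
Because the service is of type NodePort, it can also be reached from outside the cluster on any node IP at the allocated port (30319 in the output above), for example:

curl http://192.168.1.241:30319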

Test the myip service

# kubectl  exec -it tools-7c7dcbc894-74m6z  /bin/sh
# test that DNS resolution works
/ # nslookup myip
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: myip
Address 1: 10.101.143.54 myip.default.svc.cluster.local
# hit the service ten times in a row
/ # for i in $(seq 1 10);do curl myip:8080 ;done
HOSTNAME:myip-5fc5cf6476-cfwjn IP:10.42.0.7
HOSTNAME:myip-5fc5cf6476-wvxvt IP:10.42.0.6
HOSTNAME:myip-5fc5cf6476-g55f7 IP:10.42.0.8
HOSTNAME:myip-5fc5cf6476-cfwjn IP:10.42.0.7
HOSTNAME:myip-5fc5cf6476-wvxvt IP:10.42.0.6
HOSTNAME:myip-5fc5cf6476-g55f7 IP:10.42.0.8
HOSTNAME:myip-5fc5cf6476-cfwjn IP:10.42.0.7
HOSTNAME:myip-5fc5cf6476-wvxvt IP:10.42.0.6
HOSTNAME:myip-5fc5cf6476-g55f7 IP:10.42.0.8
HOSTNAME:myip-5fc5cf6476-cfwjn IP:10.42.0.7

Hairpin Mode

Many network add-ons do not yet enable hairpin mode, which allows pods to reach themselves via their Service IP. This is an issue related to the CNI plugin. Contact the network add-on provider for the latest status of their hairpin mode support.

As the test above shows, with the Calico network pods can reach themselves through their Service IP, so there is no need to configure hairpin mode.

Adding nodes

Nodes are where your workloads (containers, pods, and so on) run. To add a new node to the cluster, first carry out the preparation steps above on it.

Then run the last command printed by kubeadm init on the master; the node will join the cluster:

kubeadm join 192.168.1.241:6443 --token se4mgd.ukmg2hxxg42gt65t --discovery-token-ca-cert-hash sha256:0a737675e1e37aa4025077b27ced8053fe84c363df11c506bfb512b88408697e

If you no longer have that command, how do you look up the token and discovery-token-ca-cert-hash?

  1. If you do not have a token, list the existing ones:

    # kubeadm token list
    TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
    se4mgd.ukmg2hxxg42gt65t 5h 2018-09-21T17:32:06+08:00 authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
  2. By default, tokens expire after 24 hours. If you are joining a node after the current token has expired, create a new token by running the following on the master node:

# kubeadm token create
ih6qhw.tbkp26l64xivcca7
  3. If you do not have the discovery-token-ca-cert-hash, compute it with the command below.
    The --discovery-token-ca-cert-hash value can be reused with multiple tokens.
# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
> openssl dgst -sha256 -hex | sed 's/^.* //'
0a737675e1e37aa4025077b27ced8053fe84c363df11c506bfb512b88408697e
  4. Alternatively, create a new token and print the full join command in one step:
# kubeadm token create --print-join-command
kubeadm join 192.168.1.241:6443 --token 771jdx.k0jtjvzgfiu8k4uv --discovery-token-ca-cert-hash sha256:0a737675e1e37aa4025077b27ced8053fe84c363df11c506bfb512b88408697e

A few seconds later, you should see this node in the output of kubectl get nodes run on the master.

For more detail, see the kubeadm join reference.