This article uses kubeadm to create a single-master Kubernetes (k8s) cluster. Pay attention to the software versions used here: a version mismatch may cause the cluster initialization to fail. Before trying this out, make sure you are already comfortable working with Docker.
Environment: three CentOS 7.x hosts, kernel 3.10.0, Kubernetes 1.18.3.
The IP addresses of the three hosts:
192.168.50.140
192.168.50.141
192.168.50.142
Their hostnames are master, node01, and node02; master acts as the Kubernetes master node, while node01 and node02 are the two worker nodes.
Preparation before deploying the k8s cluster
1. Make sure the server clocks are in sync
Sync the time from a public NTP server:
```shell
ntpdate ntp1.aliyun.com
```
2. Make sure iptables, firewalld, and SELinux are all disabled
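On CentOS 7, one common way to do this (assuming the default configuration files) is the following, run on all three hosts; note that the SELinux change only fully takes effect after a reboot:

```shell
# Stop the firewall now and keep it disabled across reboots
systemctl stop firewalld
systemctl disable firewalld

# Put SELinux into permissive mode for the current boot...
setenforce 0
# ...and disable it permanently (fully applies after a reboot)
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

# Flush any leftover iptables rules
iptables -F
```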
3. Make sure /etc/hosts is correct on all three hosts, i.e.:
```shell
cat /etc/hosts
192.168.50.140 master
192.168.50.141 node01
192.168.50.142 node02
```
4. Set the hostnames of the three hosts to master, node01, and node02
Run the corresponding command on each of the three hosts to change its hostname:
```shell
hostnamectl set-hostname master
hostnamectl set-hostname node01
hostnamectl set-hostname node02
```
5. Make sure master can log in to node01 and node02 directly via SSH keys
On the master server, run the following to set up key-based access:
```shell
ssh-keygen
ssh-copy-id -i .ssh/id_rsa.pub root@node01
ssh-copy-id -i .ssh/id_rsa.pub root@node02
```
With the preparation out of the way, let's install the k8s cluster!
master host configuration (run all commands below on the master host)
1. Install Docker
```shell
yum install yum-utils device-mapper-persistent-data lvm2 -y
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce -y
systemctl enable docker
```
Configure a Docker registry mirror for faster pulls; I use daocloud here as an example:
```shell
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
systemctl restart docker
```
2. Configure the k8s package repository
For certain well-known reasons, I use the Kubernetes mirror provided by Alibaba Cloud here:
```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
3. Copy the Docker and k8s repo files to node01 and node02
```shell
scp /etc/yum.repos.d/kubernetes.repo /etc/yum.repos.d/docker-ce.repo node01:/etc/yum.repos.d/
scp /etc/yum.repos.d/kubernetes.repo /etc/yum.repos.d/docker-ce.repo node02:/etc/yum.repos.d/
```
4. Install the services
Install kubelet, kubeadm, and kubectl:
```shell
yum install kubelet-1.18.3 kubeadm-1.18.3 kubectl-1.18.3 -y
```
5. Configure docker.service
Edit /usr/lib/systemd/system/docker.service and add --exec-opt native.cgroupdriver=systemd to the ExecStart line:
```
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
```
```shell
systemctl daemon-reload
systemctl restart docker
```
6. Enable kubelet at boot
```shell
systemctl enable kubelet
```
7. Allow kubelet to run with swap enabled (skip this step if swap is not in use)
```shell
sed -i 's/KUBELET_EXTRA_ARGS=/KUBELET_EXTRA_ARGS="--fail-swap-on=false"/' /etc/sysconfig/kubelet
```
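If you are unsure whether swap is active on a host, a quick read-only check (no root needed):

```shell
# /proc/swaps lists active swap areas; only the header line means swap is off
cat /proc/swaps
# free shows a Swap line; all zeros likewise means swap is off
free -h
```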
8. Initialize the k8s cluster with kubeadm
```shell
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.18.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
```
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.50.140:6443 --token fzv2cb.cdlgd2b4yivwpued \
    --discovery-token-ca-cert-hash sha256:d537f87c47c9015fea3a708571dbdc5c27d921ef3e826ff67dbc0ed6d49280e4
```
Seeing the output above means initialization succeeded. Make a note of the entire kubeadm join command; you will need it later when node01 and node02 join the cluster.
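If you misplace the join command later (or the bootstrap token expires; by default tokens are valid for 24 hours), kubeadm can mint a new token and print a fresh join command. Run this on master:

```shell
# Creates a new bootstrap token and prints a ready-to-use kubeadm join command
kubeadm token create --print-join-command
```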
9. Run the following commands as the output suggests. A regular user is recommended; I'm using root here, so sudo is omitted:
```shell
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```
10. Check that the components are healthy
Make sure every component reports Healthy before moving on:
```
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
etcd-0               Healthy   {"health":"true"}
controller-manager   Healthy   ok
scheduler            Healthy   ok
```
11. Deploy the flannel network plugin
Applying kube-flannel.yml directly can run into problems (typically image-pull failures) that break the installation, so I first download the file and replace the quay.io domain with Qiniu's mirror inside China:
```shell
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i s#quay.io#quay-mirror.qiniu.com# kube-flannel.yml
kubectl apply -f kube-flannel.yml
```
**Note:** even with the Qiniu mirror, the pull may still fail for one reason or another. Here is an alternative workaround.
First check the image version in your kube-flannel.yml (here quay.io/coreos/flannel:v0.12.0-amd64), then download the matching package from https://github.com/coreos/flannel/releases. Since mine is v0.12.0, I download https://github.com/coreos/flannel/releases/download/v0.12.0/flanneld-v0.12.0-amd64.docker and upload it to the master server.
Import the image with docker load:
```shell
docker load < flanneld-v0.12.0-amd64.docker
kubectl apply -f kube-flannel.yml
```
Restart the services:
```shell
systemctl restart kubelet docker
```
Wait a moment, then check:
```shell
kubectl get pods -n kube-system
```
At this point kubectl get nodes should also show the node as Ready.
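For completeness, the check mentioned above; the master should report Ready once the flannel pods are running:

```shell
kubectl get nodes
```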
12. Check the cluster state in the kube-system namespace
```
kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-6sght         1/1     Running   0          13m
coredns-7ff77c879f-crpt7         1/1     Running   0          13m
etcd-master                      1/1     Running   0          13m
kube-apiserver-master            1/1     Running   0          13m
kube-controller-manager-master   1/1     Running   1          13m
kube-flannel-ds-amd64-94vpq      1/1     Running   0          3m37s
kube-proxy-pvmt2                 1/1     Running   0          13m
kube-scheduler-master            1/1     Running   1          13m
```
The master is now initialized. Next, configure node01 and node02 and join them to the cluster.
node01 and node02 host configuration (run all commands below on both node01 and node02)
1. Install Docker
```shell
yum install yum-utils device-mapper-persistent-data lvm2 -y
yum install docker-ce -y
systemctl enable docker
```
Configure a Docker registry mirror; again using daocloud as an example:
```shell
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
systemctl restart docker
```
2. Install kubelet and kubeadm
kubectl is not strictly needed on worker nodes; install it only if your workflow requires it. I do not install it here.
```shell
yum install kubelet-1.18.3 kubeadm-1.18.3 -y
```
Configure docker.service
Edit /usr/lib/systemd/system/docker.service and add --exec-opt native.cgroupdriver=systemd to the ExecStart line:
```
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
```
```shell
systemctl daemon-reload
systemctl restart docker
```
3. Enable kubelet at boot
```shell
systemctl enable kubelet
```
4. Allow kubelet to run with swap enabled (skip this step if swap is not in use)
```shell
sed -i 's/KUBELET_EXTRA_ARGS=/KUBELET_EXTRA_ARGS="--fail-swap-on=false"/' /etc/sysconfig/kubelet
```
5. Join the k8s cluster
Note: the --ignore-preflight-errors=Swap flag is needed only if swap is in use:
```shell
kubeadm join 192.168.50.140:6443 --token fzv2cb.cdlgd2b4yivwpued --discovery-token-ca-cert-hash sha256:d537f87c47c9015fea3a708571dbdc5c27d921ef3e826ff67dbc0ed6d49280e4 --ignore-preflight-errors=Swap
```
After joining, you should see:
```
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```
After the nodes join successfully, wait patiently for a bit, then run kubectl get nodes on the master host to check node status.
Note: if a node does not show Ready after joining, refer to the alternative kube-flannel.yml workaround above and manually import flanneld-v0.12.0-amd64.docker on that node.
```
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   10m    v1.18.3
node01   Ready    <none>   5m1s   v1.18.3
node02   Ready    <none>   47s    v1.18.3
```