
Docker&K8s---Quickly Deploying K8s with kubeadm

2021-07-07 大聪明


Environment Preparation

Three CentOS 8 virtual machines, minimal install; selecting Network Tools, System Tools, and the standard install is enough.

Minimum configuration: 2 CPUs / 2 GB RAM / 20 GB disk.

IP plan: 192.168.12.10 (master), 192.168.12.11 (node1), 192.168.12.12 (node2)

Environment Initialization

Set the VM network mode to NAT.

On Windows, configure the IPv4 settings of the VMnet8 adapter:

# IP address
192.168.12.1
# Subnet mask
255.255.255.0
# Preferred DNS
192.168.12.254

VMnet8 settings in the VM software:

# Subnet IP
192.168.12.0
# Subnet mask
255.255.255.0
# Gateway
192.168.12.254

CentOS 8: network IP configuration, hostname, yum repos, and common tools

Run on all three machines:

# Disable SELinux
~]# vi /etc/selinux/config
SELINUX=disabled
# Configure the network IP
~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
ONBOOT=yes  # change the existing line
BOOTPROTO=static  # change the existing line
# append the following lines
IPADDR=192.168.12.10   # .11 / .12 on the nodes
NETMASK=255.255.255.0
GATEWAY=192.168.12.254
DNS1=192.168.12.254
~]# systemctl restart NetworkManager
~]# reboot
~]# getenforce
Disabled
~]# ping baidu.com
# Set the hostname (run the matching command on each machine)
~]# hostnamectl set-hostname k8s-master
~]# hostnamectl set-hostname k8s-node01
~]# hostnamectl set-hostname k8s-node02
# Disable the swap partition
~]# swapoff -a  # temporary
~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab  # permanent
# Run on all three machines
~]# cat >> /etc/hosts << EOF
192.168.12.10 k8s-master
192.168.12.11 k8s-node01
192.168.12.12 k8s-node02
EOF
# Reboot
~]# reboot
# Stop the firewall (disable it too so it stays off after reboots)
~]# systemctl stop firewalld
~]# systemctl disable firewalld
~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo
~]# yum clean all
~]# yum makecache
~]# yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils -y
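The /etc/hosts entries above can be generated from a small node table instead of being typed by hand; a sketch (gen_hosts is a hypothetical helper; the names and IPs are the ones used in this guide):

```shell
# gen_hosts: emit "/etc/hosts"-style lines from a "name ip" node list
# (sketch; append the output to /etc/hosts on each machine)
gen_hosts() {
  while read -r name ip; do
    printf '%s %s\n' "$ip" "$name"
  done
}
gen_hosts << 'EOF'
k8s-master 192.168.12.10
k8s-node01 192.168.12.11
k8s-node02 192.168.12.12
EOF
```

Keeping the node table in one place makes it harder for hostnames and IPs to drift apart between machines.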

Installation

Install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install docker-ce -y
Write /etc/docker/daemon.json (note that JSON does not allow comments, so the note on "bip" goes here: the bip must correspond to the pod IPs assigned later, and a good convention is to use the last two octets of the host IP as its middle two octets):

{
    "graph": "/data/docker",
    "storage-driver": "overlay2",
    "insecure-registries": ["registry.access.redhat.com", "quay.io"],
    "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com/"],
    "bip": "172.12.10.1/24",
    "exec-opts": ["native.cgroupdriver=systemd"],
    "live-restore": true
}

systemctl start docker
systemctl enable docker
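The bip convention above can be automated; a sketch (derive_bip is a hypothetical helper that takes a host IP in A.B.C.D form and builds 172.C.D.1/24):

```shell
# derive_bip: build the docker "bip" from the host IP, using the host
# IP's last two octets as the bridge network's middle octets
# (hypothetical helper automating the note above)
derive_bip() {
  local c d
  c=$(echo "$1" | cut -d. -f3)
  d=$(echo "$1" | cut -d. -f4)
  echo "172.${c}.${d}.1/24"
}
derive_bip 192.168.12.10   # → 172.12.10.1/24
```

Deriving the bip this way guarantees each host gets a distinct, non-overlapping docker bridge subnet.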


# Uninstall Docker (kept here for reference)
yum remove docker-ce.x86_64 docker-ce-cli.x86_64 -y
rm -rf /var/lib/docker

Add the Kubernetes yum repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
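Before running yum, the repo file can be sanity-checked for the fields that matter; a sketch (check_repo is a hypothetical helper, shown against a temp copy so it runs anywhere):

```shell
# check_repo: verify a yum repo file has the fields we rely on (sketch)
check_repo() {
  grep -q '^baseurl=https://mirrors.aliyun.com' "$1" && \
  grep -q '^enabled=1' "$1" && echo "repo ok"
}
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
check_repo "$tmp"   # → repo ok
rm -f "$tmp"
```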

Install kubeadm, kubelet, and kubectl

Run on all three machines; version v1.15.0 is pinned here:

yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
systemctl enable kubelet
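After installing you can confirm the pin took effect with `kubelet --version`; a sketch parsing the major.minor out of a version string (the sample output format is assumed):

```shell
# kubelet_minor: extract "major.minor" from a kubelet version string
# (sketch; sample input assumed from "kubelet --version" output)
kubelet_minor() {
  echo "$1" | sed 's/.*v\([0-9]*\.[0-9]*\)\..*/\1/'
}
kubelet_minor "Kubernetes v1.15.0"   # → 1.15
```

The major.minor of kubelet, kubeadm, and kubectl should match the cluster version passed to kubeadm init below.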

Deploy the Kubernetes Master

Run on the master node; the apiserver advertise address must be changed to your own master's IP:

[root@k8s-master ~]# kubeadm init \
--apiserver-advertise-address=192.168.12.10 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.15.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=172.12.0.0/16



To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.12.10:6443 --token p6hvb3.5sln5g4k32wcrvn2 \
    --discovery-token-ca-cert-hash sha256:4d96240030c015b2e146c5ee2e4db4a40b2ff5bd55040b2768388a052d6c3613 

# Run the commands from the hint above
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

If image pulls time out, first pull the images from a domestic registry with docker, then retag them with the expected names:

kubeadm config images list   # list the required image names and tags


# Pull the corresponding versions from a domestic registry
# (the versions below are examples; they must match the output of
# "kubeadm config images list" for your cluster version)
sudo docker pull registry.cn-beijing.aliyuncs.com/imcto/kube-apiserver:v1.13.1
sudo docker pull registry.cn-beijing.aliyuncs.com/imcto/kube-controller-manager:v1.13.1
sudo docker pull registry.cn-beijing.aliyuncs.com/imcto/kube-proxy:v1.13.1
sudo docker pull registry.cn-beijing.aliyuncs.com/imcto/kube-scheduler:v1.13.1
sudo docker pull registry.cn-beijing.aliyuncs.com/imcto/etcd:3.2.24
sudo docker pull registry.cn-beijing.aliyuncs.com/imcto/pause:3.1
sudo docker pull registry.cn-beijing.aliyuncs.com/imcto/coredns:1.2.6


# Retag the images; the tags must match those shown by "kubeadm config images list"
sudo docker tag registry.cn-beijing.aliyuncs.com/imcto/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
sudo docker tag registry.cn-beijing.aliyuncs.com/imcto/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
sudo docker tag registry.cn-beijing.aliyuncs.com/imcto/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
sudo docker tag registry.cn-beijing.aliyuncs.com/imcto/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
sudo docker tag registry.cn-beijing.aliyuncs.com/imcto/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
sudo docker tag registry.cn-beijing.aliyuncs.com/imcto/pause:3.1 k8s.gcr.io/pause:3.1
sudo docker tag registry.cn-beijing.aliyuncs.com/imcto/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
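The pull/retag pairs above can be looped over a single image list; shown here as a dry run (commands are echoed so you can inspect them first; remove the "echo" to execute):

```shell
# Print the pull/retag command pairs for each image (dry run; the
# versions must match "kubeadm config images list" for your cluster)
src=registry.cn-beijing.aliyuncs.com/imcto
dst=k8s.gcr.io
for img in kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 \
           kube-proxy:v1.13.1 kube-scheduler:v1.13.1 \
           etcd:3.2.24 pause:3.1 coredns:1.2.6; do
  echo sudo docker pull "$src/$img"
  echo sudo docker tag "$src/$img" "$dst/$img"
done
```

Keeping one list avoids the pull and tag steps drifting out of sync (as happened with the v1.21.2 pull in earlier drafts of this list).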

Join the Nodes to the Cluster

Run on both worker nodes:

[root@k8s-node01 ~]# kubeadm join 192.168.12.10:6443 --token p6hvb3.5sln5g4k32wcrvn2 \
    --discovery-token-ca-cert-hash sha256:4d96240030c015b2e146c5ee2e4db4a40b2ff5bd55040b2768388a052d6c3613
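Bootstrap tokens expire after 24 hours by default; if the join fails with an invalid token, a fresh join command can be printed on the master with `kubeadm token create --print-join-command`. A sketch extracting the token field from a join line (join_token is a hypothetical helper; the sample line is illustrative):

```shell
# join_token: pull the token out of a "kubeadm join" command line
# (sketch over a sample line; pipe in the real one in practice)
join_token() {
  echo "$1" | sed 's/.*--token \([^ ]*\).*/\1/'
}
join_token "kubeadm join 192.168.12.10:6443 --token p6hvb3.5sln5g4k32wcrvn2 --discovery-token-ca-cert-hash sha256:4d96"
# → p6hvb3.5sln5g4k32wcrvn2
```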

Install the Network Plugin

Install the flannel network plugin:

# Remember to adjust the cidr and backend mode in the manifest:
#  net-conf.json: |
#    {
#      "Network": "172.12.0.0/16",
#      "Backend": {
#        "Type": "host-gw"
#      }
#    }

# GitHub is often unreachable from here (you know why...), so I mirrored the manifest myself
kubectl apply -f http://mirrors.liboer.top/kube-flannel.yaml

Installing flannel kept throwing errors. After a long search and trying all sorts of versions, the cause turned out to be the iptable_nat module on my CentOS install.

Solution:

Run on all three machines:

~]# modinfo iptable_nat
filename:       /lib/modules/4.18.0-305.3.1.el8.x86_64/kernel/net/ipv4/netfilter/iptable_nat.ko.xz
license:        GPL
rhelversion:    8.4
srcversion:     98725EFA1CB8A67AC0BE0BD
depends:        ip_tables,nf_nat
intree:         Y
name:           iptable_nat
vermagic:       4.18.0-305.3.1.el8.x86_64 SMP mod_unload modversions 
sig_id:         PKCS#7
signer:         CentOS kernel signing key
... (signature hash and hex dump trimmed)
~]# insmod /lib/modules/4.18.0-305.3.1.el8.x86_64/kernel/net/ipv4/netfilter/iptable_nat.ko.xz
insmod: ERROR: could not insert module /lib/modules/4.18.0-305.3.1.el8.x86_64/kernel/net/ipv4/netfilter/iptable_nat.ko.xz: Unknown symbol in module
~]# modprobe iptable_nat
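To confirm the module actually loaded, its name can be checked against lsmod output; a sketch (mod_loaded is a hypothetical helper, shown here over a captured sample; pipe real `lsmod` output in practice):

```shell
# mod_loaded: report whether a module name appears in lsmod-style
# input on stdin (sketch; first column of lsmod is the module name)
mod_loaded() {
  awk -v m="$1" '$1==m {found=1} END {print (found ? "loaded" : "missing")}'
}
printf 'iptable_nat 16384 1\nnf_nat 45056 2\n' | mod_loaded iptable_nat   # → loaded
```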

Check the status; everything should be Running. Before the CNI plugin was installed, the nodes were NotReady and the two coredns pods (e.g. coredns-bccdc95cf-cgj2m) were Pending; once flannel is up they turn Ready and Running.

[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-cgj2m              1/1     Running   0          19m
coredns-bccdc95cf-shkmr              1/1     Running   0          19m
etcd-k8s-master                      1/1     Running   0          19m
kube-apiserver-k8s-master            1/1     Running   0          18m
kube-controller-manager-k8s-master   1/1     Running   0          18m
kube-flannel-ds-7dmd6                1/1     Running   0          30s
kube-flannel-ds-gdnbw                1/1     Running   0          30s
kube-flannel-ds-x72ts                1/1     Running   0          30s
kube-proxy-kd79h                     1/1     Running   0          19m
kube-proxy-mh2cn                     1/1     Running   0          18m
kube-proxy-z58qt                     1/1     Running   0          18m
kube-scheduler-k8s-master            1/1     Running   0          18m
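Eyeballing the STATUS column can be scripted; a sketch counting pods that are not yet Running (not_running is a hypothetical helper, shown over a captured sample; in practice pipe `kubectl get pod -n kube-system` into it):

```shell
# not_running: count pods whose STATUS column is not "Running" in
# "kubectl get pod" output (sketch; the header row is skipped via NR>1)
not_running() {
  awk 'NR>1 && $3!="Running" {n++} END {print n+0}'
}
printf 'NAME READY STATUS RESTARTS AGE\ncoredns 1/1 Running 0 19m\nflannel 0/1 Pending 0 30s\n' | not_running   # → 1
```

A result of 0 means the control plane and CNI pods are all up.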

Check:

[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   

[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   55m   v1.15.0
k8s-node01   Ready    <none>   53m   v1.15.0
k8s-node02   Ready    <none>   53m   v1.15.0

Test the Kubernetes Cluster

Create a pod in the cluster, expose a port, and verify it is reachable:

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

[root@k8s-master ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-554b9c67f9-jbch5   1/1     Running   0          2m26s
# If something goes wrong, inspect the pod details or logs
kubectl describe pod nginx-554b9c67f9-jbch5  # details
kubectl logs nginx-554b9c67f9-jbch5 -n namespace  # -n can be omitted for the default namespace

[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

[root@k8s-master ~]# kubectl get pods,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-554b9c67f9-jbch5   1/1     Running   0          14m

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1      <none>        443/TCP        52m
service/nginx        NodePort    10.1.132.56   <none>        80:30824/TCP   9m48s
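The randomly assigned node port can be parsed out of the PORT(S) field instead of read by eye; a sketch (nodeport is a hypothetical helper over the field format shown above):

```shell
# nodeport: extract the node port (the number after the colon) from a
# service's PORT(S) field such as "80:30824/TCP" (sketch)
nodeport() {
  echo "$1" | sed 's/.*:\([0-9]*\)\/.*/\1/'
}
nodeport "80:30824/TCP"   # → 30824
```

In practice you would feed it the PORT(S) column of `kubectl get svc nginx`.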

http://192.168.12.12:30824
# If the browser cannot reach it, run the following on all three machines;
# newer Docker versions changed the default iptables FORWARD policy
iptables -P FORWARD ACCEPT

[root@k8s-master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
nginx-554b9c67f9-jbch5   1/1     Running   0          28m   172.12.1.2   k8s-node02   <none>           <none>
# curl-ing the pod's cluster-internal IP also returns the nginx page
curl 172.12.1.2

Access URL: http://NodeIP:Port; in this example:

Any of http://192.168.12.10:30824, http://192.168.12.11:30824, and http://192.168.12.12:30824 will serve the page.
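Checking all three nodes can be looped; shown as a dry run (commands are echoed so the sketch runs anywhere; remove the "echo" to actually send the requests):

```shell
# Probe the NodePort on every node (dry run; remove "echo" to execute)
port=30824
for node in 192.168.12.10 192.168.12.11 192.168.12.12; do
  echo curl -sI "http://${node}:${port}"
done
```

Every node answering on the same port confirms kube-proxy is forwarding the NodePort cluster-wide.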


Teardown

Hopefully you will never need this.

If things go wrong, you can tear everything down and reinstall from scratch:

kubeadm reset -f
modprobe -r ipip
lsmod
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
yum clean all
yum remove kube*
