
K8s Cluster Installation (Part 2)

Prerequisites for enabling IPVS in kube-proxy

modprobe br_netfilter

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

 

[This step fails here with: modprobe: FATAL: Module nf_conntrack_ipv4 not found

Linux kernel 4.19 renamed nf_conntrack_ipv4 to nf_conntrack; only kube-proxy versions below 1.13 hard-depend on nf_conntrack_ipv4. On newer kernels, load nf_conntrack instead:]
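A quick way to check which module name your machine needs (a small sketch, not part of the original steps):

# Kernels 4.19 and later ship nf_conntrack; older kernels ship nf_conntrack_ipv4
uname -r
# See which of the two modules is actually available on this machine:
modinfo nf_conntrack_ipv4 >/dev/null 2>&1 && echo "use nf_conntrack_ipv4" || echo "use nf_conntrack"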

modprobe br_netfilter

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
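The IPVS userspace tools are not installed by the steps above; they are commonly added so the IPVS rules kube-proxy programs can be inspected (an optional extra, assuming the standard CentOS repos):

yum install -y ipset ipvsadm
# Once the cluster is running, list the IPVS virtual servers kube-proxy has created:
ipvsadm -Ln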

Install Docker

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager \
  --add-repo \
  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum update -y && yum install -y docker-ce
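If you prefer to pin a specific Docker release instead of taking the latest (an optional variation on the step above; the version string below is only illustrative):

yum list docker-ce --showduplicates | sort -r
# Then install a chosen version, e.g.:
# yum install -y docker-ce-18.09.9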

## Create the /etc/docker directory
mkdir /etc/docker

# Configure the daemon
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart the Docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
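A quick check that Docker actually picked up the systemd cgroup driver from daemon.json (not in the original steps, but the cgroup driver must match the kubelet's):

docker info 2>/dev/null | grep -i 'cgroup driver'
# Expected output: Cgroup Driver: systemd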

 

Install kubeadm (master and worker nodes)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1

systemctl enable kubelet.service
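To confirm the pinned 1.15.1 packages were installed correctly (a simple sanity check, not in the original):

kubeadm version -o short   # should print v1.15.1
kubelet --version          # should print Kubernetes v1.15.1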

 

Initialize the master node

kubeadm config print init-defaults > kubeadm-config.yaml    (prints kubeadm's default init configuration into a YAML file; edit the fields shown below, and append the KubeProxyConfiguration block at the end of the file to switch the proxy to the IPVS scheduling mode)

localAPIEndpoint:
  advertiseAddress: 192.168.66.10
kubernetesVersion: v1.15.1
networking:
  podSubnet: "10.244.0.0/16"    # must be this subnet: the flannel overlay network installed later uses it by default; if they differ, flannel's config file has to be changed afterwards
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
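With the config saved, the control-plane images can be pre-pulled so the init step itself is faster (optional; the init log below also points this out):

kubeadm config images pull --config kubeadm-config.yaml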

kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log    (--config initializes the install from the YAML file; --experimental-upload-certs automatically distributes certificates and only exists from v1.13 on; tee writes all output to the log file)

[Annotated init log:

[init] Using Kubernetes version: v1.15.1

[preflight] Running pre-flight checks

Checks the runtime environment.

[preflight] Pulling images required for setting up a Kubernetes cluster

Pulls the images the Kubernetes cluster needs.

[preflight] This might take a minute or two, depending on the speed of your internet connection

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

Path of the kubelet environment-variable file.

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

Path of the kubelet configuration file.

[kubelet-start] Activating the kubelet service

[certs] Using certificateDir folder "/etc/kubernetes/pki"

All of Kubernetes' certificates are stored here. Kubernetes is built as a client-server system over HTTP, and for security all control-plane communication uses mutually authenticated HTTPS, so a large set of CA certificates and private/public key pairs is required.

[certs] Generating "ca" certificate and key

[certs] Generating "apiserver-kubelet-client" certificate and key

[certs] Generating "apiserver" certificate and key

Sets the DNS names and the current default domain for the apiserver certificate.

[certs] Generating "etcd/ca" certificate and key

[certs] Generating "apiserver-etcd-client" certificate and key

[certs] Generating "etcd/healthcheck-client" certificate and key

[certs] Generating "etcd/server" certificate and key

[certs] etcd/server serving cert is signed for DNS names [master-1 localhost] and IPs [192.168.66.10 127.0.0.1 ::1]

[certs] Generating "etcd/peer" certificate and key

[certs] etcd/peer serving cert is signed for DNS names [master-1 localhost] and IPs [192.168.66.10 127.0.0.1 ::1]

[certs] Generating "front-proxy-ca" certificate and key

[certs] Generating "front-proxy-client" certificate and key

[certs] Generating "sa" key and public key

Generates the signing key pair.

 

Your Kubernetes control-plane has initialized successfully!

 

To start using your cluster, you need to run the following as a regular user:

 

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

 

Then you can join any number of worker nodes by running the following on each as root:

 

kubeadm join 192.168.66.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:70492f95370df7e7d8cd1822894b739aad799a566c2a97d8fffb3fe9693cf0f4

 

mkdir -p $HOME/.kube    (this directory holds the connection configuration; kubectl talks to the kube-apiserver over HTTPS, so the cache and the authentication file "config" are kept here)
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config    (give ownership of the file to the current user)
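With the kubeconfig in place, you can confirm both that kubectl reaches the API server and that kube-proxy really runs in IPVS mode (a hedged check; the /proxyMode endpoint on kube-proxy's metrics port 10249 is upstream kube-proxy behavior, assumed here):

kubectl cluster-info
curl -s 127.0.0.1:10249/proxyMode   # should print: ipvs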

[root@master-1 ~]# kubectl get node

NAME       STATUS     ROLES    AGE   VERSION

master-1   NotReady   master   17m   v1.15.1

The node is in NotReady state because Kubernetes needs a flat pod network, and flannel has not been deployed yet.

 

mkdir -p /root/k8s-install/core/plugin/flannel

cd  /root/k8s-install/core/plugin/flannel

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
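After applying the manifest, you can watch flannel come up and the node flip to Ready (a hedged check; the app=flannel label is taken from the upstream kube-flannel.yml):

kubectl get pods -n kube-system -l app=flannel   # wait for the kube-flannel pods to be Running
kubectl get node -w                              # master-1 should go from NotReady to Ready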

Add the worker nodes

kubeadm join 192.168.66.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:70492f95370df7e7d8cd1822894b739aad799a566c2a97d8fffb3fe9693cf0f4
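The default bootstrap token expires after 24 hours; if the join command above no longer works, a fresh one can be generated on the master, and the result verified there (a standard kubeadm workflow, not shown in the original):

kubeadm token create --print-join-command   # run on the master to get a fresh join command
kubectl get nodes                           # new workers appear, Ready once flannel pods run on them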
