
Deploying a Simple Kubernetes 1.14.0 Cluster on CentOS 7.6

  Kubernetes 1.14.0, the first release of 2019, has been out for nearly two weeks, so here we deploy a simple cluster with one master and two worker nodes to try out the new features.

Deployment Environment

  • For convenience, every step in this walkthrough is run as the root user. Most steps are executed on the master node; steps that must be run on every node are noted explicitly
  • Basic configuration of the test environment:

    • All nodes run CentOS 7.6 with no swap partition
Host       IP             Hostname             Hardware env    Software
master01   172.22.35.15   master.s4lm0x.com    4 Core, 8 GB    docker, kubelet, kubeadm, kubectl
node01     172.22.35.16   node01.s4lm0x.com    4 Core, 4 GB    docker, kubelet, kubeadm
node02     172.22.35.17   node02.s4lm0x.com    4 Core, 4 GB    docker, kubelet, kubeadm
  • Disable unnecessary services (run on all nodes)

    systemctl disable --now firewalld
    setenforce 0
    sed -i 's@SELINUX=enforcing@SELINUX=disabled@' /etc/selinux/config
    systemctl disable --now NetworkManager
    systemctl disable --now dnsmasq
  • Set up public-key authentication from the master to the two workers to make file copying easier

    ssh-keygen -t rsa
    ssh-copy-id -i .ssh/id_rsa.pub node01
    ssh-copy-id -i .ssh/id_rsa.pub node02

Yum Repository Configuration

curl https://mirrors.aliyun.com/repo/epel-7.repo -o /etc/yum.repos.d/epel-7.repo
curl https://mirrors.aliyun.com/repo/Centos-7.repo -o /etc/yum.repos.d/Centos-7.repo

cat << EOF | tee /etc/yum.repos.d/Kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

cat << 'EOF' | tee /etc/yum.repos.d/docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
EOF

cat << EOF | tee /etc/yum.repos.d/crio.repo
[crio-311-candidate]
name=added from: https://cbs.centos.org/repos/paas7-crio-311-candidate/x86_64/os/
baseurl=https://cbs.centos.org/repos/paas7-crio-311-candidate/x86_64/os/
enabled=1
gpgcheck=0
EOF

ssh node01 'rm -f /etc/yum.repos.d/*'
ssh node02 'rm -f /etc/yum.repos.d/*'
scp /etc/yum.repos.d/*.repo node01:/etc/yum.repos.d/
scp /etc/yum.repos.d/*.repo node02:/etc/yum.repos.d/
yum clean all; ssh node01 'yum clean all'; ssh node02 'yum clean all'
yum repolist; ssh node01 'yum repolist'; ssh node02 'yum repolist'

Time Synchronization

  • Install chrony on all three nodes; the master node serves as the NTP source for the workers (a restart-and-verify sketch follows these commands)
yum install chrony -y
sed -i 's@server 0.centos.pool.ntp.org iburst@server 172.22.35.15 iburst@' /etc/chrony.conf
sed -i -r '/server [0-9].centos.pool.ntp.org iburst/d' /etc/chrony.conf
scp /etc/chrony.conf node02:/etc/
scp /etc/chrony.conf node01:/etc/
sed -i 's@#local stratum 10@local stratum 10@' /etc/chrony.conf
sed -i 's@#allow 192.168.0.0/16@allow 172.22.35.0/24@' /etc/chrony.conf
systemctl restart chronyd
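  • The workers still need chronyd restarted to pick up the copied configuration. A small verification sketch (not in the original steps); on the workers, chronyc sources should list 172.22.35.15 as the time source
ssh node01 'systemctl restart chronyd'
ssh node02 'systemctl restart chronyd'
chronyc sources -v
ssh node01 'chronyc sources -v'
ssh node02 'chronyc sources -v'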

Hostname Resolution

  • Since there are only three virtual machines, /etc/hosts entries are enough for name resolution; add the following on every node (a distribution sketch follows the entries)
172.22.35.15    master.s4lm0x.com    master
172.22.35.16    node01.s4lm0x.com    node01
172.22.35.17    node02.s4lm0x.com    node02
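  • One way to append these entries and push the file to the workers, reusing the scp pattern from above (a sketch, not part of the original write-up):
cat >> /etc/hosts << EOF
172.22.35.15    master.s4lm0x.com    master
172.22.35.16    node01.s4lm0x.com    node01
172.22.35.17    node02.s4lm0x.com    node02
EOF
scp /etc/hosts node01:/etc/
scp /etc/hosts node02:/etc/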

Install Required Packages

  • Run on all nodes
yum install wget git jq psmisc socat -y
yum install -y yum-utils device-mapper-persistent-data lvm2 cri-o
yum install ipvsadm ipset sysstat conntrack libseccomp -y
yum update -y --exclude=kernel*
  • Upgrade the kernel and make the new kernel the default at boot (a quick verification follows the reboot)
export Kernel_Version=5.0.5-1
wget http://mirror.rc.usf.edu/compute_lock/elrepo/kernel/el7/x86_64/RPMS/kernel-ml{,-devel}-${Kernel_Version}.el7.elrepo.x86_64.rpm
yum localinstall -y kernel-ml*
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --default-kernel
reboot
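  • After the reboot, a quick check (not in the original steps) that the machine actually booted the new kernel; with the 5.0.5-1 ELRepo build installed above it should print 5.0.5-1.el7.elrepo.x86_64
uname -r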
  • Enable IPVS
cat << EOF | tee /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
    /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \${kernel_module}
    fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

Set the system parameters in /etc/sysctl.d/k8s.conf on all machines

cat << EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
net.ipv4.ip_forward = 1
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.netfilter.nf_conntrack_max = 2310720
fs.inotify.max_user_watches = 89100
fs.may_detach_mounts = 1
fs.file-max = 52706963
fs.nr_open = 52706963
net.bridge.bridge-nf-call-arptables = 1
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom=0
net.ipv4.tcp_fastopen = 3
EOF

scp /etc/sysctl.d/k8s.conf node01:/etc/sysctl.d/
scp /etc/sysctl.d/k8s.conf node02:/etc/sysctl.d/

sysctl --system
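The net.bridge.bridge-nf-call-* keys above are only available once the br_netfilter kernel module is loaded; without it, sysctl --system reports them as unknown keys. A minimal sketch for loading the module now and at every boot (run on all nodes; the file name under /etc/modules-load.d/ is arbitrary):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
ssh node01 'modprobe br_netfilter; echo br_netfilter > /etc/modules-load.d/br_netfilter.conf'
ssh node02 'modprobe br_netfilter; echo br_netfilter > /etc/modules-load.d/br_netfilter.conf'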

Install docker, kubelet, kubeadm, kubectl

  • On the master node install docker, kubelet, kubeadm and kubectl
  • On the worker nodes install docker, kubelet and kubeadm

    curl -fsSL "https://get.docker.com/" | bash -s -- --mirror Aliyun
  • To pull the Kubernetes system component images from the default k8s.gcr.io registry, configure Environment variables in the docker unit file (/usr/lib/systemd/system/docker.service): define a reachable HTTPS_PROXY in the [Service] section, in the following format

    Environment="HTTPS_PROXY=PROTOCOL://HOST:PORT"
    Environment="NO_PROXY=172.29.0.0/16,127.0.0.0/8"
  • Since version 1.13, docker automatically sets the default policy of the iptables FORWARD chain to DROP, which can break the packet forwarding that a Kubernetes cluster relies on. After the docker service starts, the FORWARD chain's default policy must be reset to ACCEPT. This also requires editing the docker unit file: add the following line after the line that starts with ExecStart=/usr/bin/dockerd -H

    ExecStartPost=/sbin/iptables -P FORWARD ACCEPT
  • After the configuration is complete, reload systemd and restart docker (a drop-in alternative to editing the unit file directly is sketched after this list)

    systemctl daemon-reload
    systemctl restart docker
    systemctl enable docker
yum install -y kubelet kubeadm kubectl 
systemctl enable kubelet
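  • As an alternative to editing /usr/lib/systemd/system/docker.service in place, the same proxy and FORWARD-policy settings can be kept in a systemd drop-in, which survives docker package upgrades. A sketch, with a hypothetical drop-in name k8s.conf and the proxy left as a placeholder (omit the proxy lines if k8s.gcr.io is reachable directly):
mkdir -p /etc/systemd/system/docker.service.d
cat << 'EOF' | tee /etc/systemd/system/docker.service.d/k8s.conf
[Service]
Environment="HTTPS_PROXY=PROTOCOL://HOST:PORT"
Environment="NO_PROXY=172.29.0.0/16,127.0.0.0/8"
ExecStartPost=/sbin/iptables -P FORWARD ACCEPT
EOF
systemctl daemon-reload
systemctl restart docker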

Initialization

master

  • There are two ways to initialize the cluster: pass the key deployment settings as command-line options, or use a dedicated YAML configuration file; the latter lets you customize every deployment parameter

    • Passing the key deployment settings as command-line options
    kubeadm init --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=SystemVerification --kubernetes-version=v1.14.0 --apiserver-advertise-address=172.22.35.15
    • Initializing with a YAML configuration file
mkdir ~/manifest
cd ~/manifest
cat << EOF | tee kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
controlPlaneEndpoint: "172.22.35.15:6443"
imageRepository: k8s.gcr.io
dns:
  type: CoreDNS
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
---
# In the v1beta1 kubeadm API, kube-proxy and kubelet settings are separate documents
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  syncPeriod: 30s
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
failSwapOn: false
resolvConf: /etc/resolv.conf
staticPodPath: /etc/kubernetes/manifests
EOF

kubeadm config images pull --config kubeadm-init.yaml

kubeadm init --config=kubeadm-init.yaml
  • Once initialization completes, a success message like the following is printed

    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 172.22.35.15:6443 --token 0jisgi.qjkeugeuokb5hcte \
        --discovery-token-ca-cert-hash sha256:d26d2f74d2d23d80b2c8e3b4a6998dc937aac68c7a345126d3185649fa6a47b6
  • Follow the prompts above, and copy the kubeadm join command to each worker node and run it there so it joins the cluster
  • If the join command for the worker nodes is lost, it can be regenerated with the following command
kubeadm token create --print-join-command
  • Copy the kubeconfig file that authenticates as the Kubernetes cluster administrator into root's home directory

    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

Network Plugin

  • Deploy the flannel network add-on and enable DirectRouting (a pod-level check is sketched after the commands)
curl -O https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
sed -i '/Backend/a\        "DirectRouting": true,' kube-flannel.yml
kubectl apply -f kube-flannel.yml
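  • Before checking the node, a quick look (not in the original steps) at whether the flannel and CoreDNS pods reach Running
kubectl get pods -n kube-system -o wide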

Verify that the master node is ready

kubectl get nodes

worker

  • Run the following command on each of the two worker nodes to join the cluster
kubeadm join 172.22.35.15:6443 --token 783bde.3f89s0fje9f38fhf \
    --discovery-token-ca-cert-hash sha256:8222c9e8186837c31148756e7fe94f97af0971dc331fce7af87663ff0ad73462
  • The Kubernetes cluster with one master and two workers is now deployed and can be put through a quick test, for example deploying an nginx and accessing it from outside the cluster (see the sketch after these commands)

    kubectl create deployment nginx --image=nginx:1.14-alpine
    kubectl create service nodeport nginx --tcp=80:80
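  • To reach the nginx from outside the cluster, look up the NodePort the service was given and curl any node's IP on that port (a sketch; replace <NodePort> with the port kubectl reports)

    kubectl get service nginx
    curl http://172.22.35.15:<NodePort>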