1. k8s production cluster installation
Even though this is for testing and learning, building the cluster the way a production environment would be built is closer to real practice. To save resources, only two hosts are installed: one as the master node and the other as the worker node.
1.1 Installing the Docker environment
1. Install the packages needed to add the GPG key
sudo apt-get update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
2. Download the Docker GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
3. Write the Docker apt source
echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
4. Update the apt sources and install Docker
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io
5. Write the Docker daemon configuration file (this sets the systemd cgroup driver, which kubelet expects)
sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
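A malformed daemon.json will stop the Docker daemon from starting, so it is worth validating the file before the restart in the next step. A minimal sketch using only the Python standard library:

```shell
# Sanity-check daemon.json before restarting Docker: python3 -m json.tool
# exits non-zero with a parse error if the JSON is malformed.
python3 -m json.tool /etc/docker/daemon.json > /dev/null && echo "daemon.json OK"
```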
6. Restart the Docker service
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
1.2 Installing the kubeadm tools
1. Add the Google Cloud signing key
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
2. Write the Kubernetes apt source
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
3. Update the sources and install the required k8s tools
sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl && sudo apt-mark hold kubelet kubeadm kubectl
1.3 Initializing the cluster
1. Disable swap (run on all nodes)
sudo swapoff --all
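swapoff only lasts until the next reboot; to keep swap off permanently, the swap entry in /etc/fstab must be disabled as well. A sketch, assuming a standard fstab layout (back the file up first):

```shell
# Comment out every swap line in /etc/fstab so swap stays off after reboot.
sudo cp /etc/fstab /etc/fstab.bak             # keep a backup
sudo sed -i '/\sswap\s/ s/^#*/#/' /etc/fstab  # idempotent: adds at most one '#'
```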
2. Connectivity to k8s.gcr.io is unreliable, so some of the images required by `kubeadm init` may fail to download; the easiest fix is to put Docker behind a proxy, and remember to remove it when done. (Run on the master node; it is worth configuring the proxy on the worker node as well, since it also needs to pull some Docker images when running apps.)
vim /usr/lib/systemd/system/docker.service
[Service]
Environment="HTTP_PROXY=socks5://192.168.79.1:1081"
Environment="HTTPS_PROXY=socks5://192.168.79.1:1081"
systemctl daemon-reload
systemctl restart docker
3. Initialize the master node (run on the master node). The pod network CIDR must be a network address, hence .0 rather than .1; also note that the Flannel manifest applied below defaults to 10.244.0.0/16, so if you keep a different CIDR here, edit the net-conf.json section of kube-flannel.yml to match.
kubeadm init --pod-network-cidr=10.10.0.0/16
4. Create the kubeconfig directory (run on the master node)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
5. Set the environment variable (run on the master node)
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
6. Create a Pod network. There are many pod network add-ons to choose from; see "K8s Cluster Networking" for the options.
Here we use Flannel to create the Pod network. (Run on the master node.)
proxychains wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sudo kubectl apply -f kube-flannel.yml
Also create the CNI configuration file on the worker node
mkdir -p /etc/cni/net.d
vim /etc/cni/net.d/10-flannel.conflist
Insert the following content
{
  "name": "cbr0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
7. Verify that everything is up (run on the master node)
root@master-node:/home/ubuntu# kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-558bd4d5db-55jz9              1/1     Running   0          14h
kube-system   coredns-558bd4d5db-snvxd              1/1     Running   0          14h
kube-system   etcd-master-node                      1/1     Running   0          14h
kube-system   kube-apiserver-master-node            1/1     Running   0          14h
kube-system   kube-controller-manager-master-node   1/1     Running   0          14h
kube-system   kube-flannel-ds-xm6zs                 1/1     Running   0          77s
kube-system   kube-proxy-whr5s                      1/1     Running   0          14h
kube-system   kube-scheduler-master-node            1/1     Running   0          14h
8. Join the worker node to the cluster (run on the worker node). This command is included in the output of `kubeadm init`.
kubeadm join 192.168.79.129:6443 --token ob7xti.4hnc1o17ie0cfhrc \
    --discovery-token-ca-cert-hash sha256:f5e7863fa86b5645d022f9aff6f388d498827fec1c23c7a6a965898d324bbd37
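Tokens expire after 24 hours by default, so if the join command has been lost or the token has gone stale, a fresh one can be generated on the master. The ca.crt path below is the kubeadm default:

```shell
# Print a complete, fresh join command (token included):
kubeadm token create --print-join-command

# The --discovery-token-ca-cert-hash value can also be recomputed by hand
# from the cluster CA certificate:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```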
9. On the master node, check that the join succeeded
root@master-node:/home/ubuntu# kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
master-node   Ready    control-plane,master   14h   v1.21.3
worker-node   Ready    <none>                 33s   v1.21.3
2. Using k8s
2.1 Deploying an application
1. First build a containerized application, then push its image to a registry or copy it to the worker node. I have not set up a registry here, so I copied the image directly to the worker node.
root@worker-node:~# docker images
REPOSITORY               TAG       IMAGE ID       CREATED        SIZE
testapp                  v1.0      b624c7711edd   4 hours ago    138MB
k8s.gcr.io/kube-proxy    v1.21.3   adb2816ea823   12 days ago    103MB
quay.io/coreos/flannel   v0.14.0   8522d622299c   2 months ago   67.9MB
k8s.gcr.io/pause         3.4.1     0f8457a4c2ec   6 months ago   683kB
2. On the master node, create a configuration file for the Deployment. A Deployment tells the K8s cluster how to create the application: which image to build the application containers from, how many to run, and so on. The file contents are as follows:
apiVersion: apps/v1
kind: Deployment              # object type
metadata:
  name: testapp               # name
  labels:
    app: testapp              # label
spec:
  replicas: 3                 # number of pods to run
  selector:
    matchLabels:
      app: testapp
  template:
    metadata:
      labels:
        app: testapp
    spec:
      containers:             # docker container settings
      - name: testapp
        image: testapp:v1.0   # image to pull; a remote one looks like 192.168.79.129:9898/testapp:v1.0
        imagePullPolicy: IfNotPresent   # when to pull: only if the worker node does not already have it
        ports:
        - containerPort: 80   # port the container exposes
Create the deployment
root@master-node:/home/ubuntu# kubectl create -f createDeployment.conf
deployment.apps/testapp created
The corresponding pods are now running on the worker node.
root@master-node:/home/ubuntu# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
testapp-777fc9989f-4lq28   1/1     Running   0          27s
testapp-777fc9989f-7k6s4   1/1     Running   0          27s
testapp-777fc9989f-vz6rg   1/1     Running   0          27s
To make the app reachable, a corresponding Service must also be created. The Service's job is to tell the cluster how to set up proxying so that external users can reach the pods. First, on the master node, create a configuration file for the Service with the following contents:
apiVersion: v1
kind: Service
metadata:
  name: testapp
  namespace: default
  labels:
    app: testapp
spec:
  type: NodePort      # selects how the pod ports are proxied to the outside
  ports:
  - port: 80
    nodePort: 30080   # port the node opens to the outside
  selector:
    app: testapp
Create the corresponding service
root@master-node:/home/ubuntu# kubectl create -f createService.conf
service/testapp created
root@worker-node:~# netstat -nultp | grep 30080
tcp        0      0 0.0.0.0:30080           0.0.0.0:*               LISTEN      299245/kube-proxy
The internal service can now be reached via port 30080 on the worker node; the kube-proxy process forwards requests, load-balancer style, to the containers running in Docker.
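A quick way to confirm the Service works end to end is to hit the NodePort with an HTTP client from any machine that can reach the worker node. The IP below is an assumption; substitute your worker node's address:

```shell
# Any HTTP client pointed at <worker-ip>:30080 gets a response from one of
# the three testapp pods; kube-proxy picks the backend.
curl -s http://192.168.79.130:30080/
```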
2.2 Scaling an application
1. First check the application's current capacity
root@master-node:/home/ubuntu# kubectl get rs
NAME                 DESIRED   CURRENT   READY   AGE
testapp-777fc9989f   3         3         3       14h
2. There are three pods here; scale up to four with the following command:
root@master-node:/home/ubuntu# kubectl scale deployments/testapp --replicas=4
deployment.apps/testapp scaled
root@master-node:/home/ubuntu# kubectl get rs
NAME                 DESIRED   CURRENT   READY   AGE
testapp-777fc9989f   4         4         4       14h
2.3 Updating an application
Suppose a new version of the app is now available; it can be rolled out as follows.
root@worker-node:~# docker images
REPOSITORY   TAG    IMAGE ID       CREATED          SIZE
testapp      v2.0   e7467644ced2   52 minutes ago   138MB
testapp      v1.0   b624c7711edd   20 hours ago     138MB
root@master-node:/home/ubuntu# kubectl set image deployment/testapp testapp=testapp:v2.0
deployment.apps/testapp image updated
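kubectl set image is the imperative route; the same update can be done declaratively by bumping the image tag in the Deployment manifest and re-applying it, which keeps the file as the source of truth. The fragment below shows the only field that changes, assuming the manifest from section 2.1:

```yaml
    spec:
      containers:
      - name: testapp
        image: testapp:v2.0     # bumped from v1.0
        imagePullPolicy: IfNotPresent
```

Then run `kubectl apply -f createDeployment.conf`. A failed update can be reverted with `kubectl rollout undo deployment/testapp`.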