Kubernetes
This article installs a Kubernetes cluster from binaries, which makes it easier to understand how the core Kubernetes processes coordinate with one another.
On the Master node we deploy the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler service processes; on each Node we deploy docker, kubelet, and kube-proxy.
Binary package download: https://github.com/kubernetes/kubernetes/releases
Environment:
Master: 10.211.55.16
Node: 10.211.55.17, 10.211.55.18
Master
1. etcd service
~]# wget -c https://github.com/coreos/etcd/releases/download/v3.2.9/etcd-v3.2.9-linux-amd64.tar.gz
~]# tar xf etcd-v3.2.9-linux-amd64.tar.gz
~]# cp etcd-v3.2.9-linux-amd64/etcd* /usr/bin/
~]# vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
ExecStart=/usr/bin/etcd --name=kubernetes \
  --data-dir=/var/lib/k8s/ \
  --listen-client-urls=http://10.211.55.16:2379,http://127.0.0.1:2379 \
  --listen-peer-urls=http://10.211.55.16:2380 \
  --advertise-client-urls=http://10.211.55.16:2379 \
  --initial-cluster-token=etcd-cluster \
  --initial-cluster-state=new
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
~]# systemctl daemon-reload
~]# systemctl start etcd
~]# etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://10.211.55.16:2379
cluster is healthy
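Besides the cluster health check, a quick read/write round trip confirms the data path works. A minimal sketch using the etcdctl v2 API (the key name is arbitrary):
~]# etcdctl set /test/hello world
world
~]# etcdctl get /test/hello
world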
2. kube-apiserver service
~]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=/etc/k8s/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify

[Install]
WantedBy=multi-user.target
~]# vim /etc/k8s/apiserver
KUBE_API_ARGS="--storage-backend=etcd3 --etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=10.10.10.0/24 --service-node-port-range=1-65535 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,DefaultStorageClass,ResourceQuota --logtostderr=true --log-dir=/var/log/k8s --v=2"
~]# systemctl daemon-reload
~]# systemctl start kube-apiserver
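Once kube-apiserver is up, its insecure port can be probed directly; a quick sanity check (the /healthz and /version endpoints are standard, the exact version output will vary):
~]# curl http://127.0.0.1:8080/healthz
ok
~]# curl http://127.0.0.1:8080/version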
3. kube-controller-manager service
~]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=-/etc/k8s/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
~]# vim /etc/k8s/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS='--master=http://127.0.0.1:8080 --logtostderr=true --log-dir=/var/log/k8s --v=2'
~]# systemctl daemon-reload
~]# systemctl start kube-controller-manager
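If kube-controller-manager does not stay running, the systemd journal usually shows whether it could reach the API server on 127.0.0.1:8080; for example:
~]# systemctl status kube-controller-manager
~]# journalctl -u kube-controller-manager --no-pager | tail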
4. kube-scheduler service
~]# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=-/etc/k8s/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
~]# vim /etc/k8s/scheduler
KUBE_SCHEDULER_ARGS="--master=http://127.0.0.1:8080 --logtostderr=true --log-dir=/var/log/k8s --v=2"
~]# systemctl daemon-reload
~]# systemctl start kube-scheduler
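With all four Master services running, they can also be enabled at boot, and the control plane checked from the Master itself (here the API server address is passed explicitly, matching the insecure port configured above):
~]# systemctl enable etcd kube-apiserver kube-controller-manager kube-scheduler
~]# kubectl -s http://127.0.0.1:8080 get cs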
Node
1. docker service
~]# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
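After the install script finishes, make sure the Docker daemon is running and enabled at boot, since the kubelet unit below depends on docker.service:
~]# systemctl enable docker
~]# systemctl start docker
~]# docker info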
2. kubelet service
~]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/k8s/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
~]# vim /etc/k8s/kubelet
KUBELET_ARGS="--address=10.211.55.17 --port=10250 --hostname-override=k8s-node1 --allow-privileged=false --kubeconfig=/etc/k8s/kubelet.kubeconfig --cluster-dns=10.211.55.1 --cluster-domain=cluster.local --fail-swap-on=false --logtostderr=true --log-dir=/var/log/kubernetes --v=4"
~]# vim /etc/k8s/kubelet.kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://10.211.55.16:8080
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local
~]# systemctl daemon-reload
~]# systemctl start kubelet
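On k8s-node2 the same files are used with --address=10.211.55.18 and --hostname-override=k8s-node2. Also note that the working, configuration, and log directories referenced above must exist before the kubelet starts; if startup fails, create them and retry (paths follow the configuration above):
~]# mkdir -p /var/lib/kubelet /etc/k8s /var/log/kubernetes
~]# systemctl restart kubelet
~]# systemctl status kubelet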
3. kube-proxy service
~]# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-proxy Server
After=network.service
Requires=network.service

[Service]
EnvironmentFile=/etc/k8s/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
~]# vim /etc/k8s/proxy
KUBE_PROXY_ARGS='--master=http://10.211.55.16:8080 --hostname-override=k8s-node1 --logtostderr=true --log-dir=/var/log/k8s --v=4'
~]# systemctl daemon-reload
~]# systemctl start kube-proxy
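As on the Master, the Node services can be enabled so they come back after a reboot:
~]# systemctl enable kubelet kube-proxy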
Testing
The pause root container image has to be available on each Node in advance. Pulling it straight from gcr.io requires a way around the firewall, so instead we can pull a mirror and retag it:
docker pull cloudnil/pause-amd64:3.0
docker tag cloudnil/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
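It is worth confirming the retag took effect, since the kubelet looks for exactly this image name when it creates the pause container:
~]# docker images gcr.io/google_containers/pause-amd64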
Check:
~]# kubectl get nodes
NAME        STATUS    ROLES     AGE       VERSION
k8s-node1   Ready     <none>    2h        v1.8.11
k8s-node2   Ready     <none>    2h        v1.8.11
~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.10.10.1   <none>        443/TCP   2h
~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
scheduler            Healthy   ok
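With both Nodes Ready and all components healthy, a simple smoke test confirms that pods can actually be scheduled and reached. A sketch (the deployment name and image are arbitrary; on v1.8, kubectl run creates a Deployment):
~]# kubectl run nginx --image=nginx --replicas=2 --port=80
~]# kubectl get pods -o wide
~]# kubectl expose deployment nginx --type=NodePort --port=80
~]# kubectl get svc nginx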