Jusene's Blog

Kubernetes v1.11.0 Binary Manual Installation Guide

2019/10/10

This binary installation guide was compiled from online sources and personal practice, and is offered as reference material for anyone who loves Kubernetes:

Preparation

  1. Synchronize time across all hosts
  2. Set each host's hostname
  3. Bind hostnames to IPs in /etc/hosts
  4. Set up mutual SSH key trust between hosts
  5. Disable the firewall
  6. Disable swap
  7. Disable SELinux
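
The steps above can be scripted roughly as follows (a sketch for CentOS 7; the hostnames and peer IPs are illustrative, substitute your own):

# 1. Time synchronization
yum install -y chrony && systemctl enable --now chronyd
# 2/3. Hostname and /etc/hosts binding (repeat per host)
hostnamectl set-hostname k8s-master
cat >> /etc/hosts << EOF
192.168.14.9 k8s-master
192.168.14.34 k8s-node1
192.168.14.203 k8s-node2
EOF
# 4. SSH mutual trust (run ssh-copy-id for every peer)
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id root@192.168.14.34
# 5. Disable the firewall
systemctl disable --now firewalld
# 6. Disable swap now and across reboots
swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab
# 7. Disable SELinux now and across reboots
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config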

etcd Cluster Installation

etcd server IPs:

  • 192.168.14.203
  • 192.168.14.9
  • 192.168.14.34

Preparing the cfssl Tools

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl*
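
A quick sanity check that the tools are installed and on PATH (not part of the original flow):

cfssl version                      # prints the release version, R1.2 here
which cfssljson cfssl-certinfo     # both should resolve to /usr/local/bin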

Creating the etcd Certificates

~]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "etcd": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}

~]# cat ca-csr.json
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "zhejiang",
      "L": "hangzhou",
      "O": "etcd",
      "OU": "System"
    }
  ]
}
  • Generate the CA certificate
~]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/10/08 14:10:38 [INFO] generating a new CA key and certificate from CSR
2019/10/08 14:10:38 [INFO] generate received request
2019/10/08 14:10:38 [INFO] received CSR
2019/10/08 14:10:38 [INFO] generating key: rsa-2048
2019/10/08 14:10:38 [INFO] encoded CSR
2019/10/08 14:10:38 [INFO] signed certificate with serial number 729202919502466530277041968346292222430741265108

~]# cat etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.14.203",
    "192.168.14.9",
    "192.168.14.34"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "zhejiang",
      "L": "hangzhou",
      "O": "etcd",
      "OU": "System"
    }
  ]
}
  • Generate the etcd certificate
~]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd etcd-csr.json | cfssljson -bare etcd
2019/10/08 14:13:54 [INFO] generate received request
2019/10/08 14:13:54 [INFO] received CSR
2019/10/08 14:13:54 [INFO] generating key: rsa-2048
2019/10/08 14:13:54 [INFO] encoded CSR
2019/10/08 14:13:54 [INFO] signed certificate with serial number 725478311285159828781967151519197153159047282910
2019/10/08 14:13:54 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
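
Before distributing etcd.pem, it can help to confirm that the SAN list and expiry took effect (an optional check, not part of the original flow):

~]# openssl x509 -in etcd.pem -noout -text | grep -A1 'Subject Alternative Name'
~]# cfssl-certinfo -cert etcd.pem    # JSON dump including sans and not_after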

etcd Installation

  • Prepare the etcd TLS certificate files
~]# mkdir -p /etc/etcd/etcdSSL
~]# cp ca.pem etcd.pem etcd-key.pem /etc/etcd/etcdSSL/

Each node in the etcd cluster needs a copy of these files.

  • Install the etcd service
~]# yum install -y etcd
~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/usr/bin/etcd \
  --name ${ETCD_NAME} \
  --cert-file=/etc/etcd/etcdSSL/etcd.pem \
  --key-file=/etc/etcd/etcdSSL/etcd-key.pem \
  --peer-cert-file=/etc/etcd/etcdSSL/etcd.pem \
  --peer-key-file=/etc/etcd/etcdSSL/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/etcdSSL/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/etcdSSL/ca.pem \
  --initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster=${ETCD_CLUSTER} \
  --initial-cluster-state new \
  --data-dir=${ETCD_DATA_DIR}
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • Configuration file
~]# cat /etc/etcd/etcd.conf
# Modify these values for each node
ETCD_NAME=etcd1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.14.9:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.14.9:2379"

#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.14.9:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.14.9:2379"
ETCD_CLUSTER="etcd1=https://192.168.14.9:2380,etcd2=https://192.168.14.34:2380,etcd3=https://192.168.14.203:2380"
  • Start the node
~]# chown -R etcd /etc/etcd/etcdSSL
~]# systemctl daemon-reload
~]# systemctl enable etcd
~]# systemctl start etcd

Repeat these steps on all three nodes.

Verifying the etcd Cluster

~]# etcdctl --ca-file=/etc/etcd/etcdSSL/ca.pem --cert-file=/etc/etcd/etcdSSL/etcd.pem --key-file=/etc/etcd/etcdSSL/etcd-key.pem cluster-health
member 2f37f9544bd71c49 is healthy: got healthy result from https://192.168.14.203:2379
member 78ff39a4f11f0d65 is healthy: got healthy result from https://192.168.14.9:2379
member 945d4e6858099e77 is healthy: got healthy result from https://192.168.14.34:2379
cluster is healthy
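
A simple write/read/delete smoke test against the cluster (the /test key is arbitrary; these are etcd v2 API commands, matching cluster-health above):

~]# alias ectl='etcdctl --ca-file=/etc/etcd/etcdSSL/ca.pem --cert-file=/etc/etcd/etcdSSL/etcd.pem --key-file=/etc/etcd/etcdSSL/etcd-key.pem'
~]# ectl set /test hello     # write
~]# ectl get /test           # read back: hello
~]# ectl rm /test            # clean up
~]# ectl member list         # lists the three members and marks the leader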

Kubernetes Master Node Installation

Preparing the TLS Certificates Needed by the Kubernetes Cluster

The certificates needed to deploy the Kubernetes services are:

  1. Root CA certificate and private key: ca.pem and ca.key
  2. API server certificate and private key: apiserver.pem and apiserver.key
  3. Cluster administrator certificate and private key: admin.pem and admin.key
  4. Node proxy certificate and private key
  5. The node kubelet's certificate and key are produced through the bootstrap flow: the kubelet generates a CSR at startup, and the certificate is issued once that CSR is approved on the master

Creating the CA Certificate

openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -days 100000 -out ca.pem -subj "/CN=kubernetes/O=k8s"

Creating the apiserver Certificate

  • Create openssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
DNS.6 = k8s_master
IP.1 = 10.0.6.1              # cluster service IP
IP.2 = 192.168.14.9           # master IP
IP.3 = 10.0.6.200            # kubernetes DNS IP
  • Generate the apiserver certificate
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -out apiserver.csr -subj "/CN=kubernetes/O=k8s" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out apiserver.pem -days 3650 -extensions v3_req -extfile openssl.cnf

Generating the admin Cluster Administrator Certificate

openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -out admin.csr -subj "/CN=admin/O=system:masters/OU=System"
openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out admin.pem -days 3650

Generating the Node Proxy Certificate

openssl genrsa -out proxy.key 2048
openssl req -new -key proxy.key -out proxy.csr -subj "/CN=system:kube-proxy"
openssl x509 -req -in proxy.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out proxy.pem -days 3650

Staging the Master Node Certificates and Components

mkdir /etc/kubernetes/kubernetesTLS
cp ca.pem ca.key apiserver.key apiserver.pem admin.key admin.pem proxy.key proxy.pem /etc/kubernetes/kubernetesTLS
cp kube-apiserver /usr/local/bin
cp kube-scheduler /usr/local/bin
cp kube-controller-manager /usr/local/bin

Installing kube-apiserver

  • Create the TLS bootstrapping token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /etc/kubernetes/BOOTSTRAP_TOKEN << EOF
$BOOTSTRAP_TOKEN
EOF
cat > /etc/kubernetes/token.csv << EOF
$BOOTSTRAP_TOKEN,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
  • Create the cluster parameters for the admin user
# Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/kubernetesTLS/ca.pem --embed-certs=true --server=https://192.168.14.9:6443

# Set the administrator credentials
kubectl config set-credentials admin --client-certificate=/etc/kubernetes/kubernetesTLS/admin.pem --client-key=/etc/kubernetes/kubernetesTLS/admin.key --embed-certs=true

# Set the administrator context
kubectl config set-context kubernetes --cluster=kubernetes --user=admin

# Use this context as the cluster default
kubectl config use-context kubernetes
  • Configure kube-apiserver
cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kube-apiserver Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=network.target
[Service]
Type=notify
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=always
LimitNOFILE=65536

[Install]
WantedBy=default.target

cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
# Whether error logs go to a file or to stderr.
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
# Log level; 0 is the debug level.
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
# Allow privileged containers.
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
# The master API server address.
KUBE_MASTER="--master=http://192.168.14.9:8080"

cat /etc/kubernetes/apiserver
###
## kubernetes system config
##
## The following values are used to configure the kube-apiserver
##
#
## The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=192.168.14.9 --bind-address=192.168.14.9 --insecure-bind-address=192.168.14.9"
#
## The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"
#
## Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"
#
## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.14.9:2379,https://192.168.14.34:2379,https://192.168.14.203:2379"
#
## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.6.0/24"
#
## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota,NodeRestriction"

## Add your own!
KUBE_API_ARGS="--authorization-mode=Node,RBAC  --runtime-config=rbac.authorization.k8s.io/v1beta1  --kubelet-https=true  --token-auth-file=/etc/kubernetes/token.csv  --service-node-port-range=30000-32767  --tls-cert-file=/etc/kubernetes/kubernetesTLS/apiserver.pem  --tls-private-key-file=/etc/kubernetes/kubernetesTLS/apiserver.key  --client-ca-file=/etc/kubernetes/kubernetesTLS/ca.pem  --service-account-key-file=/etc/kubernetes/kubernetesTLS/ca.key  --storage-backend=etcd3  --etcd-cafile=/etc/etcd/etcdSSL/ca.pem  --etcd-certfile=/etc/etcd/etcdSSL/etcd.pem  --etcd-keyfile=/etc/etcd/etcdSSL/etcd-key.pem  --enable-swagger-ui=true  --apiserver-count=3  --audit-log-maxage=30  --audit-log-maxbackup=3  --audit-log-maxsize=100  --audit-log-path=/var/lib/audit.log  --event-ttl=1h"
  • Start kube-apiserver
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
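
Once running, the apiserver can be probed on both ports configured above; each should answer "ok" (a verification sketch, not from the original):

# Secure port 6443, authenticated with the admin client certificate
curl --cacert /etc/kubernetes/kubernetesTLS/ca.pem \
  --cert /etc/kubernetes/kubernetesTLS/admin.pem \
  --key /etc/kubernetes/kubernetesTLS/admin.key \
  https://192.168.14.9:6443/healthz
# Insecure port 8080, bound to 192.168.14.9 via --insecure-bind-address
curl http://192.168.14.9:8080/healthz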

Installing kube-scheduler

cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kube-scheduler Service
After=network.target

[Service]
Type=simple
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/local/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS

Restart=always
LimitNOFILE=65536

[Install]
WantedBy=default.target

cat /etc/kubernetes/scheduler
# The following values are used to configure the kubernetes scheduler

# defaults from config and scheduler should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"
  • Start kube-scheduler
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler

Installing kube-controller-manager

cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kube-controller-manager Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
Type=simple
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=always
LimitNOFILE=65536

[Install]
WantedBy=default.target

cat /etc/kubernetes/controller-manager
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=" --address=127.0.0.1  --service-cluster-ip-range=10.0.6.0/24  --cluster-name=kubernetes  --cluster-signing-cert-file=/etc/kubernetes/kubernetesTLS/ca.pem  --cluster-signing-key-file=/etc/kubernetes/kubernetesTLS/ca.key  --service-account-private-key-file=/etc/kubernetes/kubernetesTLS/ca.key  --root-ca-file=/etc/kubernetes/kubernetesTLS/ca.pem  --leader-elect=true  --cluster-cidr=172.16.0.0/16"
  • Start kube-controller-manager
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
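
With all three control-plane components up, their health can be checked through kubectl; expect output along these lines, and inspect the journal of any component that is not Healthy:

kubectl get cs
# NAME                 STATUS    MESSAGE              ERROR
# scheduler            Healthy   ok
# controller-manager   Healthy   ok
# etcd-0/1/2           Healthy   {"health": "true"}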

Creating the kubeconfig Files and Related Cluster Parameters

export kubernetesTLSDir=/etc/kubernetes/kubernetesTLS
export kubernetesDir=/etc/kubernetes
## Set the cluster parameters for kube-proxy
kubectl config set-cluster kubernetes \
--certificate-authority=$kubernetesTLSDir/ca.pem \
--embed-certs=true \
--server=https://192.168.14.9:6443 \
--kubeconfig=$kubernetesDir/kube-proxy.kubeconfig

## Set the credentials for the kube-proxy user
kubectl config set-credentials kube-proxy \
--client-certificate=$kubernetesTLSDir/proxy.pem \
--client-key=$kubernetesTLSDir/proxy.key \
--embed-certs=true \
--kubeconfig=$kubernetesDir/kube-proxy.kubeconfig

## Set the context for the kube-proxy user in the kubernetes cluster
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=$kubernetesDir/kube-proxy.kubeconfig

## Use this context as the default for kube-proxy
kubectl config use-context default --kubeconfig=$kubernetesDir/kube-proxy.kubeconfig

Creating the Kubelet Bootstrapping kubeconfig File and Cluster Parameters

export BOOTSTRAP_TOKEN=`cat /etc/kubernetes/BOOTSTRAP_TOKEN`
## Set the cluster parameters for the kubelet
kubectl config set-cluster kubernetes \
--certificate-authority=$kubernetesTLSDir/ca.pem \
--embed-certs=true \
--server=https://192.168.14.9:6443 \
--kubeconfig=$kubernetesDir/bootstrap.kubeconfig

## Set the credentials for the kubelet user
kubectl config set-credentials kubelet-bootstrap \
--token=$BOOTSTRAP_TOKEN \
--kubeconfig=$kubernetesDir/bootstrap.kubeconfig

## Set the context for the kubelet user in the kubernetes cluster
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=$kubernetesDir/bootstrap.kubeconfig

## Use this context as the default for the kubelet
kubectl config use-context default \
--kubeconfig=$kubernetesDir/bootstrap.kubeconfig

## Create the RBAC role binding for the kubelet bootstrap user
kubectl create --insecure-skip-tls-verify clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

Kubernetes Node Installation

Copy the Master Node Configuration to the Nodes

cd /etc/kubernetes

scp -r bootstrap.kubeconfig config kube-proxy.kubeconfig kubernetesTLS 192.168.14.203:/etc/kubernetes

scp -r bootstrap.kubeconfig config kube-proxy.kubeconfig kubernetesTLS 192.168.14.34:/etc/kubernetes
  • The flannel network on every node talks to the etcd cluster, so the etcd certificates must be present on the nodes as well
ls /etc/etcd/etcdSSL
ca.pem  etcd-key.pem  etcd.pem

scp kubelet kube-proxy 192.168.14.203:/usr/local/bin
scp kubelet kube-proxy 192.168.14.34:/usr/local/bin

Installing docker-ce

This installs the latest docker-ce. Download the packages from the official repository:
https://download.docker.com/linux/centos/7/x86_64/stable/Packages/
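
A sketch of a local install from the downloaded RPMs (exact file names depend on the version fetched; recent docker-ce releases also require the containerd.io and docker-ce-cli packages):

yum localinstall -y containerd.io-*.rpm docker-ce-cli-*.rpm docker-ce-*.rpm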

  • Start docker
systemctl enable docker
systemctl start docker

Installing kubelet

cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_CONFIG \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

cat /etc/kubernetes/kubelet
# kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
#
## The port for the info server to serve on
KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.14.34"
#
## location of the api-server
KUBELET_CONFIG="--kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
#
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=reg.ops.com/google_containers/pause-amd64:3.1"
#
## Add your own!
KUBELET_ARGS="--cluster-dns=10.0.6.200  --serialize-image-pulls=false  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig  --cert-dir=/etc/kubernetes/kubernetesTLS  --cluster-domain=cluster.local.  --hairpin-mode=promiscuous-bridge"
  • Start kubelet
systemctl enable kubelet
systemctl start kubelet
  • Approve the CSR on the master node
kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-m1zSkjPvdWiDfS6Tpct_XMULRZ5uZ4UoSSH9Exx7gjk   13s       kubelet-bootstrap   Pending

kubectl certificate approve node-csr-m1zSkjPvdWiDfS6Tpct_XMULRZ5uZ4UoSSH9Exx7gjk
certificatesigningrequest.certificates.k8s.io/node-csr-m1zSkjPvdWiDfS6Tpct_XMULRZ5uZ4UoSSH9Exx7gjk approved

kubectl get node
NAME            STATUS    ROLES     AGE       VERSION
192.168.14.34   Ready     <none>    14s       v1.11.0

Installing kube-proxy

cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kube Proxy Service
After=network.target

[Service]
Type=simple
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS

Restart=always
LimitNOFILE=65536

[Install]
WantedBy=default.target

cat /etc/kubernetes/proxy
###
# kubernetes proxy config

# defaults from config and proxy should be adequate

# Add your own!
KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig  --cluster-cidr=172.16.0.0/16"
  • Start kube-proxy
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
  • Set the iptables FORWARD chain policy to ACCEPT
iptables -P FORWARD ACCEPT
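
This policy does not survive a reboot, and dockerd resets the FORWARD policy to DROP when it starts. One way to keep it in place (an assumption of this setup, not the only option) is a systemd drop-in that reapplies it after docker starts:

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/10-forward-accept.conf << EOF
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
systemctl daemon-reload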

Installing the Network Plugin

Installing the flannel Network Plugin

flannel must be installed on the master as well as on every node.

~]# yum install flannel

~]# cat > flannel-config.json <<  EOF
{
    "Network": "172.16.0.0/16",
    "SubnetLen": 24,
    "Backend": {
        "Type": "vxlan"
    }
}
EOF

~]# etcdctl --ca-file=/etc/etcd/etcdSSL/ca.pem  --cert-file=/etc/etcd/etcdSSL/etcd.pem   --key-file=/etc/etcd/etcdSSL/etcd-key.pem set /k8s/network/config < flannel-config.json

~]# cat /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://192.168.14.9:2379,https://192.168.14.34:2379,https://192.168.14.203:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/k8s/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="--etcd-cafile=/etc/etcd/etcdSSL/ca.pem  --etcd-certfile=/etc/etcd/etcdSSL/etcd.pem  --etcd-keyfile=/etc/etcd/etcdSSL/etcd-key.pem --log_dir=/var/log/k8s/flannel/"
  • Start the flannel network
systemctl enable flanneld
systemctl start flanneld
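
After flanneld is up, each host should have a flannel.1 VXLAN device plus routes to the other nodes' subnets (a quick check, not in the original):

ip -d link show flannel.1    # vxlan device created by flanneld
ip route | grep flannel.1    # one route per remote node /24 subnet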

Using the flannel Network with docker

~]# etcdctl   --ca-file=/etc/etcd/etcdSSL/ca.pem   --cert-file=/etc/etcd/etcdSSL/etcd.pem   --key-file=/etc/etcd/etcdSSL/etcd-key.pem ls /k8s/network/subnets
/k8s/network/subnets/172.16.54.0-24
/k8s/network/subnets/172.16.91.0-24
/k8s/network/subnets/172.16.81.0-24
  • Each node is assigned its own subnet, recorded locally
~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.16.0.0/16
FLANNEL_SUBNET=172.16.54.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
  • Apply the flannel network to docker on the node
~]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service flanneld.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=-/var/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock  --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
  • Start docker
systemctl daemon-reload
systemctl enable docker
systemctl start docker
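
After the restart, docker0 should take its address from FLANNEL_SUBNET, and containers on different nodes become mutually reachable (subnet values follow the example lease above):

ip addr show docker0     # e.g. inet 172.16.54.1/24, matching FLANNEL_SUBNET
# Cross-node check: ping another node's docker0 gateway
ping -c 2 172.16.91.1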

CoreDNS Installation

~]# cat coredns.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local 10.0.6.0/24 {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        reload
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.6.200
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
  • Deploy coredns
kubectl apply -f coredns.yml
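
To verify cluster DNS end to end, resolve the kubernetes service from a throwaway pod (busybox:1.28 is an assumption here; substitute any image with nslookup that your nodes can pull):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup kubernetes.default
# The server should be 10.0.6.200 (kube-dns) and the name should
# resolve to 10.0.6.1, the apiserver's cluster service IP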

Traefik Ingress Installation

  • traefik rbac
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
    - extensions
    resources:
    - ingresses/status
    verbs:
    - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
  • traefik daemonset
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
          hostPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
  • traefik ui ingress
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: web
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
  - host: traefik-ui.ops.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: web
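
After applying the three manifests, the UI should answer through Traefik itself via the ingress rule (a quick check; the file names are illustrative, 192.168.14.34 stands in for any node running a traefik pod, and the Host header substitutes for a DNS entry):

kubectl apply -f traefik-rbac.yml -f traefik-ds.yml -f traefik-ui.yml
curl -H "Host: traefik-ui.ops.com" http://192.168.14.34/   # expect the dashboard HTML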

Nginx Ingress Installation

  • namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  • default-backend.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: reg.ops.com/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
  • configmap.yml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
  • tcp-services-configmap.yml
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  • udp-services-configmap.yml
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  • rbac.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
        - events
    verbs:
        - create
        - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
  • with-rbac.yml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      nodeSelector:
        custom/ingress-controller-ready: "true"
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller
          image: reg.ops.com/google_containers/nginx-ingress-controller:0.11.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
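
Because the DaemonSet above carries the nodeSelector custom/ingress-controller-ready, the controller only schedules onto nodes that carry that label, so label the intended nodes and then apply the manifests in order:

kubectl label node 192.168.14.34 custom/ingress-controller-ready=true
kubectl label node 192.168.14.203 custom/ingress-controller-ready=true
kubectl apply -f namespace.yml -f default-backend.yml -f configmap.yml \
  -f tcp-services-configmap.yml -f udp-services-configmap.yml \
  -f rbac.yml -f with-rbac.yml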

kube dashboard

  • kubernetes-dashboard
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcrxio/kubernetes-dashboard-amd64:v1.11.0
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          #- --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 80
      targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
  • kubernetes-dashboard-ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  rules:
  - host: kube-dashboard.ops.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80
  • kubernetes-dashboard-admin
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
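
Apply the three manifests and confirm the pod is running; the UI is then reachable through the ingress controller by Host header (file names are illustrative; note this binding grants the dashboard service account cluster-admin, so restrict who can reach it accordingly):

kubectl apply -f kubernetes-dashboard.yml -f kubernetes-dashboard-ingress.yml -f kubernetes-dashboard-admin.yml
kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard
curl -H "Host: kube-dashboard.ops.com" http://192.168.14.34/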