Problems encountered during deployment

The mounted directory has no write permission


By overriding the container's start command, we can see that the process runs as user ID 1001.

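A quick way to reproduce this check is to override the image's entrypoint so it only prints the current user identity. The command below is a sketch of one possible check, not taken from the original deployment:

# Run the Bitnami image with the entrypoint replaced by `id`; the UID printed should be 1001
docker run --rm --entrypoint id bitnami/zookeeper:3.8.2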

The official documentation also notes this.

Solution 1: use an initContainer to grant ownership

initContainers:
  - name: init
    image: busybox:1.28
    command: ['sh', '-c', "chown -R 1001:1001 /bitnami/"]
    volumeMounts:
      - name: data
        mountPath: /bitnami/zookeeper

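Once a pod has started, you can confirm that the initContainer actually changed the ownership of the mount point. The command below is a sketch and assumes the pod name produced by the StatefulSet shown later:

# The data directory should now be owned by UID/GID 1001
kubectl exec zookeeper-0 -- ls -ldn /bitnami/zookeeper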

Solution 2: add a security context and run as the root user

This weakens the container's security and is not recommended!

securityContext:
  runAsUser: 0
  runAsGroup: 0

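For context, this block can be placed at the pod level of the StatefulSet template (it can also be set on the individual container). The fragment below only sketches where it sits; it is not the full manifest:

# Pod-level securityContext inside the StatefulSet pod template (placement sketch only)
spec:
  template:
    spec:
      securityContext:
        runAsUser: 0
        runAsGroup: 0
      containers:
        - name: zookeeper
          image: bitnami/zookeeper:3.8.2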

The final YAML file

apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zookeeper
spec:
  ports:
    - name: tcp-client
      protocol: TCP
      port: 2181
      targetPort: 2181
    - name: tcp-follower
      port: 2888
      targetPort: 2888
    - name: tcp-election
      port: 3888
      targetPort: 3888
  selector:
    app: zookeeper
  clusterIP: None
  type: ClusterIP

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
  labels:
    app: zookeeper
spec:
  replicas: 3
  selector:
    matchLabels:
      app: zookeeper
  serviceName: zk-hs
  template:
    metadata:
      name: zookeeper
      labels:
        app: zookeeper
    spec:
      # Use an initContainer to fix the missing write permission on the mounted directory
      initContainers:
        - name: init
          image: busybox:1.28
          command: ['sh', '-c', "chown -R 1001:1001 /bitnami/"]
          volumeMounts:
            - name: data
              mountPath: /bitnami/zookeeper

      containers:
        - name: zookeeper
          image: bitnami/zookeeper:3.8.2
          command:
            - bash
            - '-ec'
            - |
              HOSTNAME="$(hostname -s)"
              if [[ $HOSTNAME =~ (.*)-([0-9]+)$ ]]; then
                ORD=${BASH_REMATCH[2]}
                export ZOO_SERVER_ID="$((ORD + 1))"
              else
                echo "Failed to get index from hostname $HOSTNAME"
                exit 1
              fi
              exec /entrypoint.sh /run.sh
          resources:
            limits:
              cpu: 1
              memory: 2Gi
            requests:
              cpu: 50m
              memory: 500Mi
          env:
            - name: ZOO_ENABLE_AUTH
              value: "no"
            - name: ALLOW_ANONYMOUS_LOGIN
              value: "yes"
            - name: ZOO_SERVERS
              value: >
                zookeeper-0.zk-hs.default.svc.cluster.local:2888:3888
                zookeeper-1.zk-hs.default.svc.cluster.local:2888:3888
                zookeeper-2.zk-hs.default.svc.cluster.local:2888:3888
          ports:
            - name: client
              containerPort: 2181
            - name: follower
              containerPort: 2888
            - name: election
              containerPort: 3888
          livenessProbe:
            tcpSocket:
              port: client
            failureThreshold: 6
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            tcpSocket:
              port: client
            failureThreshold: 6
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          volumeMounts:
            - name: data
              mountPath: /bitnami/zookeeper
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "longhorn"
        resources:
          requests:
            storage: 2Gi

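To roll out the cluster, apply the manifest and wait for all three pods to become Ready. The file name below is an assumption:

# Apply the Service and StatefulSet (assumed to be saved as zookeeper.yaml)
kubectl apply -f zookeeper.yaml

# Watch the pods start in order: zookeeper-0, zookeeper-1, zookeeper-2
kubectl get pods -l app=zookeeper -w
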
Cluster verification

# Check the FQDN of each pod
for i in 0 1 2; do kubectl exec zookeeper-$i -- hostname -f; done

# Check the generated myid files
for i in 0 1 2; do echo "myid zookeeper-$i"; kubectl exec zookeeper-$i -- cat /bitnami/zookeeper/data/myid; done

# Check the auto-generated configuration file
kubectl exec -it zookeeper-0 -- cat /opt/bitnami/zookeeper/conf/zoo.cfg | grep -vE "^#|^$"

# Check the cluster status
for i in 0 1 2; do echo -e "# myid zookeeper-$i \n"; kubectl exec zookeeper-$i -- /opt/bitnami/zookeeper/bin/zkServer.sh status; echo -e "\n"; done

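To see at a glance which node was elected leader, you can filter the status output for its Mode line (a small convenience sketch, not from the original post):

# Print only each node's role; expect one leader and two followers
for i in 0 1 2; do echo -n "zookeeper-$i: "; kubectl exec zookeeper-$i -- /opt/bitnami/zookeeper/bin/zkServer.sh status 2>/dev/null | grep "Mode:"; done
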
Data verification

kubectl run --rm -it zookeeper-client --image=zookeeper:3.8.2 -- bash

cd /apache-zookeeper-3.8.2-bin/bin

# Connect to the first ZooKeeper node
./zkCli.sh -server zookeeper-0.zk-hs.default.svc.cluster.local:2181

# Create test data
[zk: zookeeper-0.zk-hs.default.svc.cluster.local:2181(CONNECTED) 0] create /test test-data
Created /test

# Read the test data back
[zk: zookeeper-0.zk-hs.default.svc.cluster.local:2181(CONNECTED) 1] get /test
test-data

# Connect to the second ZooKeeper node
./zkCli.sh -server zookeeper-1.zk-hs.default.svc.cluster.local:2181
# Read the data (it should have replicated to this node)
[zk: zookeeper-1.zk-hs.default.svc.cluster.local:2181(CONNECTED) 0] get /test
test-data
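When you are finished, you can optionally remove the test node and leave the client; these are standard zkCli commands rather than steps from the original walkthrough:

# Optional cleanup
[zk: zookeeper-1.zk-hs.default.svc.cluster.local:2181(CONNECTED) 1] delete /test
[zk: zookeeper-1.zk-hs.default.svc.cluster.local:2181(CONNECTED) 2] quit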