Introduction to ConfigMap
A ConfigMap is a Kubernetes API object used to decouple configuration files from container images (much like the configuration centers we use outside Kubernetes to decouple code from config), which makes images portable and reusable. Pods can consume a ConfigMap as environment variables, command-line arguments, or configuration files in a volume. In production, using it for environment-variable configuration is very common.
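As a sketch of the environment-variable usage mentioned above (the ConfigMap name `app-config` and key `LOG_LEVEL` below are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "env && sleep 3600"]
    env:
    - name: LOG_LEVEL             # inject a single key as one variable
      valueFrom:
        configMapKeyRef:
          name: app-config        # hypothetical ConfigMap
          key: LOG_LEVEL
    envFrom:
    - configMapRef:
        name: app-config          # or import every key in one go
```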
A closely related API object is the Secret.
The difference between the two: a ConfigMap stores non-sensitive, non-confidential data, such as IPs and ports, while a Secret stores sensitive, confidential data, such as usernames, passwords, and keys, stored base64-encoded.
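The base64 step for a Secret can be reproduced by hand; the username `admin` below is just an illustration. Note that base64 is an encoding, not encryption, so a Secret is not cryptographically protected by itself:

```shell
# Encode a value for a Secret manifest's `data` field
printf '%s' 'admin' | base64        # YWRtaW4=
# Decode it back to verify
printf '%s' 'YWRtaW4=' | base64 -d  # admin
```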
More details on ConfigMap are in the official documentation:
https://kubernetes.io/zh-cn/d...
Constraints on Using ConfigMap
- A ConfigMap must be created before any Pod that uses it starts, since Pods consume it at startup.
- A Pod can reference a ConfigMap only when the two are in the same namespace.
- When a Pod mounts a ConfigMap volume (volumeMounts), the volume is mounted as a directory inside the container by default; to mount an individual key as a single file, use `subPath`.
- When the mount target is an existing directory that already contains other files, the ConfigMap volume hides those files.
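The last two points are why the manifest in this article uses `subPath`. A fragment, assuming a volume named `conf` backed by the ConfigMap: mounting the whole volume at `/etc/nginx` would hide everything already in that directory, while `subPath` overlays a single file and leaves its neighbors alone:

```yaml
volumeMounts:
- mountPath: /etc/nginx/nginx.conf  # only this one file is replaced
  name: conf                        # volume backed by the ConfigMap
  subPath: nginx.conf               # select a single key from the volume
```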
Hands-On
For this exercise, the initial YAML contains three parts:
- ConfigMap
- Deployment
- Service
These three parts could also be split into three YAML files and applied separately.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    user nginx;
    worker_processes 2;
    error_log /var/log/nginx/error.log;
    events {
        worker_connections 1024;
    }
    http {
        include mime.types;
        #sendfile on;
        keepalive_timeout 1800;
        log_format main
            'remote_addr:$remote_addr '
            'time_local:$time_local '
            'method:$request_method '
            'uri:$request_uri '
            'host:$host '
            'status:$status '
            'bytes_sent:$body_bytes_sent '
            'referer:$http_referer '
            'useragent:$http_user_agent '
            'forwardedfor:$http_x_forwarded_for '
            'request_time:$request_time';
        access_log /var/log/nginx/access.log main;
        server {
            listen 80;
            server_name localhost;
            location / {
                root html;
                index index.html index.htm;
            }
            error_page 500 502 503 504 /50x.html;
        }
        include /etc/nginx/conf.d/*.conf;
    }
  virtualhost.conf: |
    upstream app {
        server localhost:8080;
        keepalive 1024;
    }
    server {
        listen 80 default_server;
        root /usr/local/app;
        access_log /var/log/nginx/app.access_log main;
        error_log /var/log/nginx/app.error_log;
        location / {
            proxy_pass http://app/;
            proxy_http_version 1.1;
        }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-demo-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-demo-nginx
  template:
    metadata:
      labels:
        app: my-demo-nginx
    spec:
      containers:
      - name: my-demo-nginx
        image: 192.168.100.20:8888/my-demo/nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/nginx.conf  # mount the nginx.conf key of the volume as /etc/nginx/nginx.conf
          #readOnly: true
          #name: nginx-conf
          #name: my-demo-nginx
          name: nginx
          subPath: nginx.conf
        - mountPath: /var/log/nginx
          name: log
      volumes:
      - name: nginx
        configMap:
          name: nginx-conf  # place ConfigMap `nginx-conf` under /etc/nginx
          items:
          - key: nginx.conf
            path: nginx.conf
          - key: virtualhost.conf
            path: conf.d/virtualhost.conf  # creates the conf.d directory inside the mount
      - name: log
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service  # the Service is named nginx-service
  labels:
    app: nginx-service  # attach an app label to the Service
spec:
  type: NodePort  # NodePort: allocate a port on every Node as the external entry point
  #type: LoadBalancer  # works on a specific cloud provider, e.g. Google Cloud, AWS, OpenStack
  #type: ClusterIP  # default: allocate a virtual IP (VIP) reachable only inside the cluster
  ports:
  - port: 8000  # port: how the Service is reached inside the cluster, via clusterIP:port
    targetPort: 80  # targetPort: the Pod's port; traffic from port/nodePort flows through kube-proxy to this port, then into the container
    nodePort: 32500  # nodePort: how the Service is reached from outside the cluster, via nodeIP:nodePort
  selector:
    app: my-demo-nginx  # must match the labels on the Deployment's Pod template
Running this YAML hit the first problem:
In this Deployment, spec.template.spec.containers.volumeMounts.name only succeeds with the value `nginx`. With `my-demo-nginx` the error is:
[root@k8s-master k8s-install]# kubectl create -f configmap-nginx.yaml
configmap/nginx-conf created
service/nginx-service created
The Deployment "my-demo-nginx" is invalid: spec.template.spec.containers[0].volumeMounts[0].name: Not found: "my-demo-nginx"
With `nginx-conf` the error is:
[root@k8s-master k8s-install]# kubectl apply -f configmap-nginx.yaml
Warning: resource configmaps/nginx-conf is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
configmap/nginx-conf configured
Warning: resource services/nginx-service is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
service/nginx-service configured
The Deployment "my-demo-nginx" is invalid: spec.template.spec.containers[0].volumeMounts[0].name: Not found: "nginx-conf"
Only after changing it to `nginx` does it work:
[root@k8s-master k8s-install]# kubectl apply -f configmap-nginx.yaml
configmap/nginx-conf unchanged
deployment.apps/my-demo-nginx created
service/nginx-service unchanged
The cause of the error: spec.template.spec.volumes.name is `nginx`, and the two fields must match. (This is one of the painful parts of writing and editing YAML manifests: the more fields, the more pitfalls. There is no official document that spells out what every field means, which fields must agree with each other, and what to watch out for where; much of it you have to discover through practice, which is also why YAML best practices are hard to pin down.)
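The matching rule, isolated: each `volumeMounts[].name` must equal the `name` of some entry under `volumes`, while `configMap.name` must equal the ConfigMap's `metadata.name`. Three distinct names are in play here:

```yaml
containers:
- name: my-demo-nginx          # container name: unrelated to volume names
  volumeMounts:
  - name: nginx                # must match volumes[].name below
    mountPath: /etc/nginx/nginx.conf
    subPath: nginx.conf
volumes:
- name: nginx                  # the name that volumeMounts refers to
  configMap:
    name: nginx-conf           # must match the ConfigMap's metadata.name
```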
Check whether the ConfigMap, Deployment, Service, and Pods were created:
[root@k8s-master k8s-install]# kubectl get cm nginx-conf
NAME DATA AGE
nginx-conf 2 12m
[root@k8s-master k8s-install]# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
my-demo-nginx 0/2 2 0 7m29s
[root@k8s-master k8s-install]# kubectl get svc nginx-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-service NodePort 10.103.221.135 <none> 8000:32500/TCP 10m
[root@k8s-master k8s-install]# kubectl get po -A | grep nginx
default my-demo-nginx-7bff4cd4dd-hs4bx 0/1 CrashLoopBackOff 6 (4m18s ago) 10m
default my-demo-nginx-7bff4cd4dd-tdjqd 0/1 CrashLoopBackOff 6 (4m11s ago) 10m
This is where the second problem appeared:
The Pods stayed in CrashLoopBackOff, meaning they never ran properly.
Both the Pod and container logs were empty.
`kubectl describe` showed only one uninformative event: Back-off restarting failed container. Deleting everything just deployed and recreating it repeatedly didn't help either.
[root@k8s-master k8s-install]# kubectl describe po my-demo-nginx-9c5b6cc8c-5bws7
Name: my-demo-nginx-9c5b6cc8c-5bws7
Namespace: default
Priority: 0
Node: k8s-slave2/192.168.100.22
Start Time: Tue, 14 Jun 2022 00:13:44 +0800
Labels: app=my-demo-nginx
pod-template-hash=9c5b6cc8c
Annotations: <none>
Status: Running
IP: 10.244.1.25
IPs:
IP: 10.244.1.25
Controlled By: ReplicaSet/my-demo-nginx-9c5b6cc8c
Containers:
my-demo-nginx:
Container ID: docker://cd6c27ee399cce85adf64465dce43e7b361c92f5f85b46944a2749068a111a5e
Image: 192.168.100.20:8888/my-demo/nginx
Image ID: docker-pullable://192.168.100.20:5000/mynginx@sha256:a12ca72a7db4dfdd981003da98491c8019d2de1da53c1bff4f1b378da5b1ca32
Port: 80/TCP
Host Port: 0/TCP
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 14 Jun 2022 00:14:22 +0800
Finished: Tue, 14 Jun 2022 00:14:22 +0800
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 14 Jun 2022 00:13:59 +0800
Finished: Tue, 14 Jun 2022 00:13:59 +0800
Ready: False
Restart Count: 3
Environment: <none>
Mounts:
/etc/nginx/nginx.conf from nginx (rw,path="nginx.conf")
/var/log/nginx from log (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nxbxk (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
nginx:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: nginx-conf
Optional: false
log:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-nxbxk:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 39s default-scheduler Successfully assigned default/my-demo-nginx-9c5b6cc8c-5bws7 to k8s-slave2
Normal Pulled 38s kubelet Successfully pulled image "192.168.100.20:8888/my-demo/nginx" in 135.495946ms
Normal Pulled 37s kubelet Successfully pulled image "192.168.100.20:8888/my-demo/nginx" in 68.384569ms
Normal Pulled 24s kubelet Successfully pulled image "192.168.100.20:8888/my-demo/nginx" in 81.06582ms
Normal Created 1s (x4 over 38s) kubelet Created container my-demo-nginx
Normal Started 1s (x4 over 38s) kubelet Started container my-demo-nginx
Normal Pulling 1s (x4 over 38s) kubelet Pulling image "192.168.100.20:8888/my-demo/nginx"
Normal Pulled 1s kubelet Successfully pulled image "192.168.100.20:8888/my-demo/nginx" in 178.317289ms
Warning BackOff 0s (x5 over 36s) kubelet Back-off restarting failed container
Solution:
Add the following three lines, the command, args, and imagePullPolicy parameters, below the spec.template.spec.containers.image line:
    spec:
      containers:
      - name: my-demo-nginx
        image: 192.168.100.20:8888/my-demo/nginx
        command: ["/bin/bash", "-c", "--"]
        args: ["while true; do sleep 30; done;"]
        imagePullPolicy: IfNotPresent
After redeploying, the Pod finally runs:
[root@k8s-master k8s-install]# kubectl create -f configmap-nginx.yaml
configmap/nginx-conf created
deployment.apps/my-demo-nginx created
service/nginx-service created
[root@k8s-master k8s-install]#
[root@k8s-master k8s-install]# kubectl get po
NAME READY STATUS RESTARTS AGE
my-demo-nginx-6849476467-7bml4 1/1 Running 0 10s
Cause:
The Pod failed to run because the referenced nginx image itself was broken: the nginx process inside the container never started, so neither the container nor the Pod could run. With the parameters above added, /bin/bash replaces nginx as the root process (PID 1), and the container runs fine. This follows from how containers are designed to run; if this is unfamiliar, review the container basics.
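This container lifecycle rule (a container runs only as long as its PID 1 process) can be sketched outside any cluster with plain bash; `timeout` here only bounds the demo, where in a Pod the loop would run indefinitely:

```shell
# Case 1: the entrypoint exits at once (like the broken nginx binary here);
# in a Pod, kubelet restarts it, producing CrashLoopBackOff.
bash -c 'echo "simulated broken entrypoint"; exit 1'
echo "entrypoint exited with status $?"

# Case 2: the entrypoint blocks forever (the while/sleep loop from the fix);
# PID 1 never exits, so the container keeps running.
timeout 2 bash -c 'while true; do sleep 30; done'
echo "loop was still alive when timeout stopped it (status $?)"
```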
Knowing the cause, it becomes clear that of the three parameters added in this exercise, only one is essential: command. The other two are optional, so a single parameter would also do, for example: command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
Had the referenced nginx image been fine, the second problem would never have appeared. So here is the troubleshooting takeaway: when a Pod is stuck in CrashLoopBackOff and both the Pod and container logs are empty, suspect the container image itself. In this case, the nginx process that should start when the image runs never did, and adding a command parameter made that easy to verify.
As shown below, once the Pod and container start normally (thanks to the added command parameter), logging into the container reveals that the root process is /bin/bash rather than nginx, confirming the diagnosis:
[root@k8s-master ~]# kubectl exec my-demo-nginx-6849476467-7bml4 -c my-demo-nginx -it -- /bin/bash
[root@my-demo-nginx-6849476467-7bml4 /]#
[root@my-demo-nginx-6849476467-7bml4 /]# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 11904 1512 ? Ss Jun14 0:00 /bin/bash -c -- while true; do sleep 30; done;
root 1463 0.0 0.1 12036 2096 pts/0 Ss 03:34 0:00 /bin/bash
root 1479 0.0 0.0 23032 932 ? S 03:34 0:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 30
root 1480 0.0 0.0 44652 1772 pts/0 R+ 03:34 0:00 ps aux
The final, complete configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    user nginx;
    worker_processes 2;
    error_log /var/log/nginx/error.log;
    events {
        worker_connections 1024;
    }
    http {
        include mime.types;
        #sendfile on;
        keepalive_timeout 1800;
        log_format main
            'remote_addr:$remote_addr '
            'time_local:$time_local '
            'method:$request_method '
            'uri:$request_uri '
            'host:$host '
            'status:$status '
            'bytes_sent:$body_bytes_sent '
            'referer:$http_referer '
            'useragent:$http_user_agent '
            'forwardedfor:$http_x_forwarded_for '
            'request_time:$request_time';
        access_log /var/log/nginx/access.log main;
        server {
            listen 80;
            server_name localhost;
            location / {
                root html;
                index index.html index.htm;
            }
            error_page 500 502 503 504 /50x.html;
        }
        include /etc/nginx/conf.d/*.conf;
    }
  virtualhost.conf: |
    upstream app {
        server localhost:8080;
        keepalive 1024;
    }
    server {
        listen 80 default_server;
        root /usr/local/app;
        access_log /var/log/nginx/app.access_log main;
        error_log /var/log/nginx/app.error_log;
        location / {
            proxy_pass http://app/;
            proxy_http_version 1.1;
        }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-demo-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-demo-nginx
  template:
    metadata:
      labels:
        app: my-demo-nginx
    spec:
      containers:
      - name: my-demo-nginx
        image: 192.168.100.20:8888/my-demo/nginx
        command: ["/bin/bash", "-c", "--"]
        args: ["while true; do sleep 30; done;"]
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/nginx.conf  # mount the nginx.conf key of the volume as /etc/nginx/nginx.conf
          #readOnly: true
          #name: nginx-conf
          #name: my-demo-nginx
          name: nginx
          subPath: nginx.conf
        - mountPath: /var/log/nginx
          name: log
      volumes:
      - name: nginx
        configMap:
          name: nginx-conf  # place ConfigMap `nginx-conf` under /etc/nginx
          items:
          - key: nginx.conf
            path: nginx.conf
          - key: virtualhost.conf
            path: conf.d/virtualhost.conf  # creates the conf.d directory inside the mount
      - name: log
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service  # the Service is named nginx-service
  labels:
    app: nginx-service  # attach an app label to the Service
spec:
  type: NodePort  # NodePort: allocate a port on every Node as the external entry point
  #type: LoadBalancer  # works on a specific cloud provider, e.g. Google Cloud, AWS, OpenStack
  #type: ClusterIP  # default: allocate a virtual IP (VIP) reachable only inside the cluster
  ports:
  - port: 8000  # port: how the Service is reached inside the cluster, via clusterIP:port
    targetPort: 80  # targetPort: the Pod's port; traffic from port/nodePort flows through kube-proxy to this port, then into the container
    nodePort: 32500  # nodePort: how the Service is reached from outside the cluster, via nodeIP:nodePort
  selector:
    app: my-demo-nginx  # must match the labels on the Deployment's Pod template