How to fix a Helm chart that does not correctly upgrade a deployment
I deploy a project hosted on GitLab to a Google Kubernetes Engine cluster using a custom Helm chart. It works smoothly, but I have problems in the following cases.
- The Helm chart does not upgrade the deployment on Kubernetes even though the newly built image is different. My understanding is that it compares the SHA256 digest of the image deployed on Kubernetes with the new image built in the build stage, and if they differ, it starts a new pod with the new image and terminates the old one. But this is not happening. Initially I suspected the image pullPolicy might be the problem, because it was set to IfNotPresent. I tried setting it to Always, but it still does not work.
- If the image pull policy is set to Always and a pod restarts because of a failure or any other reason, it gives an ImagePullBackOff error. I checked the secrets present in the Kubernetes namespace; the dockerconfigjson secret exists, yet it still gives an authorization error. When I deploy again with a new CI/CD pipeline, it starts working.
Error log
Warning Failed 19m (x4 over 20m) kubelet Failed to pull image "gitlab.digital-worx.de:5050/asvin/asvin-frontend/master:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://gitlab.digital-worx.de:5050/v2/asvin/asvin-frontend/master/manifests/latest: unauthorized: HTTP Basic: Access denied
Warning Failed 19m (x4 over 20m) kubelet Error: ErrImagePull
Warning Failed 25s (x87 over 20m) kubelet Error: ImagePullBackOff
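To see what credentials the pull secret actually contains (and whether they are still valid), you can decode it; the secret name and namespace below are placeholders for your own values:

```shell
# Decode the .dockerconfigjson payload of the image pull secret.
# <secret-name> and <namespace> are placeholders.
kubectl get secret <secret-name> -n <namespace> \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
```

If the decoded JSON contains a token from a finished CI job, the registry will reject pulls even though the secret exists.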
deployment.yaml
{{- if not .Values.application.initializeCommand -}}
apiVersion: {{ default "extensions/v1beta1" .Values.deploymentApiVersion }}
kind: Deployment
metadata:
  name: {{ template "name" . }}
  annotations:
    {{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
    {{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
  labels:
    app: {{ template "name" . }}
    track: "{{ .Values.application.track }}"
    tier: "{{ .Values.application.tier }}"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
    release: {{ .Release.Name }}
    service: {{ .Values.ranking.service.name }}
spec:
{{- if or .Values.enableSelector (eq (default "extensions/v1beta1" .Values.deploymentApiVersion) "apps/v1") }}
  selector:
    matchLabels:
      app: {{ template "name" . }}
      track: "{{ .Values.application.track }}"
      tier: "{{ .Values.application.tier }}"
      release: {{ .Release.Name }}
      service: {{ .Values.ranking.service.name }}
{{- end }}
  replicas: {{ .Values.replicaCount }}
{{- if .Values.strategyType }}
  strategy:
    type: {{ .Values.strategyType | quote }}
{{- end }}
  template:
    metadata:
      annotations:
        checksum/application-secrets: "{{ .Values.application.secretChecksum }}"
        {{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
        {{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
      labels:
        app: {{ template "name" . }}
        track: "{{ .Values.application.track }}"
        tier: "{{ .Values.application.tier }}"
        release: {{ .Release.Name }}
        service: {{ .Values.ranking.service.name }}
    spec:
      volumes:
      {{- if .Values.ranking.configmap }}
      {{end}}
      imagePullSecrets:
{{ toYaml .Values.ranking.image.secrets | indent 10 }}
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.ranking.image.repository }}:{{ .Values.ranking.image.tag }}"
        imagePullPolicy: {{ .Values.ranking.image.pullPolicy }}
        {{- if .Values.application.secretName }}
        envFrom:
        - secretRef:
            name: {{ .Values.application.secretName }}
        {{- end }}
        env:
        - name: INDEXER_URL
          valueFrom:
            secretKeyRef:
              name: {{ .Release.Name }}-secret
              key: INDEXER_URL
        volumeMounts:
        ports:
        - name: "{{ .Values.ranking.service.name }}"
          containerPort: {{ .Values.ranking.service.internalPort }}
        livenessProbe:
{{- if eq .Values.livenessProbe.probeType "httpGet" }}
          httpGet:
            path: {{ .Values.livenessProbe.path }}
            scheme: {{ .Values.livenessProbe.scheme }}
            port: {{ .Values.ranking.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "tcpSocket" }}
          tcpSocket:
            port: {{ .Values.ranking.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "exec" }}
          exec:
            command:
{{ toYaml .Values.livenessProbe.command | indent 14 }}
{{- end }}
          initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
        readinessProbe:
{{- if eq .Values.readinessProbe.probeType "httpGet" }}
          httpGet:
            path: {{ .Values.readinessProbe.path }}
            scheme: {{ .Values.readinessProbe.scheme }}
            port: {{ .Values.ranking.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "tcpSocket" }}
          tcpSocket:
            port: {{ .Values.ranking.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "exec" }}
          exec:
            command:
{{ toYaml .Values.readinessProbe.command | indent 14 }}
{{- end }}
          initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
        resources:
{{ toYaml .Values.resources | indent 12 }}
      restartPolicy: Always
      enableServiceLinks: false
status: {}
{{- end -}}
values.yaml
# Default values for chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
strategyType:
enableSelector:
deploymentApiVersion: apps/v1
ranking:
  name: ranking
  image:
    repository: gitlab.iotcrawler.net:4567/ranking/ranking/master
    tag: latest
    pullPolicy: Always
    secrets:
      - name: gitlab-registry-demonstrator-murcia-parking-iotcrawler
  service:
    enabled: true
    annotations: {}
    name: ranking
    type: ClusterIP
    additionalHosts:
    commonName:
    externalPort: 3003
    internalPort: 3003
  production:
    url: parking.ranking.iotcrawler.eu
  staging:
    url: staging.parking.ranking.iotcrawler.eu
  configmap: true
podAnnotations: {}
application:
  track: latest
  tier: web
  migrateCommand:
  initializeCommand:
  secretName:
  secretChecksum:
hpa:
  enabled: false
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
gitlab:
  app:
  env:
  envName:
  envURL:
ingress:
  enabled: true
  url:
  tls:
    enabled: true
    secretName: ""
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
  modSecurity:
    enabled: false
    secRuleEngine: "DetectionOnly"
    # secRules:
    #   - variable: ""
    #     operator: ""
    #     action: ""
prometheus:
  metrics: false
livenessProbe:
  path: "/"
  initialDelaySeconds: 15
  timeoutSeconds: 15
  scheme: "HTTP"
  probeType: "httpGet"
readinessProbe:
  path: "/"
  initialDelaySeconds: 5
  timeoutSeconds: 3
  scheme: "HTTP"
  probeType: "httpGet"
postgresql:
  enabled: true
  managed: false
  managedClassSelector:
  #   matchLabels:
  #     stack: gitlab (This is an example. The labels should match the labels on the CloudSQLInstanceClass)
resources:
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  requests:
  #   cpu: 100m
  #   memory: 128Mi
## Configure PodDisruptionBudget
## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
#
podDisruptionBudget:
  enabled: false
  # minAvailable: 1
  maxUnavailable: 1
## Configure NetworkPolicy
## ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/
#
networkPolicy:
  enabled: false
  spec:
    podSelector:
      matchLabels: {}
    ingress:
    - from:
      - podSelector:
          matchLabels: {}
      - namespaceSelector:
          matchLabels:
            app.gitlab.com/managed_by: gitlab
workers: {}
# worker:
#   replicaCount: 1
#   terminationGracePeriodSeconds: 60
#   command:
#   - /bin/herokuish
#   - procfile
#   - start
#   - worker
#   preStopCommand:
#   - /bin/herokuish
#   - procfile
#   - start
#   - stop_worker
Solution
helm upgrade does not recreate pods unless you ask for it in the upgrade command.
With Helm 2 you can pass --force --recreate-pods to force the pods to be recreated, like this:
helm upgrade release_name chartname --namespace namespace --install --force --recreate-pods
However, the problem is that you will face downtime. See this answer for more details.
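If you are on Helm 3, the --recreate-pods flag was removed. A commonly suggested zero-downtime alternative from the Helm documentation is to change a pod-template annotation on every upgrade, so the Deployment performs a normal rolling update instead of having its pods deleted. A minimal sketch (the annotation name rollme is arbitrary):

```yaml
# deployment.yaml, pod template metadata -- sketch, not the full template
spec:
  template:
    metadata:
      annotations:
        # A new random value on each `helm upgrade` changes the pod template,
        # which triggers a rolling update of the Deployment.
        rollme: {{ randAlphaNum 5 | quote }}
```

A deterministic variant is to render the image digest or a config checksum into the annotation instead of a random string, so pods only roll when something actually changed.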
I solved both problems. I had to study GitLab's Auto DevOps feature. It uses the auto-deploy image to create the dockerconfigjson secret and to install/upgrade the custom Helm chart on the Kubernetes cluster. It runs the helm upgrade command to upgrade/install the chart, and in that command it also sets the string image.tag to ${CI_APPLICATION_TAG:-$CI_COMMIT_SHA$CI_COMMIT_TAG}.
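That tag override can be sketched as a pipeline step like this; the release, chart, and namespace names are placeholders, and the fallback expression mirrors the one above:

```shell
# Resolve the image tag the same way as above: prefer an explicit
# application tag, otherwise fall back to the commit SHA/tag. Because the
# tag now changes on every commit, helm upgrade sees a changed pod
# template and rolls out new pods.
TAG="${CI_APPLICATION_TAG:-${CI_COMMIT_SHA}${CI_COMMIT_TAG}}"

helm upgrade release_name chartname \
  --namespace namespace \
  --install \
  --set image.tag="${TAG}"
```

This is also why a :latest tag alone never triggers an upgrade: the rendered manifest is byte-for-byte identical, so Kubernetes has nothing to roll.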
- In the deployment file, I use the image.tag value as follows:
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
- I solved the second problem by creating a docker-registry secret.
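Creating such a secret by hand can look like the sketch below; every value is a placeholder. Using a long-lived GitLab deploy token (read_registry scope) instead of the short-lived CI job token matters here: if the secret holds a job token, pulls start failing once the pipeline ends, which would explain the ImagePullBackOff on pod restarts.

```shell
# All values are placeholders; the secret name must match an entry under
# imagePullSecrets (ranking.image.secrets in values.yaml).
kubectl create secret docker-registry gitlab-registry \
  --namespace <namespace> \
  --docker-server=gitlab.digital-worx.de:5050 \
  --docker-username=<deploy-token-username> \
  --docker-password=<deploy-token> \
  --docker-email=<email>
```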