Helm chart does not upgrade the deployment correctly

How do I fix a Helm chart that does not upgrade the deployment correctly?

I use a custom Helm chart to deploy a project hosted on GitLab to a Google Kubernetes Engine cluster. The deployment itself works fine, but I run into problems in the following two cases.

  1. The Helm chart does not upgrade the deployment on Kubernetes even though the built image is new. My understanding is that it compares the SHA256 digest of the image deployed on Kubernetes with the new image built in the build stage, and if they differ, it starts a new pod with the new image and terminates the old one. But that is not happening. At first I suspected the image pullPolicy, because it was set to IfNotPresent; I tried setting it to Always, but it still does not work (see the diagnostic sketch after this list).
  2. With the image pull policy set to Always, if a pod restarts because of a failure or for some other reason, it fails with an ImagePullBackOff error. I checked the secrets present in the Kubernetes namespace; the dockerconfigjson secret is there, yet it still fails with an authorization error. When I deploy again with a new CI/CD pipeline, it starts working.
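
For reference, that comparison can be checked by hand; a minimal sketch, with the namespace, deployment name, and app label as placeholders for my project:

# Image reference recorded in the Deployment spec:
kubectl -n <namespace> get deployment <deployment-name> \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# Digest of the image the running pod actually pulled:
kubectl -n <namespace> get pods -l app=<deployment-name> \
  -o jsonpath='{.items[0].status.containerStatuses[0].imageID}'

If the image reference in the spec is an unchanged tag such as master:latest, the pod template is identical to the previous release, so Kubernetes has nothing to roll out no matter what the pullPolicy is.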

Error log

Warning  Failed     19m (x4 over 20m)   kubelet Failed to pull image "gitlab.digital-worx.de:5050/asvin/asvin-frontend/master:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://gitlab.digital-worx.de:5050/v2/asvin/asvin-frontend/master/manifests/latest: unauthorized: HTTP Basic: Access denied
Warning  Failed     19m (x4 over 20m)   kubelet            Error: ErrImagePull
Warning  Failed     25s (x87 over 20m)  kubelet            Error: ImagePullBackOff

deployment.yaml

{{- if not .Values.application.initializeCommand -}}
apiVersion: {{ default "extensions/v1beta1" .Values.deploymentApiVersion }}
kind: Deployment
metadata:
  name: {{ template "name" . }}
  annotations:
    {{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
    {{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
  labels:
    app: {{ template "name" . }}
    track: "{{ .Values.application.track }}"
    tier: "{{ .Values.application.tier }}"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
    release: {{ .Release.Name }}
    service: {{ .Values.ranking.service.name }}
spec:
{{- if or .Values.enableSelector (eq (default "extensions/v1beta1" .Values.deploymentApiVersion) "apps/v1") }}
  selector:
    matchLabels:
      app: {{ template "name" . }}
      track: "{{ .Values.application.track }}"
      tier: "{{ .Values.application.tier }}"
      release: {{ .Release.Name }}
      service: {{ .Values.ranking.service.name }}
{{- end }}
  replicas: {{ .Values.replicaCount }}
{{- if .Values.strategyType }}
  strategy:
    type: {{ .Values.strategyType | quote }}
{{- end }}
  template:
    metadata:
      annotations:
        checksum/application-secrets: "{{ .Values.application.secretChecksum }}"
        {{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
        {{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
      labels:
        app: {{ template "name" . }}
        track: "{{ .Values.application.track }}"
        tier: "{{ .Values.application.tier }}"
        release: {{ .Release.Name }}
        service: {{ .Values.ranking.service.name }}
    
    spec:
      volumes:
    {{- if .Values.ranking.configmap }}
    {{end}}
      imagePullSecrets:
{{ toYaml .Values.ranking.image.secrets | indent 10 }}
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.ranking.image.repository }}:{{ .Values.ranking.image.tag }}"
        imagePullPolicy: {{ .Values.ranking.image.pullPolicy }}
    {{- if .Values.application.secretName }}
        envFrom:
        - secretRef:
            name: {{ .Values.application.secretName }}
        {{- end }}
        env:
        - name: INDEXER_URL
          valueFrom:
            secretKeyRef:
              name: {{.Release.Name}}-secret
              key: INDEXER_URL
        volumeMounts:
        ports:
        - name: "{{ .Values.ranking.service.name }}"
          containerPort: {{ .Values.ranking.service.internalPort }}
        livenessProbe:
{{- if eq .Values.livenessProbe.probeType "httpGet" }}
          httpGet:
            path: {{ .Values.livenessProbe.path }}
            scheme: {{ .Values.livenessProbe.scheme }}
            port: {{ .Values.ranking.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "tcpSocket" }}
          tcpSocket:
            port: {{ .Values.ranking.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "exec" }}
          exec:
            command:
{{ toYaml .Values.livenessProbe.command | indent 14 }}
{{- end }}
          initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
        readinessProbe:
{{- if eq .Values.readinessProbe.probeType "httpGet" }}
          httpGet:
            path: {{ .Values.readinessProbe.path }}
            scheme: {{ .Values.readinessProbe.scheme }}
            port: {{ .Values.ranking.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "tcpSocket" }}
          tcpSocket:
            port: {{ .Values.ranking.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "exec" }}
          exec:
            command:
{{ toYaml .Values.readinessProbe.command | indent 14 }}
{{- end }}
          initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
        resources:
{{ toYaml .Values.resources | indent 12 }}
      restartPolicy: Always
      enableServiceLinks: false
status: {}
{{- end -}}

values.yaml

# Default values for chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
strategyType:
enableSelector:
deploymentApiVersion: apps/v1
ranking:
  name: ranking
  image:
    repository: gitlab.iotcrawler.net:4567/ranking/ranking/master
    tag: latest
    pullPolicy: Always
    secrets:
    - name: gitlab-registry-demonstrator-murcia-parking-iotcrawler
  service:
    enabled: true
    annotations: {}
    name: ranking
    type: ClusterIP
    additionalHosts:
    commonName:
    externalPort: 3003
    internalPort: 3003
    production:
      url: parking.ranking.iotcrawler.eu
    staging:
      url: staging.parking.ranking.iotcrawler.eu
  configmap: true
podAnnotations: {}
application:
  track: latest
  tier: web
  migrateCommand:
  initializeCommand:
  secretName:
  secretChecksum:
hpa:
  enabled: false
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
gitlab:
  app:
  env:
  envName:
  envURL:
ingress:
  enabled: true
  url: 
  tls:
    enabled: true
    secretName: ""
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
  modSecurity:
    enabled: false
    secRuleEngine: "DetectionOnly"
    # secRules:
    #   - variable: ""
    #     operator: ""
    #     action: ""
prometheus:
  metrics: false
livenessProbe:
  path: "/"
  initialDelaySeconds: 15
  timeoutSeconds: 15
  scheme: "HTTP"
  probeType: "httpGet"
readinessProbe:
  path: "/"
  initialDelaySeconds: 5
  timeoutSeconds: 3
  scheme: "HTTP"
  probeType: "httpGet"
postgresql:
  enabled: true
  managed: false
  managedClassSelector:
    #   matchLabels:
    #     stack: gitlab (This is an example. The labels should match the labels on the CloudSQLInstanceClass)

resources:
#  limits:
#    cpu: 100m
#    memory: 128Mi
  requests:
#    cpu: 100m
#    memory: 128Mi

## Configure PodDisruptionBudget
## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
#
podDisruptionBudget:
  enabled: false
  # minAvailable: 1
  maxUnavailable: 1

## Configure NetworkPolicy
## ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/
#
networkPolicy:
  enabled: false
  spec:
    podSelector:
      matchLabels: {}
    ingress:
    - from:
      - podSelector:
          matchLabels: {}
      - namespaceSelector:
          matchLabels:
            app.gitlab.com/managed_by: gitlab

workers: {}
  # worker:
  #   replicaCount: 1
  #   terminationGracePeriodSeconds: 60
  #   command:
  #   - /bin/herokuish
  #   - procfile
  #   - start
  #   - worker
  #   preStopCommand:
  #   - /bin/herokuish
  #   - procfile
  #   - start
  #   - stop_worker

Solution

helm upgrade does not recreate the pods unless you tell it to in the upgrade command.

With Helm 2 you can pass --force --recreate-pods to force the pods to be recreated, for example:

helm upgrade release_name chartname --namespace namespace --install --force --recreate-pods

The catch, however, is that you will face downtime. For details, refer to this answer.
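
Note that --recreate-pods was removed in Helm 3. If you are on Helm 3, a rough substitute (a sketch, reusing the placeholder names from the command above) is to restart the rollout yourself after the upgrade:

helm upgrade release_name chartname --namespace namespace --install
kubectl -n namespace rollout restart deployment <deployment-name>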

I solved both of the problems. I had to look into GitLab's Auto DevOps feature. It uses the auto deploy image to create the dockerjson secret and to install/upgrade the custom Helm chart on the Kubernetes cluster. It runs the helm upgrade command to upgrade/install the chart, and in that command it also sets the image.tag string to ${CI_APPLICATION_TAG:-$CI_COMMIT_SHA$CI_COMMIT_TAG}.
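
For illustration, the relevant part of that helm upgrade call looks roughly like this (a sketch: release_name and chart_path are placeholders, and the exact set of flags varies between auto-deploy-image versions):

helm upgrade --install \
  --set image.tag="${CI_APPLICATION_TAG:-$CI_COMMIT_SHA$CI_COMMIT_TAG}" \
  release_name chart_path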

  1. In the deployment file, I use the image.tag value as follows.

    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

  2. I solved the second problem by creating a docker-registry secret, as sketched below.
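
A minimal sketch of that secret, with the registry host taken from the error log above and the credentials and namespace as placeholders:

kubectl -n <namespace> create secret docker-registry gitlab-registry \
  --docker-server=gitlab.digital-worx.de:5050 \
  --docker-username=<deploy-token-username> \
  --docker-password=<deploy-token>

The secret name then has to match the entry referenced by imagePullSecrets (ranking.image.secrets in values.yaml).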
