Feature: Chart should use tpl on values where possible #52

@mohag

Description

Issue submitter TODO list

  • I've looked up my issue in FAQ
  • I've searched for already existing issues here (legacy) and here
  • I've tried installing the latest charts and the issue still persists there
  • I'm running a supported version of the application & chart which is listed here

Describe the bug (actual behavior)

When using the chart as a dependency of another chart (in my case, one deploying a Strimzi cluster together with a UI for it), it is often useful to use template strings in the values passed to this chart.

For this to work, however, those values need to have the tpl function run on them; otherwise the raw template string ends up in the rendered output and causes errors when the chart is deployed.
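For reference, charts commonly handle this with a small named helper that runs tpl on a value before emitting it. A minimal sketch of such a helper (the name kafka-ui.tplValue is hypothetical, not an existing template in this chart):

```yaml
{{/*
Render a value that may contain template strings.
Usage: {{ include "kafka-ui.tplValue" (dict "value" .Values.someKey "context" $) }}
*/}}
{{- define "kafka-ui.tplValue" -}}
  {{- if typeIs "string" .value }}
    {{- tpl .value .context }}
  {{- else }}
    {{- tpl (.value | toYaml) .context }}
  {{- end }}
{{- end -}}
```

A template would then emit e.g. `name: {{ include "kafka-ui.tplValue" (dict "value" .Values.yamlApplicationSecret.name "context" $) }}` instead of interpolating the value directly.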

Expected behavior

Templates used in most places in the dependent chart's values.yaml (to pass settings to the kafka-ui chart) should be rendered via tpl instead of producing errors.

Your installation details

Chart version 1.5.1.

Steps to reproduce

Pass template strings in the values.yaml and render the chart.

Example:
test-kafka-ui-values.yaml

testBase: test-string
yamlApplicationSecret:
  name: '{{ .Values.testBase }}-kafka-ui-config'
  key: config.yaml

envs:
  secretMappings:
    KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG:
      name: '{{ .Values.testBase }}-kafka-ui'
      keyName: sasl.jaas.config
env:
  - name: KAFKA_CLUSTERS_0_NAME
    value: '{{ .Values.testBase }}'
  - name: KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS
    value: '{{ .Values.testBase }}-kafka:9092'
  - name: KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL
    value: SASL_PLAINTEXT
  - name: KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM
    value: SCRAM-SHA-512
  - name: KAFKA_CLUSTERS_0_PROPERTIES_PROTOCOL
    value: SASL
ingress:
  enabled: false
  host: render-test.example.com
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  ingressClassName: "nginx"
  tls:
    enabled: true
    secretName: '{{ .Values.testBase }}-kafka-public-tls'

This currently gives an error (--debug allows inspecting the rendered YAML):

helm template test-render charts/kafka-ui -f test-kafka-ui-values.yaml --debug

Screenshots

No response

Logs

For the example:

---
# Source: kafka-ui/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-render-kafka-ui
  namespace: default
  labels:
    helm.sh/chart: kafka-ui-1.5.1
    app.kubernetes.io/name: kafka-ui
    app.kubernetes.io/instance: test-render
    app.kubernetes.io/version: "v1.3.0"
    app.kubernetes.io/managed-by: Helm

---
# Source: kafka-ui/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: test-render-kafka-ui
  namespace: default
  labels:
    helm.sh/chart: kafka-ui-1.5.1
    app.kubernetes.io/name: kafka-ui
    app.kubernetes.io/instance: test-render
    app.kubernetes.io/version: "v1.3.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: kafka-ui
    app.kubernetes.io/instance: test-render

---
# Source: kafka-ui/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-render-kafka-ui
  namespace: default
  labels:
    helm.sh/chart: kafka-ui-1.5.1
    app.kubernetes.io/name: kafka-ui
    app.kubernetes.io/instance: test-render
    app.kubernetes.io/version: "v1.3.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: kafka-ui
      app.kubernetes.io/instance: test-render
  template:
    metadata:
      annotations:
        checksum/config: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
        checksum/configFromValues: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
        checksum/secret: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
      labels:
        app.kubernetes.io/name: kafka-ui
        app.kubernetes.io/instance: test-render
    spec:
      serviceAccountName: test-render-kafka-ui
      securityContext:
        {}
      containers:
        - name: kafka-ui
          securityContext:
            {}
          image: ghcr.io/kafbat/kafka-ui:v1.3.0
          imagePullPolicy: IfNotPresent
          env:
            - name: KAFKA_CLUSTERS_0_NAME
              value: '{{ .Values.testBase }}'
            - name: KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS
              value: '{{ .Values.testBase }}-kafka:9092'
            - name: KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL
              value: SASL_PLAINTEXT
            - name: KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM
              value: SCRAM-SHA-512
            - name: KAFKA_CLUSTERS_0_PROPERTIES_PROTOCOL
              value: SASL
            - name: KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG
              valueFrom:
                secretKeyRef:
                  name: {{ .Values.testBase }}-kafka-ui
                  key: sasl.jaas.config
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /actuator/health
              port: http
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            httpGet:
              path: /actuator/health
              port: http
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 10
          resources:
            {}
Error: YAML parse error on kafka-ui/templates/deployment.yaml: error converting YAML to JSON: yaml: line 50: did not find expected key
helm.go:86: 2025-08-29 12:56:12.3143595 +0200 SAST m=+0.163970601 [debug] error converting YAML to JSON: yaml: line 50: did not find expected key
YAML parse error on kafka-ui/templates/deployment.yaml
helm.sh/helm/v3/pkg/releaseutil.(*manifestFile).sort
        helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:144
helm.sh/helm/v3/pkg/releaseutil.SortManifests
        helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:104
helm.sh/helm/v3/pkg/action.(*Configuration).renderResources
        helm.sh/helm/v3/pkg/action/action.go:168
helm.sh/helm/v3/pkg/action.(*Install).RunWithContext
        helm.sh/helm/v3/pkg/action/install.go:316
main.runInstall
        helm.sh/helm/v3/cmd/helm/install.go:317
main.newTemplateCmd.func2
        helm.sh/helm/v3/cmd/helm/template.go:95
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/[email protected]/command.go:985
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/[email protected]/command.go:1117
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/[email protected]/command.go:1041
main.main
        helm.sh/helm/v3/cmd/helm/helm.go:85
runtime.main
        runtime/proc.go:272
runtime.goexit
        runtime/asm_amd64.s:1700

Additional context

#6 added templating for the config, but it would be useful to have this in many more places.

I want to submit a PR for the parts that affect me, but I think having an issue for this would be useful for others as well.
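As an illustration of the kind of change such a PR could make, the secret reference in the deployment template could wrap the user-supplied name in tpl. This is a sketch, assuming the chart iterates envs.secretMappings with range (the surrounding structure is an assumption, not the chart's actual code):

```yaml
{{/* Hypothetical excerpt from templates/deployment.yaml */}}
{{- range $name, $ref := .Values.envs.secretMappings }}
- name: {{ $name }}
  valueFrom:
    secretKeyRef:
      # tpl renders any template string in the value against the root context
      name: {{ tpl $ref.name $ | quote }}
      key: {{ $ref.keyName }}
{{- end }}
```

With the reproduction values above, `{{ .Values.testBase }}-kafka-ui` would then render as `test-string-kafka-ui` instead of leaking the raw template into the manifest.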

Metadata

Labels: status/triage (issues pending maintainers triage)