
feat: add a way to sync Kubernetes manifests in Omni #2231

Open

Unix4ever wants to merge 1 commit into siderolabs:main from Unix4ever:manifests

Conversation

@Unix4ever
Member

Manifests support two modes:

  • `FULL` - Omni always keeps the manifest in sync.
  • `ONE_TIME` - Omni applies the manifest only if it doesn't exist yet. If the manifest is removed by hand and then changed in Omni, it will be applied again.

Manifests are applied using server-side apply. Omni now has two inventories: `omni-internal-inventory` and `omni-user-inventory`:

  • The user inventory is used for user-managed manifests.
  • The internal inventory is used for the manifests created by Omni controllers (workloadproxy, advanced healthcheck service).

Manifests also support setting a namespace on all namespaced resources. This can be useful for huge manifest files that are supplied without a namespace (similar to `kubectl apply -n namespace -f manifest.yaml`).
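
As a rough sketch of these semantics - not the PR's actual controller code; the `Mode` type, the field-manager string, and the helper below are assumptions - applying a single object with a client-go dynamic client could look like this:

```go
// Hedged illustration only: the Mode type, the field manager name, and this
// helper are assumptions, not code from the PR.
package manifests

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/client-go/dynamic"
)

// Mode mirrors the FULL/ONE_TIME behavior described above.
type Mode int

const (
	ModeFull Mode = iota
	ModeOneTime
)

func applyManifest(ctx context.Context, dr dynamic.ResourceInterface, obj *unstructured.Unstructured, mode Mode, defaultNamespace string) error {
	// Default the namespace for namespaced resources supplied without one,
	// similar to `kubectl apply -n <namespace> -f manifest.yaml`
	// (scope checking via a RESTMapper is omitted in this sketch).
	if obj.GetNamespace() == "" && defaultNamespace != "" {
		obj.SetNamespace(defaultNamespace)
	}

	const fieldManager = "omni-user-inventory" // hypothetical field manager name

	if mode == ModeOneTime {
		// ONE_TIME: create the object only if it doesn't exist yet.
		if _, err := dr.Get(ctx, obj.GetName(), v1.GetOptions{}); err == nil {
			return nil // already present, leave it alone
		} else if !apierrors.IsNotFound(err) {
			return err
		}

		_, err := dr.Create(ctx, obj, v1.CreateOptions{FieldManager: fieldManager})

		return err
	}

	// FULL: keep the object in sync via server-side apply.
	_, err := dr.Apply(ctx, obj.GetName(), obj, v1.ApplyOptions{FieldManager: fieldManager, Force: true})

	return err
}
```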

Member

@Orzelius left a comment


Not so much a code review as just a bunch of questions from my side.

github-project-automation bot moved this from In Review to Approved in Planning on Feb 4, 2026
@rothgar
Member

rothgar commented Feb 4, 2026

How is someone intended to use this initially? Is this only part of cluster templates or can it be added to any cluster in Omni?

@Unix4ever
Member Author

> How is someone intended to use this initially? Is this only part of cluster templates or can it be added to any cluster in Omni?

I'm going to make it part of the cluster templates in the next PR.
I will also move the workloadproxy manifests to this system.

@Unix4ever
Member Author

Unix4ever commented Feb 6, 2026

Pretty much rewrote the controller. Made the status updates more frequent and fixed more issues there:

  • mappers for the LoadbalancerStatus and ClusterStatus were incorrect.
  • reduced the prune/rollout timeouts to make the controller retry and update the status.

The status now contains more phases, as well as information about each resource being created.

Example:

```yaml
metadata:
    namespace: default
    type: ClusterKubernetesManifestsStatuses.omni.sidero.dev
    id: talos-default
    version: 34
    owner: ClusterKubernetesManifestsStatusController
    phase: running
    created: 2026-02-06T12:38:46Z
    updated: 2026-02-06T13:20:00Z
spec:
    manifests:
        default_nginx-deployment_apps_Deployment:
            phase: 3
            lasterror: ""
            mode: 1
            kind: Deployment
            name: nginx-deployment
            namespace: default
    errors: {}
    outofsync: 0
    total: 1
```
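
Judging from the example above, the per-manifest key (`default_nginx-deployment_apps_Deployment`) appears to be the namespace, name, API group, and kind joined with underscores. A hypothetical helper that builds such a key (the PR's real implementation may differ):

```go
// statusKey reconstructs the key format seen in the status example above
// (namespace_name_group_Kind). This helper is an assumption for illustration,
// not code from the PR.
package manifests

import (
	"strings"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func statusKey(obj *unstructured.Unstructured) string {
	gvk := obj.GroupVersionKind()

	return strings.Join([]string{obj.GetNamespace(), obj.GetName(), gvk.Group, gvk.Kind}, "_")
}
```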


compressedData := doCompress(data, config)

x.Data = ""
Member


What if we compress them unconditionally? Our old resource compression was only done that way for the sake of backwards compatibility. Maybe we can just do this in a simpler way here.

Member Author


There is more to it than that: it also skips compression when the manifest is small. So I thought it makes sense to use the same approach here.
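
For reference, a rough sketch of that size-gated behavior - the threshold value and the use of gzip are assumptions here, not Omni's actual codec or cutoff:

```go
// doCompressIfLarge sketches the size-gated behavior discussed above: small
// manifests are stored as-is, larger ones are compressed. The threshold and
// the gzip codec are assumptions for illustration only.
package manifests

import (
	"bytes"
	"compress/gzip"
)

const compressionThreshold = 2048 // hypothetical cutoff in bytes

func doCompressIfLarge(data []byte) (out []byte, compressed bool, err error) {
	if len(data) < compressionThreshold {
		return data, false, nil
	}

	var buf bytes.Buffer

	w := gzip.NewWriter(&buf)

	if _, err = w.Write(data); err != nil {
		return nil, false, err
	}

	if err = w.Close(); err != nil {
		return nil, false, err
	}

	return buf.Bytes(), true, nil
}
```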

string namespace = 6;
}

map<string, ManifestStatus> manifests = 1;
Member


Curious, does this mean there is no apply order between manifests? I guess it'd probably work anyway, since our code would keep trying to apply them, but normally k8s manifests can have an ordering - for example, creating a CRD and then a CR. If so, could we maybe turn these into slices instead?

Member Author

@Unix4ever Feb 9, 2026


This is the status resource.

The manifests themselves are ordered: separate KubernetesManifest resources are ordered by COSI using their IDs, and it's also possible to put many manifests into a single resource delimited by `---` - in that case they form a slice.
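
As a hedged illustration of the multi-document case (not necessarily the PR's decoder), splitting a `---`-delimited manifest into an ordered slice could look like this:

```go
// decodeManifests sketches splitting a multi-document YAML manifest
// (documents separated by ---) into an ordered slice of objects.
// The PR may implement this differently; this is only an illustration.
package manifests

import (
	"bytes"
	"errors"
	"io"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	utilyaml "k8s.io/apimachinery/pkg/util/yaml"
)

func decodeManifests(data []byte) ([]*unstructured.Unstructured, error) {
	decoder := utilyaml.NewYAMLOrJSONDecoder(bytes.NewReader(data), 4096)

	var objects []*unstructured.Unstructured

	for {
		var raw map[string]any

		if err := decoder.Decode(&raw); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}

			return nil, err
		}

		if len(raw) == 0 {
			continue // skip empty documents
		}

		objects = append(objects, &unstructured.Unstructured{Object: raw})
	}

	return objects, nil
}
```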


// LabelSystemManifest marks the manifest as the system manifest, so it shouldn't be editable by the user.
// tsgen:LabelSystemManifest
LabelSystemManifest = SystemLabelPrefix + "system-manifest"
Member Author


Show the system manifest flag inside the ClusterKubernetesManifestsStatus.
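
For context, a hedged sketch of how a COSI resource could be checked for this label (the helper and the assumed `omni.sidero.dev/` prefix are illustrative, not code from the PR):

```go
// isSystemManifest sketches checking a COSI resource for the system-manifest
// label defined above. The helper and the label prefix are assumptions;
// only the "system-manifest" suffix comes from the PR.
package manifests

import (
	"github.com/cosi-project/runtime/pkg/resource"
)

// labelSystemManifest mirrors LabelSystemManifest from the PR, assuming
// SystemLabelPrefix resolves to "omni.sidero.dev/".
const labelSystemManifest = "omni.sidero.dev/system-manifest"

func isSystemManifest(res resource.Resource) bool {
	_, isSystem := res.Metadata().Labels().Get(labelSystemManifest)

	return isSystem
}
```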


if len(oneTimeManifests) > 0 {
if err = ctrl.syncOneTime(ctx, logger, oneTimeManifests, client, clusterKubernetesManifestsStatus); err != nil {
errs = errors.Join(errs, err)
Member Author


wrap the error to add more context


for inventory, objects := range manifestsByInventory {
if err = ctrl.sync(ctx, logger, objects, inventory, client, clusterKubernetesManifestsStatus); err != nil {
errs = errors.Join(errs, err)
Member Author


wrap the error to add more context

KubernetesManifestSpec.Mode mode = 3;
string kind = 4;
string name = 5;
string namespace = 6;
Member Author


add last modified date.

Comment on lines +619 to +620
return apierrors.IsInvalid(err) || apierrors.IsBadRequest(err) || apierrors.IsForbidden(err) ||
apierrors.IsRequestEntityTooLargeError(err) || webhookError(err) || validationError(err) || strings.Contains(err.Error(), "reconcile failed")
Member Author


Consider making this check less restrictive.


//nolint:exhaustive
switch result.Type {
case event.PruneType:
Member Author


Try to use inventory to track prune/delete events.

continue
}

_, err = dr.Create(ctx, manifest, v1.CreateOptions{FieldManager: constants.KubernetesFieldManagerName})
Member Author


Suggested change
_, err = dr.Create(ctx, manifest, v1.CreateOptions{FieldManager: constants.KubernetesFieldManagerName})
_, err = dr.Apply(ctx, manifest, v1.CreateOptions{FieldManager: constants.KubernetesFieldManagerName})
