
Add PowerVS support #303

Open
Karthik-K-N wants to merge 1 commit into IBM-Cloud:release-1.34 from Karthik-K-N:powervs-support

Conversation

@Karthik-K-N
Contributor

This PR adds support for IBM Cloud PowerVS to cloud-provider-ibm.

The only change is to initialize a Node by fetching the details of the corresponding VM from the PowerVS workspace rather than from VPC. To let the CCM be configured for PowerVS, a few configuration variables were added.
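The PowerVS-versus-VPC decision described above can be sketched as follows. This is a minimal illustration, not the PR's actual code; `Config` and `providerBackend` are hypothetical names standing in for the real configuration struct and dispatch logic:

```go
package main

import "fmt"

// Config mirrors the new PowerVS settings added in this PR plus one
// existing VPC field (field names here are illustrative).
type Config struct {
	PowerVSCloudInstanceID string
	PowerVSRegion          string
	PowerVSZone            string
	G2VpcName              string
}

// providerBackend decides where node metadata should come from: when a
// PowerVS workspace ID is configured, node details are fetched from the
// PowerVS workspace; otherwise the existing VPC path is used.
func providerBackend(c Config) string {
	if c.PowerVSCloudInstanceID != "" {
		return "powervs"
	}
	return "vpc"
}

func main() {
	powerVS := Config{
		PowerVSCloudInstanceID: "10b1000b-da8d-4e18-ad1f-6b2a56a8c130",
		PowerVSRegion:          "osa",
		PowerVSZone:            "osa21",
	}
	fmt.Println(providerBackend(powerVS))                     // powervs
	fmt.Println(providerBackend(Config{G2VpcName: "my-vpc"})) // vpc
}
```

With the PowerVS fields unset, the fallback keeps the VPC workflow untouched, which is the key compatibility claim of this PR.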

The CCM workflow for VPC remains the same, with no changes.

To configure the CCM for PowerVS, we need to set a few additional configuration settings:

powerVSCloudInstanceID = <PowerVS_workspace_ID>
powerVSRegion = <PowerVS_Region>
powerVSZone = <PowerVS_Zone>
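For reference, these settings go into the cloud config file the CCM already loads (`--cloud-config=/etc/cloud/ibmpowervs.conf` in the logs below). A sketch, assuming the provider's existing INI-style layout; the non-PowerVS keys shown are illustrative, based on the values visible in the logs:

```ini
# /etc/cloud/ibmpowervs.conf (sketch)
[provider]
cluster-default-provider = g2
clusterID = <cluster_ID>
accountID = <account_ID>
g2Credentials = /etc/ibm-secret/ibmcloud_api_key
powerVSCloudInstanceID = <PowerVS_workspace_ID>
powerVSRegion = <PowerVS_Region>
powerVSZone = <PowerVS_Zone>
```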

We have already made similar changes on top of this provider in the OpenShift repo, cloud-provider-powervs, and it has been working fine for several releases.

Below are sample outputs from running the CCM with PowerVS support on a cluster.

karthikkn@Karthiks-MacBook-Pro cluster-api-provider-ibmcloud % kubectl get nodes
NAME                                STATUS     ROLES           AGE     VERSION
ibm-powervs-1-control-plane-9m7m9   NotReady   control-plane   16m     v1.33.0
ibm-powervs-1-control-plane-f2w9k   NotReady   control-plane   2m22s   v1.33.0
ibm-powervs-1-control-plane-kljgw   NotReady   control-plane   8m17s   v1.33.0
ibm-powervs-1-md-0-x225t-t6m7c      NotReady   <none>          11m     v1.33.0



karthikkn@Karthiks-MacBook-Pro cluster-api-provider-ibmcloud % kubectl -n kube-system get pods
NAME                                                        READY   STATUS    RESTARTS   AGE
coredns-674b8bbfcf-67lw4                                    0/1     Pending   0          17m
coredns-674b8bbfcf-bs2cg                                    0/1     Pending   0          17m
etcd-ibm-powervs-1-control-plane-9m7m9                      1/1     Running   0          17m
etcd-ibm-powervs-1-control-plane-f2w9k                      1/1     Running   0          2m52s
etcd-ibm-powervs-1-control-plane-kljgw                      1/1     Running   0          8m52s
ibmpowervs-cloud-controller-manager-dlvws                   1/1     Running   0          17m
ibmpowervs-cloud-controller-manager-lskzp                   1/1     Running   0          8m54s
ibmpowervs-cloud-controller-manager-n6zqz                   1/1     Running   0          3m


karthikkn@Karthiks-MacBook-Pro cluster-api-provider-ibmcloud % kubectl -n kube-system logs ibmpowervs-cloud-controller-manager-dlvws

I0910 14:12:41.286416       1 flags.go:64] FLAG: --allocate-node-cidrs="false"
I0910 14:12:41.286590       1 flags.go:64] FLAG: --allow-untagged-cloud="false"
I0910 14:12:41.286598       1 flags.go:64] FLAG: --authentication-kubeconfig=""
I0910 14:12:41.286611       1 flags.go:64] FLAG: --authentication-skip-lookup="false"
I0910 14:12:41.286620       1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s"
I0910 14:12:41.286633       1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false"
I0910 14:12:41.286641       1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]"
I0910 14:12:41.286661       1 flags.go:64] FLAG: --authorization-kubeconfig=""
I0910 14:12:41.286671       1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s"
I0910 14:12:41.286682       1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s"
I0910 14:12:41.286691       1 flags.go:64] FLAG: --bind-address="0.0.0.0"
I0910 14:12:41.286705       1 flags.go:64] FLAG: --cert-dir=""
I0910 14:12:41.286714       1 flags.go:64] FLAG: --cidr-allocator-type="RangeAllocator"
I0910 14:12:41.286724       1 flags.go:64] FLAG: --client-ca-file=""
I0910 14:12:41.286733       1 flags.go:64] FLAG: --cloud-config="/etc/cloud/ibmpowervs.conf"
I0910 14:12:41.286744       1 flags.go:64] FLAG: --cloud-provider="ibm"
I0910 14:12:41.286756       1 flags.go:64] FLAG: --cluster-cidr=""
I0910 14:12:41.286769       1 flags.go:64] FLAG: --cluster-name="kubernetes"
I0910 14:12:41.286783       1 flags.go:64] FLAG: --concurrent-node-syncs="1"
I0910 14:12:41.286795       1 flags.go:64] FLAG: --concurrent-service-syncs="1"
I0910 14:12:41.286807       1 flags.go:64] FLAG: --configure-cloud-routes="true"
I0910 14:12:41.286816       1 flags.go:64] FLAG: --contention-profiling="false"
I0910 14:12:41.286824       1 flags.go:64] FLAG: --controller-start-interval="0s"
I0910 14:12:41.286832       1 flags.go:64] FLAG: --controllers="[*]"
I0910 14:12:41.286842       1 flags.go:64] FLAG: --disable-http2-serving="false"
I0910 14:12:41.286851       1 flags.go:64] FLAG: --enable-leader-migration="false"
I0910 14:12:41.286862       1 flags.go:64] FLAG: --external-cloud-volume-plugin=""
I0910 14:12:41.286870       1 flags.go:64] FLAG: --feature-gates=""
I0910 14:12:41.286883       1 flags.go:64] FLAG: --help="false"
I0910 14:12:41.286891       1 flags.go:64] FLAG: --http2-max-streams-per-connection="0"
I0910 14:12:41.286901       1 flags.go:64] FLAG: --kube-api-burst="30"
I0910 14:12:41.286908       1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0910 14:12:41.286918       1 flags.go:64] FLAG: --kube-api-qps="20"
I0910 14:12:41.286929       1 flags.go:64] FLAG: --kubeconfig=""
I0910 14:12:41.286938       1 flags.go:64] FLAG: --leader-elect="true"
I0910 14:12:41.286946       1 flags.go:64] FLAG: --leader-elect-lease-duration="15s"
I0910 14:12:41.286956       1 flags.go:64] FLAG: --leader-elect-renew-deadline="10s"
I0910 14:12:41.286966       1 flags.go:64] FLAG: --leader-elect-resource-lock="leases"
I0910 14:12:41.286976       1 flags.go:64] FLAG: --leader-elect-resource-name="cloud-controller-manager"
I0910 14:12:41.286984       1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system"
I0910 14:12:41.286992       1 flags.go:64] FLAG: --leader-elect-retry-period="2s"
I0910 14:12:41.287002       1 flags.go:64] FLAG: --leader-migration-config=""
I0910 14:12:41.287012       1 flags.go:64] FLAG: --log-flush-frequency="5s"
I0910 14:12:41.287024       1 flags.go:64] FLAG: --master=""
I0910 14:12:41.287033       1 flags.go:64] FLAG: --min-resync-period="12h0m0s"
I0910 14:12:41.287041       1 flags.go:64] FLAG: --node-monitor-period="5s"
I0910 14:12:41.287049       1 flags.go:64] FLAG: --node-status-update-frequency="5m0s"
I0910 14:12:41.287058       1 flags.go:64] FLAG: --node-sync-period="0s"
I0910 14:12:41.287067       1 flags.go:64] FLAG: --permit-address-sharing="false"
I0910 14:12:41.287076       1 flags.go:64] FLAG: --permit-port-sharing="false"
I0910 14:12:41.287085       1 flags.go:64] FLAG: --profiling="true"
I0910 14:12:41.287094       1 flags.go:64] FLAG: --requestheader-allowed-names="[]"
I0910 14:12:41.287106       1 flags.go:64] FLAG: --requestheader-client-ca-file=""
I0910 14:12:41.287119       1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]"
I0910 14:12:41.287130       1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]"
I0910 14:12:41.287143       1 flags.go:64] FLAG: --requestheader-uid-headers="[]"
I0910 14:12:41.287153       1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]"
I0910 14:12:41.287166       1 flags.go:64] FLAG: --route-reconciliation-period="10s"
I0910 14:12:41.287176       1 flags.go:64] FLAG: --secure-port="10258"
I0910 14:12:41.287185       1 flags.go:64] FLAG: --tls-cert-file=""
I0910 14:12:41.287194       1 flags.go:64] FLAG: --tls-cipher-suites="[]"
I0910 14:12:41.287203       1 flags.go:64] FLAG: --tls-min-version=""
I0910 14:12:41.287212       1 flags.go:64] FLAG: --tls-private-key-file=""
I0910 14:12:41.287220       1 flags.go:64] FLAG: --tls-sni-cert-key="[]"
I0910 14:12:41.287231       1 flags.go:64] FLAG: --use-service-account-credentials="true"
I0910 14:12:41.287239       1 flags.go:64] FLAG: --v="2"
I0910 14:12:41.287248       1 flags.go:64] FLAG: --version="false"
I0910 14:12:41.287256       1 flags.go:64] FLAG: --vmodule=""
I0910 14:12:41.287264       1 flags.go:64] FLAG: --webhook-bind-address="0.0.0.0"
I0910 14:12:41.287273       1 flags.go:64] FLAG: --webhook-cert-dir=""
I0910 14:12:41.287281       1 flags.go:64] FLAG: --webhook-secure-port="10260"
I0910 14:12:41.287290       1 flags.go:64] FLAG: --webhook-tls-cert-file=""
I0910 14:12:41.287298       1 flags.go:64] FLAG: --webhook-tls-private-key-file=""
I0910 14:12:41.287306       1 flags.go:64] FLAG: --webhooks="[]"
I0910 14:12:42.772630       1 serving.go:386] Generated self-signed cert in-memory
I0910 14:12:43.327559       1 serving.go:386] Generated self-signed cert in-memory
W0910 14:12:43.327609       1 client_config.go:667] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0910 14:12:43.849714       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0910 14:12:43.849735       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0910 14:12:43.849744       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0910 14:12:43.849755       1 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0910 14:12:43.849766       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0910 14:12:43.857246       1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController
I0910 14:12:43.859494       1 ibm.go:330] RegisterCloudProvider(ibm, &{0xc0009354a0}, [/bin/ibm-cloud-controller-manager --v=2 --cloud-provider=ibm --cloud-config=/etc/cloud/ibmpowervs.conf --use-service-account-credentials=true])
W0910 14:12:43.859960       1 client_config.go:667] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0910 14:12:43.860251       1 ibm_metadata_service.go:83] MetadataService: created for provider: {ProviderID: InternalIP: ExternalIP: Region: Zone: InstanceType: ClusterID:ibm-powervs-1 AccountID:c265c8cefda241ca9c107adcbbacaa84 ProviderType:g2 G2WorkerServiceAccountID:c265c8cefda241ca9c107adcbbacaa84 G2VpcName: G2Credentials:/etc/ibm-secret/ibmcloud_api_key G2ResourceGroupName: G2VpcSubnetNames: G2EndpointOverride: IamEndpointOverride: RmEndpointOverride: IKSPrivateEndpointHostname: CloudCredentials: PowerVSEndpointOverride: RcEndpointOverride: PowerVSCloudInstanceID:10b1000b-da8d-4e18-ad1f-6b2a56a8c130 PowerVSCloudInstanceName: PowerVSRegion:osa PowerVSZone:osa21}
I0910 14:12:43.860350       1 ibm.go:304] Initialize VPC with cloud config: {ProviderID: InternalIP: ExternalIP: Region: Zone: InstanceType: ClusterID:ibm-powervs-1 AccountID:c265c8cefda241ca9c107adcbbacaa84 ProviderType:g2 G2WorkerServiceAccountID:c265c8cefda241ca9c107adcbbacaa84 G2VpcName: G2Credentials:/etc/ibm-secret/ibmcloud_api_key G2ResourceGroupName: G2VpcSubnetNames: G2EndpointOverride: IamEndpointOverride: RmEndpointOverride: IKSPrivateEndpointHostname: CloudCredentials: PowerVSEndpointOverride: RcEndpointOverride: PowerVSCloudInstanceID:10b1000b-da8d-4e18-ad1f-6b2a56a8c130 PowerVSCloudInstanceName: PowerVSRegion:osa PowerVSZone:osa21}
I0910 14:12:43.860382       1 ibm_vpc_cloud.go:107] Reading cloud credential from: /etc/ibm-secret/ibmcloud_api_key
W0910 14:12:43.860451       1 ibm.go:308] failed initializing VPC: missing required cloud configuration setting: region
I0910 14:12:43.860480       1 controllermanager.go:160] Version: v0.0.0-master+$Format:%H$
I0910 14:12:43.865477       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0910 14:12:43.865512       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0910 14:12:43.865514       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0910 14:12:43.865534       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
I0910 14:12:43.865547       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0910 14:12:43.865554       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0910 14:12:43.865872       1 tlsconfig.go:203] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1757513562\" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer=\"localhost-ca@1757513561\" (2025-09-10 13:12:41 +0000 UTC to 2026-09-10 13:12:41 +0000 UTC (now=2025-09-10 14:12:43.865830064 +0000 UTC))"
I0910 14:12:43.866338       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1757513563\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1757513563\" (2025-09-10 13:12:43 +0000 UTC to 2028-09-10 13:12:43 +0000 UTC (now=2025-09-10 14:12:43.866295029 +0000 UTC))"
I0910 14:12:43.866383       1 secure_serving.go:211] Serving securely on [::]:10258
I0910 14:12:43.866466       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I0910 14:12:43.866976       1 leaderelection.go:257] attempting to acquire leader lease kube-system/cloud-controller-manager...
I0910 14:12:43.867183       1 reflector.go:436] "Caches populated" type="*v1.ConfigMap" reflector="pkg/mod/k8s.io/client-go@v0.34.0/tools/cache/reflector.go:290"
I0910 14:12:43.867189       1 reflector.go:436] "Caches populated" type="*v1.ConfigMap" reflector="pkg/mod/k8s.io/client-go@v0.34.0/tools/cache/reflector.go:290"
I0910 14:12:43.867237       1 reflector.go:436] "Caches populated" type="*v1.ConfigMap" reflector="pkg/mod/k8s.io/client-go@v0.34.0/tools/cache/reflector.go:290"
I0910 14:12:43.875977       1 leaderelection.go:271] successfully acquired lease kube-system/cloud-controller-manager
I0910 14:12:43.876166       1 event.go:389] "Event occurred" object="kube-system/cloud-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="ibm-powervs-1-control-plane-9m7m9_165dc890-55aa-43ae-b3ee-403718067797 became leader"
I0910 14:12:43.878785       1 ibm.go:190] Initializing Informers
I0910 14:12:43.878849       1 ibm_vpc_cloud.go:291] Watch the cloud credential file: /etc/ibm-secret/ibmcloud_api_key
I0910 14:12:43.878942       1 controllermanager.go:310] Starting "service-lb-controller"
I0910 14:12:43.882950       1 ibm_task.go:69] Starting cloud task: cloud.ibm.com/cloud-provider-ibm/ibm.MonitorLoadBalancers
I0910 14:12:43.882996       1 controllermanager.go:329] Started "service-lb-controller"
I0910 14:12:43.883024       1 controllermanager.go:310] Starting "node-route-controller"
I0910 14:12:43.883027       1 ibm_task.go:90] Running cloud task: cloud.ibm.com/cloud-provider-ibm/ibm.MonitorLoadBalancers
W0910 14:12:43.883038       1 core.go:111] --configure-cloud-routes is set, but cloud provider does not support routes. Will not configure cloud provider routes.
W0910 14:12:43.883054       1 controllermanager.go:317] Skipping "node-route-controller"
I0910 14:12:43.883072       1 controllermanager.go:310] Starting "cloud-node-controller"
I0910 14:12:43.883171       1 controller.go:235] Starting service controller
I0910 14:12:43.883195       1 shared_informer.go:349] "Waiting for caches to sync" controller="service"
I0910 14:12:43.887603       1 controllermanager.go:329] Started "cloud-node-controller"
I0910 14:12:43.887618       1 controllermanager.go:310] Starting "cloud-node-lifecycle-controller"
I0910 14:12:43.887722       1 node_controller.go:176] Sending events to api server.
I0910 14:12:43.887799       1 node_controller.go:185] Waiting for informer caches to sync
I0910 14:12:43.889304       1 controllermanager.go:329] Started "cloud-node-lifecycle-controller"
I0910 14:12:43.889434       1 node_lifecycle_controller.go:112] Sending events to api server
I0910 14:12:43.891510       1 reflector.go:436] "Caches populated" type="*v1.Service" reflector="pkg/mod/k8s.io/client-go@v0.34.0/tools/cache/reflector.go:290"
I0910 14:12:43.891556       1 reflector.go:436] "Caches populated" type="*v1.Node" reflector="pkg/mod/k8s.io/client-go@v0.34.0/tools/cache/reflector.go:290"
I0910 14:12:43.966577       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0910 14:12:43.966623       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0910 14:12:43.966577       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
I0910 14:12:43.966941       1 tlsconfig.go:181] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubernetes\" [] issuer=\"<self>\" (2025-09-10 14:00:01 +0000 UTC to 2035-09-08 14:05:01 +0000 UTC (now=2025-09-10 14:12:43.966916166 +0000 UTC))"
I0910 14:12:43.966966       1 tlsconfig.go:181] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubernetes\" [] issuer=\"<self>\" (2025-09-10 14:00:01 +0000 UTC to 2035-09-08 14:05:01 +0000 UTC (now=2025-09-10 14:12:43.966952515 +0000 UTC))"
I0910 14:12:43.967269       1 tlsconfig.go:203] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1757513562\" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer=\"localhost-ca@1757513561\" (2025-09-10 13:12:41 +0000 UTC to 2026-09-10 13:12:41 +0000 UTC (now=2025-09-10 14:12:43.967253888 +0000 UTC))"
I0910 14:12:43.967557       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1757513563\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1757513563\" (2025-09-10 13:12:43 +0000 UTC to 2028-09-10 13:12:43 +0000 UTC (now=2025-09-10 14:12:43.967541107 +0000 UTC))"
I0910 14:12:43.983855       1 shared_informer.go:356] "Caches are synced" controller="service"
I0910 14:12:43.983914       1 controller.go:723] Syncing backends for all LB services.
I0910 14:12:43.983939       1 controller.go:727] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0910 14:12:43.988130       1 node_controller.go:429] Initializing node ibm-powervs-1-control-plane-9m7m9 with cloud provider
I0910 14:12:43.988236       1 ibm_metadata_service.go:130] MetadataService: node ibm-powervs-1-control-plane-9m7m9 not in cache (applyNetUnAvail:false, cni:)
I0910 14:12:43.988242       1 node_controller.go:271] Update 1 nodes status took 79.445µs.
W0910 14:12:43.990573       1 ibm_metadata_service.go:210] MetadataService: node ibm-powervs-1-control-plane-9m7m9 missing 5 label(s): ibm-cloud.kubernetes.io/internal-ip,ibm-cloud.kubernetes.io/worker-id,ibm-cloud.kubernetes.io/machine-type,ibm-cloud.kubernetes.io/zone,ibm-cloud.kubernetes.io/region
I0910 14:12:44.991121       1 ibm_metadata_service.go:222] Retrieving information for node=ibm-powervs-1-control-plane-9m7m9 from Power VS
I0910 14:12:51.712321       1 ibm_powervs_client.go:189] instance name: ibm-powervs-1-control-plane-9m7m9 id b87cde47-3f0f-4125-b4e9-417e93917ebd
I0910 14:12:56.150960       1 ibm_powervs_client.go:288] Node ibm-powervs-1-control-plane-9m7m9 worker id is b87cde47-3f0f-4125-b4e9-417e93917ebd
I0910 14:12:56.150988       1 ibm_powervs_client.go:291] Node ibm-powervs-1-control-plane-9m7m9 instance type is s922
I0910 14:12:56.151000       1 ibm_powervs_client.go:294] Node ibm-powervs-1-control-plane-9m7m9 region is osa
I0910 14:12:56.151012       1 ibm_powervs_client.go:297] Node ibm-powervs-1-control-plane-9m7m9 failureDomain is osa21
I0910 14:12:56.151037       1 ibm_powervs_client.go:355] Node ibm-powervs-1-control-plane-9m7m9 internal IP is 192.168.169.244
I0910 14:12:56.151056       1 ibm_powervs_client.go:356] Node ibm-powervs-1-control-plane-9m7m9 external IP is 163.68.77.244
I0910 14:12:56.151072       1 ibm_metadata_service.go:258] MetadataService: node ibm-powervs-1-control-plane-9m7m9 save to cache, metadata: {InternalIP:192.168.169.244 ExternalIP:163.68.77.244 WorkerID:b87cde47-3f0f-4125-b4e9-417e93917ebd InstanceType:s922 FailureDomain:osa21 Region:osa ProviderID:}
I0910 14:12:56.151111       1 node_controller.go:512] Adding node label from cloud provider: beta.kubernetes.io/instance-type=s922
I0910 14:12:56.151124       1 node_controller.go:513] Adding node label from cloud provider: node.kubernetes.io/instance-type=s922
I0910 14:12:56.151136       1 node_controller.go:524] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=osa21
I0910 14:12:56.151149       1 node_controller.go:525] Adding node label from cloud provider: topology.kubernetes.io/zone=osa21
I0910 14:12:56.151162       1 node_controller.go:535] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=osa
I0910 14:12:56.151177       1 node_controller.go:536] Adding node label from cloud provider: topology.kubernetes.io/region=osa
I0910 14:12:56.162865       1 controller.go:723] Syncing backends for all LB services.
I0910 14:12:56.162894       1 controller.go:727] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0910 14:12:56.173629       1 node_controller.go:474] Successfully initialized node ibm-powervs-1-control-plane-9m7m9 with cloud provider
I0910 14:12:56.173820       1 event.go:389] "Event occurred" object="ibm-powervs-1-control-plane-9m7m9" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully"
I0910 14:14:52.326593       1 controller.go:723] Syncing backends for all LB services.
I0910 14:14:52.326661       1 controller.go:727] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0910 14:14:52.326596       1 node_controller.go:429] Initializing node ibm-powervs-1-md-0-x225t-t6m7c with cloud provider
I0910 14:14:52.326739       1 ibm_metadata_service.go:130] MetadataService: node ibm-powervs-1-md-0-x225t-t6m7c not in cache (applyNetUnAvail:false, cni:)
W0910 14:14:52.329157       1 ibm_metadata_service.go:210] MetadataService: node ibm-powervs-1-md-0-x225t-t6m7c missing 5 label(s): ibm-cloud.kubernetes.io/internal-ip,ibm-cloud.kubernetes.io/worker-id,ibm-cloud.kubernetes.io/machine-type,ibm-cloud.kubernetes.io/zone,ibm-cloud.kubernetes.io/region
I0910 14:14:53.329657       1 ibm_metadata_service.go:222] Retrieving information for node=ibm-powervs-1-md-0-x225t-t6m7c from Power VS
I0910 14:14:57.161124       1 ibm_powervs_client.go:189] instance name: ibm-powervs-1-md-0-x225t-t6m7c id 1abd0340-a006-4a1f-935f-9ebf47e6ac89
I0910 14:15:01.442695       1 ibm_powervs_client.go:288] Node ibm-powervs-1-md-0-x225t-t6m7c worker id is 1abd0340-a006-4a1f-935f-9ebf47e6ac89
I0910 14:15:01.442721       1 ibm_powervs_client.go:291] Node ibm-powervs-1-md-0-x225t-t6m7c instance type is s922
I0910 14:15:01.442729       1 ibm_powervs_client.go:294] Node ibm-powervs-1-md-0-x225t-t6m7c region is osa
I0910 14:15:01.442737       1 ibm_powervs_client.go:297] Node ibm-powervs-1-md-0-x225t-t6m7c failureDomain is osa21
I0910 14:15:01.442755       1 ibm_powervs_client.go:355] Node ibm-powervs-1-md-0-x225t-t6m7c internal IP is 192.168.169.245
I0910 14:15:01.442767       1 ibm_powervs_client.go:356] Node ibm-powervs-1-md-0-x225t-t6m7c external IP is 163.68.77.245
I0910 14:15:01.442779       1 ibm_metadata_service.go:258] MetadataService: node ibm-powervs-1-md-0-x225t-t6m7c save to cache, metadata: {InternalIP:192.168.169.245 ExternalIP:163.68.77.245 WorkerID:1abd0340-a006-4a1f-935f-9ebf47e6ac89 InstanceType:s922 FailureDomain:osa21 Region:osa ProviderID:}
I0910 14:15:01.442807       1 node_controller.go:512] Adding node label from cloud provider: beta.kubernetes.io/instance-type=s922
I0910 14:15:01.442817       1 node_controller.go:513] Adding node label from cloud provider: node.kubernetes.io/instance-type=s922
I0910 14:15:01.442828       1 node_controller.go:524] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=osa21
I0910 14:15:01.442840       1 node_controller.go:525] Adding node label from cloud provider: topology.kubernetes.io/zone=osa21
I0910 14:15:01.442851       1 node_controller.go:535] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=osa
I0910 14:15:01.442863       1 node_controller.go:536] Adding node label from cloud provider: topology.kubernetes.io/region=osa
I0910 14:15:01.447986       1 controller.go:723] Syncing backends for all LB services.
I0910 14:15:01.448011       1 controller.go:727] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0910 14:15:01.453773       1 node_controller.go:474] Successfully initialized node ibm-powervs-1-md-0-x225t-t6m7c with cloud provider
I0910 14:15:01.453928       1 event.go:389] "Event occurred" object="ibm-powervs-1-md-0-x225t-t6m7c" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully"
I0910 14:17:43.883932       1 ibm_loadbalancer.go:148] Monitoring load balancers ...
I0910 14:17:43.886561       1 ibm_vpc_cloud.go:241] MonitorLB: No Load Balancers to monitor, returning
I0910 14:17:43.988720       1 node_controller.go:271] Update 2 nodes status took 78.953µs.
I0910 14:18:25.954067       1 node_controller.go:429] Initializing node ibm-powervs-1-control-plane-kljgw with cloud provider
I0910 14:18:25.954143       1 ibm_metadata_service.go:130] MetadataService: node ibm-powervs-1-control-plane-kljgw not in cache (applyNetUnAvail:false, cni:)
I0910 14:18:25.954076       1 controller.go:723] Syncing backends for all LB services.
I0910 14:18:25.954299       1 controller.go:727] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
W0910 14:18:25.957384       1 ibm_metadata_service.go:210] MetadataService: node ibm-powervs-1-control-plane-kljgw missing 5 label(s): ibm-cloud.kubernetes.io/internal-ip,ibm-cloud.kubernetes.io/worker-id,ibm-cloud.kubernetes.io/machine-type,ibm-cloud.kubernetes.io/zone,ibm-cloud.kubernetes.io/region
I0910 14:18:26.126214       1 controller.go:723] Syncing backends for all LB services.
I0910 14:18:26.126246       1 controller.go:727] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0910 14:18:26.957732       1 ibm_metadata_service.go:222] Retrieving information for node=ibm-powervs-1-control-plane-kljgw from Power VS
I0910 14:18:32.998399       1 ibm_powervs_client.go:189] instance name: ibm-powervs-1-control-plane-kljgw id a4e9d8a0-8e7e-4449-bfe8-b79569144056
I0910 14:18:37.649220       1 ibm_powervs_client.go:288] Node ibm-powervs-1-control-plane-kljgw worker id is a4e9d8a0-8e7e-4449-bfe8-b79569144056
I0910 14:18:37.649237       1 ibm_powervs_client.go:291] Node ibm-powervs-1-control-plane-kljgw instance type is s922
I0910 14:18:37.649244       1 ibm_powervs_client.go:294] Node ibm-powervs-1-control-plane-kljgw region is osa
I0910 14:18:37.649250       1 ibm_powervs_client.go:297] Node ibm-powervs-1-control-plane-kljgw failureDomain is osa21
I0910 14:18:37.649266       1 ibm_powervs_client.go:355] Node ibm-powervs-1-control-plane-kljgw internal IP is 192.168.169.246
I0910 14:18:37.649279       1 ibm_powervs_client.go:356] Node ibm-powervs-1-control-plane-kljgw external IP is 163.68.77.246
I0910 14:18:37.649293       1 ibm_metadata_service.go:258] MetadataService: node ibm-powervs-1-control-plane-kljgw save to cache, metadata: {InternalIP:192.168.169.246 ExternalIP:163.68.77.246 WorkerID:a4e9d8a0-8e7e-4449-bfe8-b79569144056 InstanceType:s922 FailureDomain:osa21 Region:osa ProviderID:}
I0910 14:18:37.649322       1 node_controller.go:512] Adding node label from cloud provider: beta.kubernetes.io/instance-type=s922
I0910 14:18:37.649334       1 node_controller.go:513] Adding node label from cloud provider: node.kubernetes.io/instance-type=s922
I0910 14:18:37.649347       1 node_controller.go:524] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=osa21
I0910 14:18:37.649362       1 node_controller.go:525] Adding node label from cloud provider: topology.kubernetes.io/zone=osa21
I0910 14:18:37.649378       1 node_controller.go:535] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=osa
I0910 14:18:37.649396       1 node_controller.go:536] Adding node label from cloud provider: topology.kubernetes.io/region=osa
I0910 14:18:37.655429       1 controller.go:723] Syncing backends for all LB services.
I0910 14:18:37.655457       1 controller.go:727] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0910 14:18:37.662582       1 node_controller.go:474] Successfully initialized node ibm-powervs-1-control-plane-kljgw with cloud provider
I0910 14:18:37.662724       1 event.go:389] "Event occurred" object="ibm-powervs-1-control-plane-kljgw" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully"
I0910 14:22:43.883969       1 ibm_loadbalancer.go:148] Monitoring load balancers ...
I0910 14:22:43.886750       1 ibm_vpc_cloud.go:241] MonitorLB: No Load Balancers to monitor, returning
I0910 14:22:43.988943       1 node_controller.go:271] Update 3 nodes status took 99.341µs.
I0910 14:24:20.295775       1 node_controller.go:429] Initializing node ibm-powervs-1-control-plane-f2w9k with cloud provider
I0910 14:24:20.295898       1 ibm_metadata_service.go:130] MetadataService: node ibm-powervs-1-control-plane-f2w9k not in cache (applyNetUnAvail:false, cni:)
I0910 14:24:20.295775       1 controller.go:723] Syncing backends for all LB services.
I0910 14:24:20.295974       1 controller.go:727] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
W0910 14:24:20.298994       1 ibm_metadata_service.go:210] MetadataService: node ibm-powervs-1-control-plane-f2w9k missing 5 label(s): ibm-cloud.kubernetes.io/internal-ip,ibm-cloud.kubernetes.io/worker-id,ibm-cloud.kubernetes.io/machine-type,ibm-cloud.kubernetes.io/zone,ibm-cloud.kubernetes.io/region
I0910 14:24:20.748979       1 controller.go:723] Syncing backends for all LB services.
I0910 14:24:20.749012       1 controller.go:727] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0910 14:24:21.299236       1 ibm_metadata_service.go:222] Retrieving information for node=ibm-powervs-1-control-plane-f2w9k from Power VS
I0910 14:24:24.749940       1 ibm_powervs_client.go:189] instance name: ibm-powervs-1-control-plane-f2w9k id d9a972a5-5279-482c-938c-21c666eeaecd
I0910 14:24:28.951889       1 ibm_powervs_client.go:288] Node ibm-powervs-1-control-plane-f2w9k worker id is d9a972a5-5279-482c-938c-21c666eeaecd
I0910 14:24:28.951918       1 ibm_powervs_client.go:291] Node ibm-powervs-1-control-plane-f2w9k instance type is s922
I0910 14:24:28.951932       1 ibm_powervs_client.go:294] Node ibm-powervs-1-control-plane-f2w9k region is osa
I0910 14:24:28.951945       1 ibm_powervs_client.go:297] Node ibm-powervs-1-control-plane-f2w9k failureDomain is osa21
I0910 14:24:28.951970       1 ibm_powervs_client.go:355] Node ibm-powervs-1-control-plane-f2w9k internal IP is 192.168.169.243
I0910 14:24:28.951991       1 ibm_powervs_client.go:356] Node ibm-powervs-1-control-plane-f2w9k external IP is 163.68.77.243
I0910 14:24:28.952012       1 ibm_metadata_service.go:258] MetadataService: node ibm-powervs-1-control-plane-f2w9k save to cache, metadata: {InternalIP:192.168.169.243 ExternalIP:163.68.77.243 WorkerID:d9a972a5-5279-482c-938c-21c666eeaecd InstanceType:s922 FailureDomain:osa21 Region:osa ProviderID:}
I0910 14:24:28.952047       1 node_controller.go:512] Adding node label from cloud provider: beta.kubernetes.io/instance-type=s922
I0910 14:24:28.952062       1 node_controller.go:513] Adding node label from cloud provider: node.kubernetes.io/instance-type=s922
I0910 14:24:28.952075       1 node_controller.go:524] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=osa21
I0910 14:24:28.952089       1 node_controller.go:525] Adding node label from cloud provider: topology.kubernetes.io/zone=osa21
I0910 14:24:28.952102       1 node_controller.go:535] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=osa
I0910 14:24:28.952116       1 node_controller.go:536] Adding node label from cloud provider: topology.kubernetes.io/region=osa
I0910 14:24:28.959111       1 controller.go:723] Syncing backends for all LB services.
I0910 14:24:28.959138       1 controller.go:727] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0910 14:24:28.965459       1 node_controller.go:474] Successfully initialized node ibm-powervs-1-control-plane-f2w9k with cloud provider
I0910 14:24:28.965624       1 event.go:389] "Event occurred" object="ibm-powervs-1-control-plane-f2w9k" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully"
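The log above shows the node-initialization flow: the metadata service fetches the VM details from the PowerVS workspace, caches them, and the node controller turns them into the standard topology labels. As a rough sketch of that metadata-to-label mapping (the struct and helper below are illustrative, not the actual cloud-provider-ibm code), using the values from the log:

```go
package main

import "fmt"

// NodeMetadata mirrors the fields the CCM caches per node in the log above
// (ibm_metadata_service.go "save to cache"); field names are illustrative.
type NodeMetadata struct {
	InternalIP    string
	ExternalIP    string
	WorkerID      string
	InstanceType  string
	FailureDomain string
	Region        string
}

// buildLabels is a hypothetical helper showing how the cached PowerVS
// metadata maps to the node labels the controller applies — both the
// current keys and the deprecated beta keys, as seen in the log output.
func buildLabels(m NodeMetadata) map[string]string {
	return map[string]string{
		"beta.kubernetes.io/instance-type":         m.InstanceType,
		"node.kubernetes.io/instance-type":         m.InstanceType,
		"failure-domain.beta.kubernetes.io/zone":   m.FailureDomain,
		"topology.kubernetes.io/zone":              m.FailureDomain,
		"failure-domain.beta.kubernetes.io/region": m.Region,
		"topology.kubernetes.io/region":            m.Region,
	}
}

func main() {
	// Values taken from the sample log for ibm-powervs-1-control-plane-f2w9k.
	m := NodeMetadata{
		InternalIP:    "192.168.169.243",
		ExternalIP:    "163.68.77.243",
		WorkerID:      "d9a972a5-5279-482c-938c-21c666eeaecd",
		InstanceType:  "s922",
		FailureDomain: "osa21",
		Region:        "osa",
	}
	fmt.Println(buildLabels(m)["topology.kubernetes.io/zone"])
}
```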


karthikkn@Karthiks-MacBook-Pro cluster-api-provider-ibmcloud % kubectl get daemonset -n kube-system
NAME                                  DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                            AGE
ibmpowervs-cloud-controller-manager   3         3         3       3            3           node-role.kubernetes.io/control-plane=   20m
kube-proxy                            4         4         4       4            4           kubernetes.io/os=linux                   20m


karthikkn@Karthiks-MacBook-Pro cluster-api-provider-ibmcloud % kubectl -n kube-system describe daemonset ibmpowervs-cloud-controller-manager
Name:           ibmpowervs-cloud-controller-manager
Selector:       k8s-app=ibmpowervs-cloud-controller-manager
Node-Selector:  node-role.kubernetes.io/control-plane=
Labels:         k8s-app=ibmpowervs-cloud-controller-manager
Annotations:    deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 3
Current Number of Nodes Scheduled: 3
Number of Nodes Scheduled with Up-to-date Pods: 3
Number of Nodes Scheduled with Available Pods: 3
Number of Nodes Misscheduled: 0
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           k8s-app=ibmpowervs-cloud-controller-manager
  Service Account:  cloud-controller-manager
  Containers:
   ibmpowervs-cloud-controller-manager:
    Image:      quay.io/kabhat/ibmccm:v10_09
    Port:       <none>
    Host Port:  <none>
    Args:
      --v=2
      --cloud-provider=ibm
      --cloud-config=/etc/cloud/ibmpowervs.conf
      --use-service-account-credentials=true
    Requests:
      cpu:  200m
    Environment:
      ENABLE_VPC_PUBLIC_ENDPOINT:  true
    Mounts:
      /etc/cloud from ibmpowervs-config-volume (ro)
      /etc/ibm-secret from ibm-secret (rw)
  Volumes:
   ibmpowervs-config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      ibmpowervs-cloud-config
    Optional:  false
   ibm-secret:
    Type:          Secret (a volume populated by a Secret)
    SecretName:    ibmpowervs-cloud-credential
    Optional:      false
  Node-Selectors:  node-role.kubernetes.io/control-plane=
  Tolerations:     node-role.kubernetes.io/control-plane:NoSchedule op=Exists
                   node-role.kubernetes.io/master:NoSchedule op=Exists
                   node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
                   node.kubernetes.io/not-ready:NoSchedule op=Exists
Events:
  Type    Reason            Age    From                  Message
  ----    ------            ----   ----                  -------
  Normal  SuccessfulCreate  21m    daemonset-controller  Created pod: ibmpowervs-cloud-controller-manager-dlvws
  Normal  SuccessfulCreate  13m    daemonset-controller  Created pod: ibmpowervs-cloud-controller-manager-lskzp
  Normal  SuccessfulCreate  7m34s  daemonset-controller  Created pod: ibmpowervs-cloud-controller-manager-n6zqz
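The daemonset above mounts `ibmpowervs-cloud-config` at `/etc/cloud` and passes `--cloud-config=/etc/cloud/ibmpowervs.conf`. A minimal sketch of that config, wiring in the three PowerVS settings this PR introduces (the section name and surrounding layout are assumptions, not verified against the release-1.34 config parser):

```ini
; /etc/cloud/ibmpowervs.conf — illustrative sketch only.
; The [provider] section header is an assumption; the three keys below
; are the ones named in this PR's description.
[provider]
powerVSCloudInstanceID = <PowerVS_workspace_ID>
powerVSRegion = <PowerVS_Region>
powerVSZone = <PowerVS_Zone>
```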
