broad-helmet-79436
11/08/2021, 10:19 AM
kubectl edit.
My StatefulSet is called `prometheus-grafana` and runs in a namespace called `monitoring`. The Kubernetes cluster runs in GCP on version 1.20.10-gke.1600.
`kubectl` tells me a few things (of particular interest, that the `apiVersion` is `apps/v1`):
$ kubectl get statefulset --context dev -n monitoring prometheus-grafana -o json
{
"apiVersion": "apps/v1",
"kind": "StatefulSet",
[…]
}
When I try to import the resource with Pulumi, however, the preview shows me some really old data:
// index.ts
new kubernetes.apps.v1.StatefulSet(
'prometheus-grafana',
{},
{ import: 'monitoring/prometheus-grafana', provider: kubernetesProvider }
);
$ pulumi preview --stack dev --diff
[…]
= └─ kubernetes:apps/v1:StatefulSet prometheus-grafana import [diff: -spec~apiVersion,metadata]; 1 warning
= kubernetes:apps/v1:StatefulSet: (import)
[id=monitoring/prometheus-grafana]
[urn=urn:pulumi:dev::folio::kubernetes:apps/v1:StatefulSet::prometheus-grafana]
[…]
[provider=urn:pulumi:dev::folio::pulumi:providers:kubernetes::kubernetes_provider::a326d7af-28d0-4aa9-b5e1-7017e0244985]
~ apiVersion: "apps/v1beta2" => "apps/v1"
[…]
This is especially surprising because I'm using `new kubernetes.apps.v1.StatefulSet(…)`, which I believe shouldn't see anything with `apiVersion` `apps/v1beta2`.
From the looks of it, the Pulumi preview shows me the very first version of the StatefulSet that was deployed three years ago. Since then, I've modified one of the containers to add an environment variable, and updated the Docker image multiple times using `kubectl edit`.
I figured that was really weird, so I decided to log the imported StatefulSet to see if it's somehow an issue with the diffing and/or preview:
const statefulSet = new kubernetes.apps.v1.StatefulSet(
'prometheus-grafana',
{},
{ import: 'monitoring/prometheus-grafana', provider: kubernetesProvider }
);
statefulSet.apiVersion.apply(apiVersion =>
console.log({ apiVersion })
);
$ pulumi up --stack dev
[…]
Diagnostics:
pulumi:pulumi:Stack (folio-dev):
{ apiVersion: 'apps/v1' }
I double-checked that this code is indeed printing the imported resource by logging the `metadata` and containers as well. It sure seems to me like the code is doing what I intended.
On a related note, I also see the correct data with this code:
const res = kubernetes.apps.v1.StatefulSet.get(
'p-g',
'monitoring/prometheus-grafana',
{ provider: kubernetesProvider }
);
res.apiVersion.apply(apiVersion => console.log({ apiVersion }));
If I'm right so far, I believe it means that the imported resource has its `apiVersion` set to `apps/v1`, but the Pulumi resource differ and/or preview for some reason believe they're working with the very first version of the StatefulSet that was deployed in 2018.
I know that the Kubernetes API does some caching, particularly on requests to `watch` a resource, by checking the `resourceVersion` property on the objects. I verified that the StatefulSet's `resourceVersion` gets updated when I change the container spec's `image` with `kubectl edit`, though.
I have no idea if it matters, but the StatefulSet was deployed by Cloud Marketplace (https://console.cloud.google.com/marketplace/product/google/prometheus), which means it has some `ownerReferences` set. Still, I don't really see that triggering this behaviour, since my diagnostics/logs show that Pulumi has the correct data to work with.
I've put quite a few hours into debugging this, and now I'm at a complete loss. Am I missing something obvious?
elegant-window-55250
11/10/2021, 2:02 PM

gorgeous-egg-16927
11/10/2021, 6:27 PM
For a `kubectl get` call, the API server will return the version that you request. The default depends on the cluster version and the version of `kubectl`, but in this case it returns `apps/v1` since you didn't provide the fully-qualified groupVersion. If you instead used `kubectl get statefulsets.v1beta2.apps`, the server would return the following JSON:
{
"apiVersion": "apps/v1beta2",
"kind": "StatefulSet",
[…]
}
There's some more documentation here, but this behavior surprises a lot of users (myself included when I first started working on the provider): https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
Since the server will return whatever version you ask for, it's not always possible to tell which version was used to create the resource. Unfortunately, some old resource versions have subtle differences that impact the await logic we use to determine readiness. To solve this, we store the version used to create the resource in the state, and use that to determine which logic to run.
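To see why the requested version matters so much, note that the groupVersion is baked into the request URL itself. Here's a small sketch of the Kubernetes REST path convention (my own illustration, not code from the provider):

```typescript
// The groupVersion you ask for maps directly onto the API server's REST
// path, so requesting apps/v1beta2 vs. apps/v1 is literally a different URL.
function resourcePath(
  group: string,
  version: string,
  namespace: string,
  plural: string,
  name: string
): string {
  // Core-group resources live under /api; named groups under /apis/<group>.
  const apiRoot = group === "" ? "/api" : `/apis/${group}`;
  return `${apiRoot}/${version}/namespaces/${namespace}/${plural}/${name}`;
}

console.log(
  resourcePath("apps", "v1beta2", "monitoring", "statefulsets", "prometheus-grafana")
);
// → /apis/apps/v1beta2/namespaces/monitoring/statefulsets/prometheus-grafana
```

The server converts the stored object into whichever of the served versions you address, which is why the response alone can't tell you which version created the resource.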
All that said, each resource type should have aliases defined so that Pulumi can update the state without having to update/replace the actual resource on the cluster; only the Pulumi state changes.

broad-helmet-79436
11/24/2021, 1:50 PM
I assumed that using `kubernetes.apps.v1.StatefulSet` (as opposed to `kubernetes.apps.v1beta2.StatefulSet`) would explicitly ask the API server for that version. My assumption was reinforced by the fact that TypeScript doesn't allow anything except `apps/v1` as the `apiVersion` field if I use the `kubernetes.apps.v1.…` resource class.
(Question 1) Is it true that I cannot (and should not) expect `kubernetes.apps.v1.StatefulSet` to explicitly ask for the `apps/v1` version of a resource when I pass the `import` option?
(Question 2) I've never imported this resource into my stack before, so there aren't any traces of which version was used to create it. Is Pulumi somehow fetching that first, and then fetching whatever the API server returns for that particular API group?
And finally:
"All that said, each resource type should have aliases defined so that Pulumi should update the state without having to update/replace the actual resource on the cluster; only the Pulumi state changes."
(Question 3) Do you mean that the Kubernetes provider should automatically provide these aliases, or am I expected to provide them myself?
(Question 4) As it stands, is there anything I can do in my code to tell the StatefulSet resource to import the correct version? Can I, say, prefix the resource name in the `import` field? The GCP provider docs have a section on importing each resource (e.g., https://www.pulumi.com/registry/packages/gcp/api-docs/container/cluster/#import), but I can't find anything similar in the Kubernetes provider docs.

gorgeous-egg-16927
12/01/2021, 4:51 PM
(1) In the `apps.v1.StatefulSet` example you mention, the provider will explicitly use that version in the client calls. I'd expect import to do the same, but I haven't dug into that particular case before.
(2) If the version isn't present in the state, I'd expect the provider to use the default version returned by the API server, which will depend on your version of k8s.
(3) Yes, they are automatically applied. You can see them in the resource definitions for the SDK. Here's an example: https://github.com/pulumi/pulumi-kubernetes/blob/b33dc6255633c7e708eef1630d14d5a949dd0c44/sdk/nodejs/apps/v1/statefulSet.ts#L250
(4) I think you might need to import the resource using the older SDK version, so `kubernetes.apps.v1beta2.StatefulSet` rather than the `apps.v1` SDK. Once the resource is imported, you should be able to update your code to use the `apps.v1` SDK, and the alias should allow you to update without touching the actual k8s resource (only updating Pulumi state).
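Something like this, as an untested sketch (my own illustration, not from the thread; it assumes the v1beta2 class still exists in your SDK version and reuses the `kubernetesProvider` from earlier):

```typescript
import * as kubernetes from "@pulumi/kubernetes";

// Assumed to be defined elsewhere in the program, as in the thread above.
declare const kubernetesProvider: kubernetes.Provider;

// Step 1: import with the SDK class matching the apiVersion the resource
// was created as, so the differ sees no apiVersion change.
const imported = new kubernetes.apps.v1beta2.StatefulSet(
  "prometheus-grafana",
  {},
  { import: "monitoring/prometheus-grafana", provider: kubernetesProvider }
);

// Step 2, in a later deploy once the import has succeeded: switch the code
// to apps/v1. The alias in the SDK maps the old type onto the new one, so
// only the Pulumi state is rewritten; the cluster object is untouched.
//
// const upgraded = new kubernetes.apps.v1.StatefulSet(
//   "prometheus-grafana",
//   {},
//   { provider: kubernetesProvider }
// );
```

Whether the v1beta2 read works in step 1 depends on whether the cluster still serves that groupVersion.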
Let me know if that helps.

broad-helmet-79436
12/02/2021, 2:51 PM
1. Pulumi uses the `kubectl.kubernetes.io/last-applied-configuration` annotation to determine what's going to change, and that annotation says the last applied apiVersion was `apps/v1beta2`.
2. Kubernetes 1.16 and up (I'm running 1.22) no longer serve `apps/v1beta2` resources, so there's absolutely no way for Pulumi to look up the previous version, even if it's trying to do it.
Under these circumstances (and possibly always?), it looks like the `last-applied-configuration` annotation is used as the source of truth for what is currently applied. That's not really correct, but it's not Pulumi's fault.
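To make that concrete: the annotation holds the full JSON of the object as it was last applied, apiVersion included. A hand-written example (not my actual annotation):

```typescript
// The kubectl.kubernetes.io/last-applied-configuration annotation stores the
// JSON of the object as last applied via `kubectl apply` (or --save-config).
const lastAppliedAnnotation = `{
  "apiVersion": "apps/v1beta2",
  "kind": "StatefulSet",
  "metadata": { "name": "prometheus-grafana", "namespace": "monitoring" }
}`;

// A client-side differ that trusts the annotation sees the old apiVersion,
// no matter what the live object reports when fetched as apps/v1.
const lastApplied = JSON.parse(lastAppliedAnnotation);
console.log(lastApplied.apiVersion); // → "apps/v1beta2"
```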
It looks like `kubectl edit` does not update the `last-applied-configuration` annotation unless you pass `--save-config`, which was a surprise to me, but well reasoned in https://github.com/kubernetes/kubernetes/issues/40626.
Pulumi needs the `last-applied-configuration` annotation to be able to do its diffing without talking to the Kubernetes cluster, so it all makes sense now.
I fixed it by running `kubectl edit --save-config`
and making an insignificant change (I added a label to `.spec.template.labels`), and deleting the entire `status` field that GKE adds to the config. Importing then worked. Thanks again!

gorgeous-egg-16927
12/02/2021, 4:13 PM

broad-helmet-79436
12/10/2021, 2:48 PM
I had to `kubectl edit ingress my-ingress --save-config`, delete the entire `last-applied-configuration` annotation, and save; otherwise I was left with a `last-applied-configuration` whose content matches the old resource spec.

gorgeous-egg-16927
12/10/2021, 4:31 PM
`pulumi refresh` might bring things back in line as well. This does appear to be a side effect of the way we're doing client-side diffing with the `last-applied-configuration` annotation, so that problem should go away once we've completed the migration to server-side apply.

broad-helmet-79436
12/13/2021, 8:51 AM
So the takeaway is to run `kubectl edit <the-resource> --save-config` and delete the `last-applied-configuration` annotation when migrating, since apparently Kubernetes handles deprecations by translating the old resource (and any changes made to it) into a resource of the new version, which makes almost no sense to me (or to @pulumi/kubernetes), because the `last-applied-configuration` won't (and can't) match 🤷. Either way, I'm gonna say case closed for my part, and I'm looking forward to the server-side diffing/apply!