
broad-helmet-79436

11/08/2021, 10:19 AM
Hi there! 👋 I’m trying to import a StatefulSet into my stack, and the differ and/or preview are acting really weird: they’re somehow showing data from the very first version of the StatefulSet, even though the StatefulSet has been edited several times using kubectl edit. My StatefulSet is called prometheus-grafana and runs in a namespace called monitoring. The Kubernetes cluster runs in GCP on version 1.20.10-gke.1600. kubectl tells me a few things (of particular interest, that the apiVersion is apps/v1):
$ kubectl get statefulset --context dev -n monitoring prometheus-grafana -o json
{
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    […]
}
When I try to import the resource with Pulumi, however, the preview shows me some really old data:
// index.ts
new kubernetes.apps.v1.StatefulSet(
  'prometheus-grafana',
  {},
  { import: 'monitoring/prometheus-grafana', provider: kubernetesProvider }
);
$ pulumi preview --stack dev --diff

[…]

 =   ├─ kubernetes:apps/v1:StatefulSet  prometheus-grafana  import     [diff: -spec~apiVersion,metadata]; 1 warni
    = kubernetes:apps/v1:StatefulSet: (import)
        [id=monitoring/prometheus-grafana]
        [urn=urn:pulumi:dev::folio::kubernetes:apps/v1:StatefulSet::prometheus-grafana]

[…]
    [provider=urn:pulumi:dev::folio::pulumi:providers:kubernetes::kubernetes_provider::a326d7af-28d0-4aa9-b5e1-7017e0244985]
      ~ apiVersion: "apps/v1beta2" => "apps/v1"
      […]
This is especially surprising because I’m using new kubernetes.apps.v1.StatefulSet(…), which I believe shouldn’t see anything with apiVersion apps/v1beta2. From the looks of it, the Pulumi preview shows me the very first version of the StatefulSet, which was deployed three years ago: I’ve also modified one of the containers to add an environment variable, and updated the Docker image multiple times using kubectl edit. I figured that was really weird, so I decided to log the imported StatefulSet to see if it’s somehow an issue with the diffing and/or preview:
const statefulSet = new kubernetes.apps.v1.StatefulSet(
  'prometheus-grafana',
  {},
  { import: 'monitoring/prometheus-grafana', provider: kubernetesProvider }
);

statefulSet.apiVersion.apply(apiVersion =>
  console.log({ apiVersion })
);
$ pulumi up --stack dev

[…]

Diagnostics:
  pulumi:pulumi:Stack (folio-dev):
    { apiVersion: 'apps/v1' }
I double-checked that this code is indeed printing the imported resource by logging the metadata and containers as well. It sure seems to me like the code is doing what I intended. On a related note, I also see the correct data with this code:
const res = kubernetes.apps.v1.StatefulSet.get(
  'p-g',
  'monitoring/prometheus-grafana',
  { provider: kubernetesProvider }
);

res.apiVersion.apply(apiVersion => console.log({ apiVersion }));
If I’m right so far, I believe it means that the imported resource has its apiVersion set to apps/v1, but the Pulumi resource differ and/or preview for some reason believe they’re working with the very first version of the StatefulSet that was deployed in 2018. I know that the Kubernetes API does some caching, particularly on requests to watch a resource, by checking the resourceVersion property on the objects. I verified that the StatefulSet’s resourceVersion gets updated when I change the container spec’s image with kubectl edit, though. I have no idea if it matters, but the StatefulSet was deployed by Cloud Marketplace (https://console.cloud.google.com/marketplace/product/google/prometheus), which means it has some ownerReferences set. Still, I don’t really see that triggering this behaviour, since my diagnostics/logs show that Pulumi has the correct data to work with. I’ve put quite a few hours into debugging this, and now I’m at a complete loss. Am I missing something obvious? 😅

elegant-window-55250

11/10/2021, 2:02 PM
@billowy-army-68599 Do you know what this could be?

gorgeous-egg-16927

11/10/2021, 6:27 PM
Just to confirm, are you having problems other than the unexpected diff? That behavior looks correct to me, but has some subtleties that I’ll explain below. For the kubectl get call, the API server will return the version that you request. The default depends on the cluster version and the version of kubectl, but in this case, it returns apps/v1 since you didn’t provide the fully-qualified groupVersion. If you instead used kubectl get statefulsets.v1beta2.apps, the server would return the following JSON:
{
    "apiVersion": "apps/v1beta2",
    "kind": "StatefulSet",
    […]
}
There’s some more documentation here, but this behavior surprises a lot of users (myself included when I first started working on the provider): https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get

Since the server will return whatever version you ask for, it’s not always possible to tell which version was used to create the resource. Unfortunately, some old resource versions have subtle differences that impact the await logic we use to determine readiness. To solve this, we store the version used to create the resource in the state, and use that to determine which logic to run.

All that said, each resource type should have aliases defined, so Pulumi should be able to update the state without having to update/replace the actual resource on the cluster; only the Pulumi state changes.
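As a rough sketch of what those generated aliases look like (a simplified illustration, not the exact SDK source):

```typescript
// Simplified sketch: the generated kubernetes.apps.v1.StatefulSet resource
// carries aliases for the older groupVersions, so switching the SDK type
// only rewrites the Pulumi state and leaves the cluster object untouched.
const statefulSetAliases: { type: string }[] = [
  { type: "kubernetes:apps/v1beta1:StatefulSet" },
  { type: "kubernetes:apps/v1beta2:StatefulSet" },
];

console.log(statefulSetAliases.map(a => a.type).join("\n"));
```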

broad-helmet-79436

11/24/2021, 1:50 PM
Thanks for your reply @gorgeous-egg-16927! I read it a while ago and decided I needed to have a think about it – and I finally have. I was under the impression that using the “namespace”(?) of the specific version (i.e., kubernetes.apps.v1.StatefulSet as opposed to kubernetes.apps.v1beta2.StatefulSet) would explicitly ask the API server for that version. My assumption was reinforced by the fact that TypeScript doesn’t allow anything except apps/v1 as the apiVersion field if I use the kubernetes.apps.v1.… resource class.

(Question 1) Is it true that I cannot (and should not) expect kubernetes.apps.v1.StatefulSet to explicitly ask for the apps/v1 version of a resource when I pass the import option?

(Question 2) I’ve never imported this resource into my stack before, so there aren’t any traces of which version was used to create it. Is Pulumi somehow fetching that first, and then fetching whatever the API server returns for that particular API group?

And finally:

> All that said, each resource type should have aliases defined so that Pulumi should update the state without having to update/replace the actual resource on the cluster; only the Pulumi state changes.

(Question 3) Do you mean that the Kubernetes provider provides these aliases automatically, or am I expected to provide them myself?

(Question 4) As it stands, is there anything I can do in my code to tell the StatefulSet resource to import the correct version? Can I, say, prefix the resource name in the import field? The GCP provider docs have a section on importing each resource (e.g., https://www.pulumi.com/registry/packages/gcp/api-docs/container/cluster/#import), but I can’t find anything similar in the Kubernetes provider docs.

gorgeous-egg-16927

12/01/2021, 4:51 PM
(1) I’m not 100% sure about the import case off the top of my head. For a normal create, the provider will use the version you specify with the constructor. So for the apps.v1.StatefulSet example you mention, the provider will explicitly use that version in the client calls. I’d expect import to do the same, but I haven’t dug into that particular case before.

(2) If the version isn’t present in the state, I’d expect the provider to use the default version returned by the API server, which will depend on your version of k8s.

(3) Yes, they are automatically applied. You can see them in the resource definitions for the SDK. Here’s an example: https://github.com/pulumi/pulumi-kubernetes/blob/b33dc6255633c7e708eef1630d14d5a949dd0c44/sdk/nodejs/apps/v1/statefulSet.ts#L250

(4) I think you might need to import the resource using the older SDK version, so kubernetes.apps.v1beta2.StatefulSet rather than the apps.v1 SDK. Once the resource is imported, you should be able to update your code to use the apps.v1 SDK, and the alias should allow you to update without touching the actual k8s resource (only updating the Pulumi state). Let me know if that helps.

broad-helmet-79436

12/02/2021, 2:51 PM
I finally figured it out! I think. Thank you, @gorgeous-egg-16927! 😄 If I’m understanding it correctly, the issue stems from a combination of two not-really-issues:

1. Pulumi uses the kubectl.kubernetes.io/last-applied-configuration annotation to determine what’s going to change, and that annotation says the last applied apiVersion was apps/v1beta2.
2. Kubernetes 1.16 and up (I’m running 1.22) no longer serve apps/v1beta2 resources, so there’s absolutely no way for Pulumi to look up the previous version, even if it’s trying to do so.

Under these circumstances (and possibly always?), the last-applied-configuration annotation is used as the source of truth for what is currently applied. That’s not really correct – but it’s not Pulumi’s fault. It turns out that kubectl edit does not update the last-applied-configuration annotation unless you pass --save-config, which was a surprise to me, but is well reasoned in https://github.com/kubernetes/kubernetes/issues/40626.

All in all, I didn’t properly understand Kubernetes/kubectl itself or how the Pulumi k8s provider does its diffing, so I was surprised by the output. I stumbled upon https://github.com/pulumi/pulumi-kubernetes/issues/1659, which looks kinda-but-not-entirely related. Either way, I take it Pulumi uses the last-applied-configuration annotation to be able to do its diffing without talking to the Kubernetes cluster, so it all makes sense now.
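A minimal sketch of that mismatch in plain TypeScript (hypothetical helper name, nothing Pulumi-specific): the live object reports apps/v1, while the annotation still records the groupVersion from the original apply:

```typescript
// Sketch only: shows how the apiVersion recorded in the
// last-applied-configuration annotation can disagree with the live object.
interface K8sObject {
  apiVersion: string;
  metadata: { annotations?: Record<string, string> };
}

function lastAppliedApiVersion(obj: K8sObject): string | undefined {
  const raw =
    obj.metadata.annotations?.["kubectl.kubernetes.io/last-applied-configuration"];
  if (!raw) return undefined;
  return (JSON.parse(raw) as { apiVersion?: string }).apiVersion;
}

// The live object says apps/v1, but the annotation still holds the version
// used by the original kubectl apply years earlier.
const live: K8sObject = {
  apiVersion: "apps/v1",
  metadata: {
    annotations: {
      "kubectl.kubernetes.io/last-applied-configuration": JSON.stringify({
        apiVersion: "apps/v1beta2",
        kind: "StatefulSet",
      }),
    },
  },
};

console.log(lastAppliedApiVersion(live)); // "apps/v1beta2"
```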
For posterity, I got around it with a combination of running kubectl edit --save-config, making an insignificant change (I added a label to .spec.template.labels), and deleting the entire status field that GKE adds to the config. Importing then worked 🙂 Thanks again!

gorgeous-egg-16927

12/02/2021, 4:13 PM
That sounds right to me. Thanks for the update, and glad you were able to get it sorted out!

broad-helmet-79436

12/10/2021, 2:48 PM
Just a related note: I just had an issue with an ingress.v1beta1.networking.k8s.io that I tried migrating to ingress.v1.networking.k8s.io. I believe k8s handles the transition by automatically translating the config of the older version to a resource of the new version, and it serves the new version by default when you ask for ingresses without specifying a version. So when I updated my code to match the new spec (very minor changes) and changed the apiVersion, I got a weird diff – and it did weird things when I tried to apply as well. It looks to me like Pulumi tries to apply my changes to the old version of the resource that I’m migrating away from.

The fix here was again to just run kubectl edit ingress my-ingress --save-config, delete the entire last-applied-configuration annotation, and save. Then I applied again with no issues. But it’s a bit unexpected (to me at least – and it seems to Pulumi too) that there exists a resource of the new type whose last-applied-configuration content matches the old resource spec. I’m not sure if this is a bug or how I would even describe it in a GitHub issue, so I just decided to say it here 😬

gorgeous-egg-16927

12/10/2021, 4:31 PM
Hmm, I’d guess that could have been because you changed the apiVersion and some of the spec at the same time, but I’m not 100% sure. Running a pulumi refresh might bring things back in line as well. This does appear to be a side effect of the way we’re doing client-side diffing with the last-applied-configuration annotation, so that problem should go away once we’ve completed the migration to server-side apply.

broad-helmet-79436

12/13/2021, 8:51 AM
I think you’re right that it’s because I’m changing both the resource and the API version at the same time – but the two API versions are incompatible, so there’s no way around it. I think the best approach for now is to just run kubectl edit <the-resource> --save-config and delete the last-applied-configuration annotation when migrating, since apparently Kubernetes handles deprecations by translating the old resource (and any changes made to it) into a resource of the new version – which makes almost no sense to me (or to @pulumi/kubernetes), because the last-applied-configuration won’t (and can’t) match 🤷 Either way, I’m gonna say case closed for my part & I’m looking forward to the server-side diffing/apply 🙂