# general
b
did 1.2.0 break upgrading via helm again? (e.g. upgrading the nginx ingress controller)
c
@bitter-dentist-28132 not that we know of. do you have more details to share? also, when did it break before?
b
see #796 and #793. now when trying to upgrade the chart, it says
`deployments.apps "nginxingresscontroller-nginx-ingress-controller" already exists`
c
@bitter-dentist-28132 let me see if I can reproduce the issue, though I would note that those aren’t helm-specific, and this one might not be either.
@bitter-dentist-28132 do you happen to have the code handy?
b
just adding it generically
```typescript
import * as k8s from "@pulumi/kubernetes";

// `provider` is an existing k8s.Provider configured for the target cluster.
const nginxIngressController = new k8s.helm.v2.Chart("nginxingresscontroller", {
    chart: "nginx-ingress",
    namespace: "default",
    repo: "stable",
    values: {
        controller: {
            metrics: {
                service: {
                    omitClusterIP: true,
                },
            },
            service: {
                omitClusterIP: true,
            },
        },
        defaultBackend: {
            service: {
                omitClusterIP: true,
            },
        },
    },
}, {
    providers: {
        kubernetes: provider,
    },
});
```
what changes is the `metadata.labels.chart`
c
in the values?
there doesn’t seem to be a value with that path in the values
b
sorry, what i meant is that it's usually only `metadata.labels.chart` that changes for the k8s objects, since most of the chart dev doesn't affect "standard" deployments. though it seems this time it's doing a full destroy/create of the deployments.
c
do you know what you changed to instigate this?
or were you upgrading the chart version?
b
upgrading the chart version. looks like the first one that failed was `"nginx-ingress-1.21.0" => "nginx-ingress-1.23.0"`
c
trying now
b
looks like that changed to the new `apiVersion` for deployments
c
oh
that’s probably it
b
i guess pulumi won't order things properly for that kind of operation?
c
we have a PR in flight for this, but it will take some time to land
basically, what's happening is: if you submit a resource that differs only in `apiVersion`, kubernetes considers it the same resource, but pulumi does not
we are working to fix this ASAP
in the meantime, you can use the `transformations` option to manually set the `apiVersion` back to what it was. sorry about this.
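something like this should work (a minimal sketch, assuming the old chart emitted `extensions/v1beta1` Deployments and the new one emits `apps/v1`; adjust the check to match your actual versions, and `provider` is the same `k8s.Provider` as before):
```typescript
import * as k8s from "@pulumi/kubernetes";

const nginxIngressController = new k8s.helm.v2.Chart("nginxingresscontroller", {
    chart: "nginx-ingress",
    namespace: "default",
    repo: "stable",
    // values omitted for brevity; keep yours as-is
    transformations: [
        // pin Deployments back to the apiVersion the previous chart release
        // used, so pulumi sees the same resource and updates it in place
        // instead of attempting create-then-delete
        (obj: any) => {
            if (obj.kind === "Deployment" && obj.apiVersion === "apps/v1") {
                // assumption: the 1.21.0 chart used extensions/v1beta1
                obj.apiVersion = "extensions/v1beta1";
            }
        },
    ],
}, {
    providers: {
        kubernetes: provider,
    },
});
```
once the fix lands, you can drop the transformation and take the upgrade normally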
b
if the delete is done before the create, that shouldn't matter though, yes? i guess all deletes are done after all creates?
c
if you don’t specify a `.metadata.name`, then yes, we will create and then delete
if you do (as ~all charts do), then we are forced to delete first
in this case, we don’t know we need to delete these first, so we try to create then delete, and the create fails
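to illustrate the naming part (a hypothetical standalone Deployment, names made up, not from the chart):
```typescript
import * as k8s from "@pulumi/kubernetes";

// shared spec for both variants below
const labels = { app: "nginx" };
const spec: k8s.types.input.apps.v1.DeploymentSpec = {
    selector: { matchLabels: labels },
    replicas: 1,
    template: {
        metadata: { labels },
        spec: { containers: [{ name: "nginx", image: "nginx:1.17" }] },
    },
};

// auto-named: pulumi appends a random suffix to "web", so a replacement can
// be created alongside the old Deployment and the old one deleted afterwards.
const autoNamed = new k8s.apps.v1.Deployment("web", { spec });

// explicitly named: two Deployments can't both be called "web-deployment" in
// the same namespace, so pulumi must delete the old object before creating
// the replacement.
const explicitlyNamed = new k8s.apps.v1.Deployment("web-explicit", {
    metadata: { name: "web-deployment" },
    spec,
});
```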
another option is to simply delete that resource, `pulumi refresh`, and then `pulumi up`
b
yes indeed. or pin the chart version.
c
Anyway, apologies—I’ll try to get a fix in over the next day. I need to verify this works for all corner cases though.
So it might take a bit longer.
b
that's ok, i do feel like simply saying `old === new` could cause problems, so i can see why you'd be hesitant to just accept it.