# general
b
is there a known issue with ancestry and pulumi delete? In my stack I create a network, a kubernetes cluster, and several helm/kubernetes deployments (all addons/controllers). Parentage is network -> kubernetes cluster -> node pools -> addons, yet when I call delete, the cluster/network were deleted first
or, for that matter, is the parent option not good enough by itself? i.e. do I need to add dependsOn as well?
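roughly what that chain looks like, stripped way down (the args are placeholders and the kubeconfig wiring is elided; just illustrating the parent/dependsOn shape I'm describing):
import * as gcp from "@pulumi/gcp";
import * as k8s from "@pulumi/kubernetes";

const network = new gcp.compute.Network("net");

const k8sCluster = new gcp.container.Cluster(
    "cluster",
    { network: network.name, initialNodeCount: 1 },
    { parent: network },
);

const corePool = new gcp.container.NodePool(
    "core-pool",
    { cluster: k8sCluster.name, nodeCount: 1 },
    { parent: k8sCluster },
);

// in the real stack this is built from the cluster outputs; elided here
const kubeconfig = "...";

const k8sProvider = new k8s.Provider("gkeK8s", { kubeconfig }, { parent: k8sCluster });

const externalDns = new k8s.helm.v2.Chart(
    "external-dns",
    { repo: "stable", chart: "external-dns" },
    { providers: { kubernetes: k8sProvider }, parent: corePool, dependsOn: corePool },
);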
c
@billowy-garage-68819 what do you mean “call delete”
b
s/delete/destroy i.e. pulumi destroy
c
it should be deleted in reverse order.
what did the error look like?
b
it deleted the cluster first and then tried to delete the resulting k8s objects after
the order was decided imperatively rather than using the dependency graph
c
was there an error?
b
I had made a mistake and added import calls at the beginning of the file, but shouldn't pulumi still respect the parent/dependsOn options?
yeah, I wound up having to delete the stack and recreate it
c
What did the error look like?
b
shell history doesn't go back that far, basically the fact that it couldn't dial the k8s cluster
c
that should definitely not happen, can you reproduce it?
b
yeah, if I revert my code and set up another stack; this is the third time I've run into this problem (hence why I added the dependsOn and parent opts)
c
ok, can you paste the code?
b
I don't think so, but I think I can come up with a reproduction that triggers the error
c
ping me when you do and I’ll try to get at the problem.
b
the other problem I've come across: even with the k8s provider being set (and bound to the kubeconfig), it's still reading ~/.kube/config
so I'll see if I can repro both outside of this stack
c
a common thing is that if you’re passing it into a component resource, you need to set the provider differently.
so, e.g., ConfigFile requires {providers: {kubernetes: k8sProvider}}
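something like this (rough, untested sketch; the manifest path is made up and k8sProvider is whatever explicit provider you built from the cluster):
import * as k8s from "@pulumi/kubernetes";

// component resources like ConfigFile take a `providers` map rather than a single `provider`
const addons = new k8s.yaml.ConfigFile(
    "addons",
    { file: "manifests/addons.yaml" },
    { providers: { kubernetes: k8sProvider } },
);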
b
alright sec
already ran into the reverse of this problem
the graph/dependsOn isn't handled correctly due to the import preceding the cluster
and finally, it's using my local kubeconfig because the k8sProvider one either doesn't exist or is just outright ignored
I'm aware of the import issue now
but I still think it shouldn't disregard the options when set and continue on
c
@billowy-garage-68819 taking a look now
b
if it makes a difference, you can consider this a support call, we do actually have a paid org
c
@billowy-garage-68819 at first glance it looks like k8sProvider and k8sCluster in addon_externaldns.ts are undefined.
you can tell because when you run pulumi up, they don’t appear as children in the resource summary.
b
that may be an artifact of me making the repro script
import {k8sProvider, corePool} from "./index"; import {externalDnsVersion} from "./config";
that's out of the actual code, where I depend on the pool rather than the cluster
(I also do some things around binding the addons to the specific pool)
and again, I see how the order of imports can affect that, I've resolved that by moving the addon imports to the end of my index.ts
but the failure being silent bothers me significantly
c
@billowy-garage-68819 ok so what’s happening here, I think, is:
1. You import addon_externaldns.ts.
2. nodejs goes and starts executing that file.
3. From that file you import index.ts.
4. You reference k8sProvider and k8sCluster, but because we have interrupted executing index.ts to import addon_externaldns.ts, those values are undefined.
5. Therefore no providers or parent values are set. 🙂
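stripped down to the bare pattern (assuming CommonJS-style module output, which the nodejs runtime uses; the object literal just stands in for your real provider):
// index.ts (entrypoint)
import { addonValue } from "./addon_externaldns"; // node pauses index.ts here and runs the addon file first
export const k8sProvider = { name: "gkeK8s" };    // ...but this line hasn't run yet when the addon file needs it
console.log(addonValue);                          // prints: undefined

// addon_externaldns.ts
import { k8sProvider } from "./index"; // circular import back into the half-loaded index.ts
export const addonValue = k8sProvider; // k8sProvider is undefined here; no error, just undefined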
b
ugh good stuff, well I never was a js/ts developer
c
fixing now…
me neither lol
b
and like I said, I've moved the import
but it bothers me, this shouldn't be a silent failure
c
nodejs is not a “sane” environment
b
fair description
c
but no worries, we’ll get it fixed.
i mean your code.
b
I wouldn't sweat my code too much, but that being said
I have two asks for pulumi state that resulted in me having to delete the stack
c
(btw, no sweat on the support stuff, this one is on the house)
b
1. pulumi state ls (I had to pull the URNs from the web app)
2. pulumi state rm should cascade downwards 😐
and 3. give cameron some hell and ask him about Grid
c
lol cc @gentle-diamond-70147
testing fix now…
tell me more about (1) and (2)
@billowy-garage-68819 btw the following code should work:
(still testing)
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import * as k8s from "@pulumi/kubernetes";
// Addons
import { project } from "@pulumi/gcp/config";
import { externalDnsVersion } from "./config";
import { addKSNamespace } from "./helper";

import { name } from "./config";

export const k8sCluster = new gcp.container.Cluster(
    "gke-cluster",
    {
        name: name,
        loggingService: "logging.googleapis.com/kubernetes",
        monitoringService: "monitoring.googleapis.com/kubernetes",
        addonsConfig: {
            httpLoadBalancing: {
                disabled: false,
            },
        },
        nodePools: [{ name: "default-pool" }],
    },
    {
        deleteBeforeReplace: true,
    },
);

// Manufacture a GKE-style kubeconfig. Note that this is slightly "different"
// because of the way GKE requires gcloud to be in the picture for cluster
// authentication (rather than using the client cert/key directly).
export const kubeconfig = pulumi
    .all([k8sCluster.name, k8sCluster.endpoint, k8sCluster.masterAuth])
    .apply(([name, endpoint, masterAuth]) => {
        const context = `${gcp.config.project}_${gcp.config.zone}_${name}`;
        return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${masterAuth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`;
    });

export const k8sProvider = new k8s.Provider(
    "gkeK8s",
    {
        kubeconfig: kubeconfig,
    },
    {
        dependsOn: k8sCluster,
        parent: k8sCluster,
    },
);

// Export the Cluster name
export const clusterName = k8sCluster.name;

const externaldns = new k8s.helm.v2.Chart(
    "externaldns1",
    {
        repo: "stable",
        version: externalDnsVersion,
        chart: "external-dns",
        namespace: "kube-system",
        transformations: [addKSNamespace],
        values: {
            google: {
                project: project,
            },
            policy: "sync",
            provider: "google",
            rbac: {
                create: true,
            },
            tolerations: [],
        },
    },
    {
        providers: { kubernetes: k8sProvider },
        parent: k8sCluster,
        dependsOn: k8sCluster,
    },
);
basically just put externaldns at the bottom
b
1 - it's a pain to have to pull the URNs from the website; I'm a systems engineer and spend 50% of my life in bash and 50% in an IDE
yep, that's what we had before, so same as what I wound up doing (moving the import statement)
the files are split up because it's turning into a rather large project
the downside of splitting it into diff. specs is that the "addons" are just customizations to the underlying resource, so there's not really a good way to organize it; hence I started splitting into files
c
@billowy-garage-68819 makes sense, in general my advice with node (having been bitten many many times) is to not do circular imports
If you can avoid that, you’ll probably never run into this problem again.
b
yeah, I would have preferred to do this in python shakes fist at lack of helm support
c
soon!
should land in a couple weeks
got it so (1) is just about getting the URNs?
and what about “cascading” rm?
tell me more about that.
b
think of it as I'd like to have some parity to terraform state ls/rm commands
2. is that one helm chart creates multiple "children" as you guys render the chart; if I'm deleting the parent object from the state, it would make sense that the child objects are gone as well
so whether by default or by flag, a delete on an element of state should cascade down to child objects
c
ah
good point.
I’ll take this feedback to the team
b
also good call yesterday on my helm ticket
although, I did search for that issue on that repo (I think the ticket you referenced was on the pulumi proper repo)
c
it is “still our fault” and we will definitely fix it… just lots of stuff on our plate
b
hey man, if creating new systems was easy everybody'd be doing it 😉