#general

billowy-garage-68819

03/28/2019, 3:16 PM
is there a known issue with ancestry and pulumi delete? In my stack I create a network, a kubernetes cluster, and several helm/kubernetes deployments (all addons/controllers); the parentage is network -> kubernetes cluster -> node pools -> addons, yet when I call delete the cluster/network were deleted first
or for that matter, is the parent option not good enough by itself? i.e. do I need to add dependsOn as well?

creamy-potato-29402

03/28/2019, 6:41 PM
@billowy-garage-68819 what do you mean “call delete”

billowy-garage-68819

03/28/2019, 6:42 PM
s/delete/destroy/, i.e. pulumi destroy

creamy-potato-29402

03/28/2019, 6:42 PM
it should be deleted in reverse order.
what did the error look like?

billowy-garage-68819

03/28/2019, 6:42 PM
it deleted the cluster first and then tried to delete the resulting k8s objects after
the order was decided imperatively rather than using the dependency graph

creamy-potato-29402

03/28/2019, 6:43 PM
was there an error?

billowy-garage-68819

03/28/2019, 6:44 PM
I had made a mistake and added import calls at the beginning of the file, but shouldn't pulumi handle this anyway, respecting the parent/dependsOn options?
yeah, I wound up having to delete the stack and recreate it

creamy-potato-29402

03/28/2019, 6:44 PM
What did the error look like?

billowy-garage-68819

03/28/2019, 6:44 PM
shell history doesn't go back that far; basically, it couldn't dial the k8s cluster

creamy-potato-29402

03/28/2019, 6:45 PM
that should definitely not happen, can you reproduce it?

billowy-garage-68819

03/28/2019, 6:46 PM
yeah, if I revert my code and set up another stack. This is the third time I've run into this problem (hence why I added the dependsOn and parent opts)

creamy-potato-29402

03/28/2019, 6:46 PM
ok, can you paste the code?

billowy-garage-68819

03/28/2019, 6:49 PM
I don't think so, but I think I can come up with a reproduction that should trigger the error

creamy-potato-29402

03/28/2019, 6:50 PM
ping me when you do and I’ll try to get at the problem.

billowy-garage-68819

03/28/2019, 6:50 PM
the other problem I've come across: even with the k8s provider being set (and bound to the kubeconfig), it's still reading ~/.kube/config
so I'll see if I can repro both outside of this stack

creamy-potato-29402

03/28/2019, 6:50 PM
a common thing is that if you're passing it into a component resource, you need to set the provider differently.
e.g. ConfigFile requires {providers: {kubernetes: k8sProvider}}
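For illustration, a minimal sketch of that distinction (a sketch only: the Namespace, the manifests.yaml path, and the provider binding are hypothetical, not from this conversation):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Assumed to be defined elsewhere in the program, e.g. built from the
// cluster's kubeconfig output.
declare const k8sProvider: k8s.Provider;

// A plain resource takes the singular `provider` option:
const ns = new k8s.core.v1.Namespace(
    "apps", {}, { provider: k8sProvider });

// A component resource such as ConfigFile (or a Helm Chart) instead takes a
// `providers` map, keyed by package name:
const manifests = new k8s.yaml.ConfigFile(
    "manifests",
    { file: "manifests.yaml" }, // hypothetical path
    { providers: { kubernetes: k8sProvider } });
```

Since this declares Pulumi resources, it only runs under `pulumi up`, not as a standalone script.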

billowy-garage-68819

03/28/2019, 7:04 PM
alright sec
already ran into the reverse of this problem
the graph/dependsOn isn't handled correctly due to the import preceding the cluster
and finally, it's using my local kubeconfig because the k8sProvider one either doesn't exist or is just outright ignored
I'm aware of the import issue now
but I still think it shouldn't disregard the options when they're set and just continue on

creamy-potato-29402

03/28/2019, 7:41 PM
@billowy-garage-68819 taking a look now

billowy-garage-68819

03/28/2019, 7:41 PM
if it makes a difference, you can consider this a support call, we do actually have a paid org

creamy-potato-29402

03/28/2019, 7:44 PM
@billowy-garage-68819 at first glance it looks like k8sProvider and k8sCluster in addon_externaldns.ts are undefined. You can tell because when you run pulumi up, they don't appear as children in the resource summary.

billowy-garage-68819

03/28/2019, 7:45 PM
that may be an artifact of me making the repro script
import {k8sProvider, corePool} from "./index"; import {externalDnsVersion} from "./config";
that's out of the actual project, where I depend on the pool rather than the cluster
(I also do some things around binding the addons to the specific pool)
and again, I see how the order of imports can affect that; I've resolved it by moving the addon imports to the end of my index.ts
but the failure being silent bothers me significantly

creamy-potato-29402

03/28/2019, 7:46 PM
@billowy-garage-68819 ok so what's happening here, I think, is:
1. You import addon_externaldns.ts.
2. nodejs goes and starts executing that file.
3. From that file you import index.ts.
4. You reference k8sProvider and k8sCluster, but because we have interrupted executing index.ts to import addon_externaldns.ts, those values are undefined.
5. Therefore no providers or parent values are set. 🙂

billowy-garage-68819

03/28/2019, 7:47 PM
ugh good stuff, well I never was a js/ts developer

creamy-potato-29402

03/28/2019, 7:47 PM
fixing now…
me neither lol

billowy-garage-68819

03/28/2019, 7:48 PM
and like I said, I've moved the import
but it bothers me, this shouldn't be a silent failure

creamy-potato-29402

03/28/2019, 7:48 PM
nodejs is not a “sane” environment

billowy-garage-68819

03/28/2019, 7:48 PM
fair description

creamy-potato-29402

03/28/2019, 7:48 PM
but no worries, we’ll get it fixed.
i mean your code.

billowy-garage-68819

03/28/2019, 7:49 PM
I wouldn't sweat my code too much, but that being said
I have two asks for pulumi state that resulted in me having to delete the stack

creamy-potato-29402

03/28/2019, 7:49 PM
(btw, no sweat on the support stuff, this one is on the house)

billowy-garage-68819

03/28/2019, 7:50 PM
1. pulumi state ls (I had to pull the URNs from the web app)
2. pulumi state rm should cascade downwards 😐
and 3. give cameron some hell and ask him about Grid

creamy-potato-29402

03/28/2019, 7:51 PM
lol cc @gentle-diamond-70147
testing fix now…
tell me more about (1) and (2)
@billowy-garage-68819 btw the following code should work:
(still testing)
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import * as k8s from "@pulumi/kubernetes";
// Addons
import { project } from "@pulumi/gcp/config";
import { externalDnsVersion } from "./config";
import { addKSNamespace } from "./helper";

import { name } from "./config";

export const k8sCluster = new gcp.container.Cluster(
    "gke-cluster",
    {
        name: name,
        loggingService: "logging.googleapis.com/kubernetes",
        monitoringService: "monitoring.googleapis.com/kubernetes",
        addonsConfig: {
            httpLoadBalancing: {
                disabled: false,
            },
        },
        nodePools: [{ name: "default-pool" }],
    },
    {
        deleteBeforeReplace: true,
    },
);

// Manufacture a GKE-style kubeconfig. Note that this is slightly "different"
// because of the way GKE requires gcloud to be in the picture for cluster
// authentication (rather than using the client cert/key directly).
export const kubeconfig = pulumi
    .all([k8sCluster.name, k8sCluster.endpoint, k8sCluster.masterAuth])
    .apply(([name, endpoint, masterAuth]) => {
        const context = `${gcp.config.project}_${gcp.config.zone}_${name}`;
        return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${masterAuth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`;
    });

export const k8sProvider = new k8s.Provider(
    "gkeK8s",
    {
        kubeconfig: kubeconfig,
    },
    {
        dependsOn: k8sCluster,
        parent: k8sCluster,
    },
);

// Export the Cluster name
export const clusterName = k8sCluster.name;

const externaldns = new k8s.helm.v2.Chart(
    "externaldns1",
    {
        repo: "stable",
        version: externalDnsVersion,
        chart: "external-dns",
        namespace: "kube-system",
        transformations: [addKSNamespace],
        values: {
            google: {
                project: project,
            },
            policy: "sync",
            provider: "google",
            rbac: {
                create: true,
            },
            tolerations: [],
        },
    },
    {
        providers: { kubernetes: k8sProvider },
        parent: k8sCluster,
        dependsOn: k8sCluster,
    },
);
basically just put externaldns at the bottom

billowy-garage-68819

03/28/2019, 7:54 PM
1 - it's a pain to have to pull the URNs from the website; I'm a systems engineer and spend 50% of my life in bash and 50% in an IDE
yep, that's what we had before, so same as what I wound up doing (moving the import statement)
the files are split up because it's turning into a rather large project
the downside of splitting it into diff. specs is that the "addons" are just customizations to the underlying resource, so there's not really a good way to organize it; hence I started splitting into files

creamy-potato-29402

03/28/2019, 7:55 PM
@billowy-garage-68819 makes sense, in general my advice with node (having been bitten many many times) is to not do circular imports
If you can avoid that, you'll probably never run into this problem again.

billowy-garage-68819

03/28/2019, 7:55 PM
yeah, I would have preferred to do this in python shakes fist at lack of helm support

creamy-potato-29402

03/28/2019, 7:55 PM
soon!
should land in a couple weeks
got it so (1) is just about getting the URNs?
and what about “cascading” rm?
tell me more about that.

billowy-garage-68819

03/28/2019, 7:56 PM
think of it as I'd like to have some parity to terraform state ls/rm commands
2. is that one helm chart creates multiple "children" as you guys render the chart; if I'm deleting the parent object from the state, it would make sense that the child objects are gone as well
so whether by default or by flag, a delete on an element of state should cascade down to child objects
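[Editor's note] For reference, later Pulumi CLI releases address part of ask (1) and (2); a sketch (the URN below is made up, and cascading deletion of children remains the feature being requested here):

```shell
# List the stack's resources along with their URNs,
# instead of copying them from the web app:
pulumi stack --show-urns

# Remove a single resource from the state by URN; children of a Helm chart
# currently have to be removed one by one:
pulumi state delete 'urn:pulumi:dev::myproj::kubernetes:helm.sh/v2:Chart::externaldns1'
```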

creamy-potato-29402

03/28/2019, 7:57 PM
ah
good point.
I’ll take this feedback to the team

billowy-garage-68819

03/28/2019, 7:58 PM
also good call yesterday on my helm ticket
although, I did search for that issue on that repo (I think the ticket you referenced was on the pulumi proper repo)

creamy-potato-29402

03/28/2019, 8:00 PM
it is “still our fault” and we will definitely fix it… just lots of stuff on our plate

billowy-garage-68819

03/28/2019, 8:00 PM
hey man, if creating new systems was easy everybody'd be doing it 😉