# general
g
following use case: We have multiple k8s Pulumi programs (think one for each microservice A, B, C), each in a different repository. All Pulumi programs shall deploy to a shared k8s namespace that is named after the environment (e.g. staging, dev1, dev_john_doe). Now the question is where (in which Pulumi program) we should create the namespace itself. Is it somehow possible to put the namespace resource definition redundantly in each program (A, B, C) but make Pulumi not complain if the namespace already exists? I.e. if I spin up program A, and afterwards B, B shouldn’t complain about the existing namespace. In other words: I want some sort of idempotent / upsert-like / “ensure-existence” operation as part of Pulumi. With plain `kubectl apply` of k8s manifests that’s possible. Thoughts?
m
The way that we've done this to date is to put the common bits of the infrastructure (in this case the namespace) in their own stack, then use config + `.get` methods to import these resources as necessary.
I see, though, that the k8s library does not have `.get` methods. This approach is still workable, though, by using the `id` property on `CustomResourceOptions`.
So for a namespace, you might write
```
const myNamespace = new k8s.core.v1.Namespace("ns", {}, { id: "my-namespace" });
```
(disclaimer: I'm not sure if the k8s provider requires anything in the second argument for this to work, but it may)
g
@microscopic-florist-22719 I don’t quite understand how this should work. How do I determine the `id` of a resource in another stack?
Is that `id` property even supposed to refer to resources in another stack?
m
It can, yes. The ID is unique within the scope of the resource provider. In k8s this means in the scope of the targeted cluster.
So one approach would be to export the ID of the namespace from the base stack and then pass it as a config var to the consuming stacks.
something like
```
const baseNamespace = new k8s.core.v1.Namespace(...);
export const baseNamespaceId = baseNamespace.id;
```
in the base stack
And then in consuming stacks:
```
const config = new pulumi.Config();
const baseNamespaceId = config.require("baseNamespaceId");
const baseNamespace = new k8s.core.v1.Namespace("baseNamespace", {}, { id: baseNamespaceId });
```
g
I see
Interesting to know
m
When configuring a consuming stack, you would do something to the effect of
`pulumi config set baseNamespaceId $(pulumi stack output baseNamespaceId -s base-stack-name)`
g
But it’s unfortunately not quite what I wanted. I wanted to avoid having a base stack just to create a namespace, actually.
Ideally I’d like each stack to have the capability to create the namespace by itself, or reuse the existing namespace if it already exists.
m
Right--there's essentially no way to do this reliably due to races.
In order to make this work, we'd probably need first-class support in the resource model, which we don't have. It's a very interesting use case; we'll have to consider how we might be able to make it work.
g
Races aside - is there a possibility to conditionally create the ns resource (conditioned on whether it exists already in the cluster or not)?
m
In the meantime, you could use the presence of the config var to determine whether or not to create the namespace.
> is there a possibility to conditionally create the ns resource (conditioned on whether it exists already in the cluster or not)
Yes, definitely. You'd have to probe the cluster manually, though. @creamy-potato-29402 is there any easy way to check for the existence of a resource in a k8s cluster from JS?
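(For reference, a minimal sketch of the config-var gating idea above; the `createNamespace` key and the namespace name are placeholders, and the `id`-based read mirrors the approach suggested earlier in this thread.)
```
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const config = new pulumi.Config();

// Hypothetical config key: set it to true in exactly one stack, which then
// owns (creates and, on destroy, deletes) the shared namespace.
const createNamespace = config.getBoolean("createNamespace") || false;
const namespaceName = "my-shared-namespace"; // placeholder

// Either create the namespace, or read the pre-existing one by its ID.
const ns = createNamespace
    ? new k8s.core.v1.Namespace("ns", { metadata: { name: namespaceName } })
    : new k8s.core.v1.Namespace("ns", {}, { id: namespaceName });
```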
g
the next problem with the conditional creation would be: if I create stacks A and B, and A happens to be the stack which actually creates the namespace, how can I avoid deleting the namespace (which implicitly would also delete the resources of B) when I destroy stack A?
I guess ideally I’d like to have some sort of unmanaged / create-only resource 😕
m
You could always do this with a shared infra stack and then shell out to `pulumi` to deploy that stack from within each consuming stack 😛
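(For reference, a rough sketch of the shell-out idea; the stack name, directory layout, and use of Node's `execSync` are assumptions for illustration, not an established pattern.)
```
import { execSync } from "child_process";

// Hypothetical: bring the shared base stack (which owns the namespace) up to
// date before this program declares its own resources. Stack name and path
// are placeholders.
execSync("pulumi up --yes --stack my-org/base-infra/staging", {
    cwd: "../base-infra",   // wherever the shared infra program lives
    stdio: "inherit",
});

// ...then declare this service's resources into the now-existing namespace.
```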
g
```
github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc42055a900, 0xc4202d6f00, 0x2b20080, 0xc4202aadc0, 0xc420554300)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc/server.go:572 +0x9f
    created by github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc/server.go:570 +0xa1

  kubernetes:core:Namespace: sauron
    error: Plan apply failed: transport is closing

info: no changes required:
      1 resource unchanged
```
this is my code in stack B:
```
import * as k8s from '@pulumi/kubernetes';

const name = 'sauron';
const namespace = 'solvvy-staging';

const ns = new k8s.core.v1.Namespace(name, { metadata: { name: namespace }}, { id: 'solvvy-staging' });
```
update: I tried using a namespace created by stack A in stack B by referring to its ID, but I get the following error:
```
➜  node-sauron git:(develop) ✗ pulumi up --skip-preview -y --show-sames
Updating stack 'solvvy/sauron-api-cluster-dev-b'
Performing changes:

     Type                          Name                                  Status                 Info
 *   pulumi:pulumi:Stack           node-sauron-sauron-api-cluster-dev-b  done                   1 info message
 >-  └─ kubernetes:core:Namespace  sauron                                **reading failed**     1 error

Diagnostics:
  pulumi:pulumi:Stack: node-sauron-sauron-api-cluster-dev-b
    info: panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1e51326]
    goroutine 43 [running]:
    github.com/pulumi/pulumi-kubernetes/pkg/provider.checkpointObject(0xc4200b2198, 0x0, 0xc4205ea300)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/pkg/provider/provider.go:750 +0x26
    github.com/pulumi/pulumi-kubernetes/pkg/provider.(*kubeProvider).Read(0xc42016e150, 0x2b1c7e0, 0xc4203c19e0, 0xc4202c8d20, 0xc42016e150, 0x1, 0x1)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/pkg/provider/provider.go:505 +0x511
    github.com/pulumi/pulumi-kubernetes/vendor/github.com/pulumi/pulumi/sdk/proto/go._ResourceProvider_Read_Handler.func1(0x2b1c7e0, 0xc4203c19e0, 0x20b23a0, 0xc4202c8d20, 0x2b1c7e0, 0xc4203c19e0, 0x2b221c0, 0x2bb72a0)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/github.com/pulumi/pulumi/sdk/proto/go/provider.pb.go:1247 +0x86
    github.com/pulumi/pulumi-kubernetes/vendor/github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1(0x2b1c7e0, 0xc4203c19e0, 0x20b23a0, 0xc4202c8d20, 0xc42015b940, 0xc42015b980, 0x0, 0x0, 0x2afca60, 0xc4203b4ff0)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc/server.go:61 +0x326
    github.com/pulumi/pulumi-kubernetes/vendor/github.com/pulumi/pulumi/sdk/proto/go._ResourceProvider_Read_Handler(0x20b91c0, 0xc42016e150, 0x2b1c7e0, 0xc4203c1080, 0xc4202c8c80, 0xc42000b120, 0x0, 0x0, 0xc42007dda0, 0xc42004a180)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/github.com/pulumi/pulumi/sdk/proto/go/provider.pb.go:1249 +0x16d
    github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc.(*Server).processUnaryRPC(0xc4202d6f00, 0x2b20080, 0xc4202aadc0, 0xc420554300, 0xc4202fda70, 0x2b88f38, 0x0, 0x0, 0x0)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc/server.go:826 +0xab4
    github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc.(*Server).handleStream(0xc4202d6f00, 0x2b20080, 0xc4202aadc0, 0xc420554300, 0x0)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc/server.go:1023 +0x1528
```
that is the resource entry in stack A:
```
{
                "urn": "urn:pulumi:solvvy-apis-api-cluster-dev-b::solvvy-apis::kubernetes:core/v1:Namespace::solvvy-apis",
                "custom": true,
                "id": "solvvy-staging",
                "type": "kubernetes:core/v1:Namespace",
                "inputs": {
                    "apiVersion": "v1",
                    "kind": "Namespace",
                    "metadata": {
                        "name": "solvvy-staging"
                    }
                },
                "outputs": {
                    "__inputs": {
                        "apiVersion": "v1",
                        "kind": "Namespace",
                        "metadata": {
                            "name": "solvvy-staging"
                        }
                    },
                    "apiVersion": "v1",
                    "kind": "Namespace",
                    "metadata": {
                        "creationTimestamp": "2018-09-21T22:06:04Z",
                        "name": "solvvy-staging",
                        "resourceVersion": "7013667",
                        "selfLink": "/api/v1/namespaces/solvvy-staging",
                        "uid": "884a84f0-bdea-11e8-bfa9-42010a8a010e"
                    },
                    "spec": {
                        "finalizers": [
                            "kubernetes"
                        ]
                    },
                    "status": {
                        "phase": "Active"
                    }
                },
                "parent": "urn:pulumi:solvvy-apis-api-cluster-dev-b::solvvy-apis::pulumi:pulumi:Stack::solvvy-apis-solvvy-apis-api-cluster-dev-b",
                "dependencies": [],
                "initErrors": null,
                "provider": "urn:pulumi:solvvy-apis-api-cluster-dev-b::solvvy-apis::pulumi:providers:kubernetes::default::494ae35c-373a-4c6d-99a2-c5dbbae4d6a3"
            },
```
Note: stacks A and B are from different projects (X and Y).
> You could always do this with a shared infra stack and then shell out to `pulumi` to deploy that stack from within each consuming stack
That’s an interesting thought. Essentially this is kind of a way of establishing dependencies between stacks. Is there a reasonable way to code this dependency (i.e. a shell provider resource or so) in Pulumi code instead of writing some wrapper shell script?