glamorous-printer-66548
09/21/2018, 5:13 AM
kubectl apply of k8s manifests that's possible. Thoughts?
microscopic-florist-22719
.get methods to import these resources as necessary. This approach is still workable, though, by using the id property on CustomResourceOptions:
const myNamespace = new k8s.core.v1.Namespace("ns", {}, { id: "my-namespace" });
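A quick sketch of what that looks like in use (the exported property here is only for illustration and is not from the thread): with id set, Pulumi reads the existing object rather than creating it, and the live object's properties come back as outputs.
import * as k8s from "@pulumi/kubernetes";

// With `id` set, Pulumi reads the existing namespace instead of creating a new one.
const myNamespace = new k8s.core.v1.Namespace("ns", {}, { id: "my-namespace" });

// Properties read from the live object are available like on any other resource.
export const namespaceUid = myNamespace.metadata.apply(m => m.uid);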
glamorous-printer-66548
09/21/2018, 10:02 PM
How do I reference the id of a resource in another stack?
microscopic-florist-22719
const baseNamespace = new k8s.core.v1.Namespace(...);
export const baseNamespaceId = baseNamespace.id;
in the base stack, and then in the consuming stack:
const config = new pulumi.Config();
const baseNamespaceId = config.require("baseNamespaceId");
const baseNamespace = new k8s.core.v1.Namespace("baseNamespace", {}, { id: baseNamespaceId });
glamorous-printer-66548
09/21/2018, 10:06 PM
microscopic-florist-22719
pulumi config set baseNamespaceId $(pulumi stack output baseNamespaceId -s base-stack-name)
glamorous-printer-66548
09/21/2018, 10:07 PM
microscopic-florist-22719
glamorous-printer-66548
09/21/2018, 10:09 PM
is there a possibility to conditionally create the ns resource (conditioned on whether it exists already in the cluster or not)?
microscopic-florist-22719
Yes, definitely. You'd have to probe the cluster manually, though. @creamy-potato-29402 is there any easy way to check for the existence of a resource in a k8s cluster from JS?
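For reference, one way that manual probe could look from JS (a rough sketch, not from this thread: it uses the separate @kubernetes/client-node package, the namespace name is just an example, and the exact client API may differ between versions of that package):
import * as k8s from "@pulumi/kubernetes";
import { KubeConfig, CoreV1Api } from "@kubernetes/client-node";

// Ask the cluster whether a namespace already exists, using the current kubeconfig.
async function namespaceExists(name: string): Promise<boolean> {
    const kc = new KubeConfig();
    kc.loadFromDefault();
    const api = kc.makeApiClient(CoreV1Api);
    try {
        await api.readNamespace(name);   // resolves if the namespace is found
        return true;
    } catch (e) {
        return false;                    // treat 404 (and other failures) as "missing"
    }
}

// Adopt the namespace by id when it is already there, create it otherwise.
const ns = namespaceExists("solvvy-staging").then(exists =>
    exists
        ? new k8s.core.v1.Namespace("solvvy-staging", {}, { id: "solvvy-staging" })
        : new k8s.core.v1.Namespace("solvvy-staging", { metadata: { name: "solvvy-staging" } }));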
glamorous-printer-66548
09/21/2018, 10:15 PM
microscopic-florist-22719
You could always do this with a shared infra stack and then shell out to pulumi to deploy that stack from within each consuming stack 😛
glamorous-printer-66548
That's an interesting thought. Essentially this is kind of a way of establishing dependencies between stacks. Is there a reasonable way to code this dependency (i.e. a shell provider resource or so) in pulumi code instead of writing some wrapper shell script?
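A rough sketch of the "shell out to pulumi" wrapper idea, in case it helps (this is a plain Node script rather than a Pulumi resource, and the stack names and paths are placeholders):
import { execSync } from "child_process";

// Bring the shared infra stack up to date first (path and stack name are placeholders).
execSync("pulumi up --yes", { cwd: "../base-stack", stdio: "inherit" });

// Then copy its output into this stack's config, as suggested earlier in the thread,
// before deploying the consuming stack.
const nsId = execSync("pulumi stack output baseNamespaceId -s base-stack-name")
    .toString()
    .trim();
execSync(`pulumi config set baseNamespaceId ${nsId}`, { stdio: "inherit" });
execSync("pulumi up --yes", { stdio: "inherit" });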
glamorous-printer-66548
09/21/2018, 10:22 PM
this is my code in stack B:
import * as k8s from '@pulumi/kubernetes';
const name = 'sauron';
const namespace = 'solvvy-staging';
const ns = new k8s.core.v1.Namespace(name, { metadata: { name: namespace } }, { id: 'solvvy-staging' });
➜  node-sauron git:(develop) ✗ pulumi up --skip-preview -y --show-sames
Updating stack 'solvvy/sauron-api-cluster-dev-b'
Performing changes:

     Type                            Name                                  Status                  Info
 *   pulumi:pulumi:Stack             node-sauron-sauron-api-cluster-dev-b  done                    1 info message
 >-  └─ kubernetes:core:Namespace    sauron                                **reading failed**      1 error

Diagnostics:
  pulumi:pulumi:Stack: node-sauron-sauron-api-cluster-dev-b
    info: panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1e51326]

    goroutine 43 [running]:
    github.com/pulumi/pulumi-kubernetes/pkg/provider.checkpointObject(0xc4200b2198, 0x0, 0xc4205ea300)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/pkg/provider/provider.go:750 +0x26
    github.com/pulumi/pulumi-kubernetes/pkg/provider.(*kubeProvider).Read(0xc42016e150, 0x2b1c7e0, 0xc4203c19e0, 0xc4202c8d20, 0xc42016e150, 0x1, 0x1)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/pkg/provider/provider.go:505 +0x511
    github.com/pulumi/pulumi-kubernetes/vendor/github.com/pulumi/pulumi/sdk/proto/go._ResourceProvider_Read_Handler.func1(0x2b1c7e0, 0xc4203c19e0, 0x20b23a0, 0xc4202c8d20, 0x2b1c7e0, 0xc4203c19e0, 0x2b221c0, 0x2bb72a0)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/github.com/pulumi/pulumi/sdk/proto/go/provider.pb.go:1247 +0x86
    github.com/pulumi/pulumi-kubernetes/vendor/github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc.OpenTracingServerInterceptor.func1(0x2b1c7e0, 0xc4203c19e0, 0x20b23a0, 0xc4202c8d20, 0xc42015b940, 0xc42015b980, 0x0, 0x0, 0x2afca60, 0xc4203b4ff0)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc/server.go:61 +0x326
    github.com/pulumi/pulumi-kubernetes/vendor/github.com/pulumi/pulumi/sdk/proto/go._ResourceProvider_Read_Handler(0x20b91c0, 0xc42016e150, 0x2b1c7e0, 0xc4203c1080, 0xc4202c8c80, 0xc42000b120, 0x0, 0x0, 0xc42007dda0, 0xc42004a180)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/github.com/pulumi/pulumi/sdk/proto/go/provider.pb.go:1249 +0x16d
    github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc.(*Server).processUnaryRPC(0xc4202d6f00, 0x2b20080, 0xc4202aadc0, 0xc420554300, 0xc4202fda70, 0x2b88f38, 0x0, 0x0, 0x0)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc/server.go:826 +0xab4
    github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc.(*Server).handleStream(0xc4202d6f00, 0x2b20080, 0xc4202aadc0, 0xc420554300, 0x0)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc/server.go:1023 +0x1528
    github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc42055a900, 0xc4202d6f00, 0x2b20080, 0xc4202aadc0, 0xc420554300)
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc/server.go:572 +0x9f
    created by github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
        /home/travis/gopath/src/github.com/pulumi/pulumi-kubernetes/vendor/google.golang.org/grpc/server.go:570 +0xa1

  kubernetes:core:Namespace: sauron
    error: Plan apply failed: transport is closing

info: no changes required:
      1 resource unchanged
{
"urn": "urn:pulumi:solvvy-apis-api-cluster-dev-b::solvvy-apis::kubernetes:core/v1:Namespace::solvvy-apis",
"custom": true,
"id": "solvvy-staging",
"type": "kubernetes:core/v1:Namespace",
"inputs": {
"apiVersion": "v1",
"kind": "Namespace",
"metadata": {
"name": "solvvy-staging"
}
},
"outputs": {
"__inputs": {
"apiVersion": "v1",
"kind": "Namespace",
"metadata": {
"name": "solvvy-staging"
}
},
"apiVersion": "v1",
"kind": "Namespace",
"metadata": {
"creationTimestamp": "2018-09-21T22:06:04Z",
"name": "solvvy-staging",
"resourceVersion": "7013667",
"selfLink": "/api/v1/namespaces/solvvy-staging",
"uid": "884a84f0-bdea-11e8-bfa9-42010a8a010e"
},
"spec": {
"finalizers": [
"kubernetes"
]
},
"status": {
"phase": "Active"
}
},
"parent": "urn:pulumi:solvvy-apis-api-cluster-dev-b::solvvy-apis::pulumi:pulumi:Stack::solvvy-apis-solvvy-apis-api-cluster-dev-b",
"dependencies": [],
"initErrors": null,
"provider": "urn:pulumi:solvvy-apis-api-cluster-dev-b::solvvy-apis::pulumi:providers:kubernetes::default::494ae35c-373a-4c6d-99a2-c5dbbae4d6a3"
},