#general

some-doctor-62800

08/28/2019, 7:14 PM
@white-balloon-205 would it be possible to craft a dynamic resource provider that decorates an existing provider?

white-balloon-205

08/28/2019, 7:35 PM
What exactly do you mean by "decorates an existing provider"?

some-doctor-62800

08/28/2019, 7:41 PM
I'm currently building something that I don't agree with, but I don't see any other way at the moment
Our k8s master API is only accessible from our VPC network
So I have:
- stack 1: builds the VPC, bastion instances, and the cluster
- a shell script that spins up an ssh tunnel to the bastion (using outputs of stack 1) and then starts stack 2
- stack 2: sets up the k8s provider via a StackReference to stack 1, which then succeeds in connecting because of the tunnel, etcetera
I'd really love it if things like this could be done within Pulumi as a provider decorator
@white-balloon-205 does it make sense?
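The tunnel step in the workflow above can be sketched roughly as follows. This is a hypothetical helper, not the poster's actual script; the output names `bastionHost` and `apiServerIp` are illustrative assumptions about what stack 1 might export.

```typescript
// Sketch: build the ssh command that forwards a local port to the cluster
// API server through the bastion, from (assumed) stack 1 outputs.
interface Stack1Outputs {
  bastionHost: string;  // assumed output name: bastion's public address
  apiServerIp: string;  // assumed output name: private k8s API server IP
}

function tunnelCommand(out: Stack1Outputs, localPort = 8443): string[] {
  // Equivalent to: ssh -N -L <localPort>:<apiServerIp>:443 <bastionHost>
  // -N: no remote command, just forward the port
  return ["ssh", "-N", "-L", `${localPort}:${out.apiServerIp}:443`, out.bastionHost];
}
```

A wrapper script would run this command in the background, wait for the tunnel, then launch stack 2's `pulumi up`.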

white-balloon-205

08/28/2019, 8:22 PM
I see. Yes - this scenario makes sense - we wanted to do the same for testing in https://github.com/pulumi/pulumi-eks/pull/154#issuecomment-502847936. I'm not sure there is any easy way to do this today. Providers can currently only run on the same machine as the `pulumi` CLI, and you need the Kubernetes provider to be running on the bastion host, which means you need the `pulumi` CLI running on the bastion host. I suspect some sort of VPN is the best near-term way to automate this - and you could technically set that up with existing Pulumi + a dynamic provider, I assume - though it's likely not "simple". Otherwise, a dynamic provider that wraps `pulumi up`/`pulumi destroy` but invokes them via ssh into some remote host may technically be an option, though I've never tried anything quite like that myself.
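The "invoke the CLI via ssh" idea above boils down to composing a remote command. A minimal sketch of that composition, where the host, project directory, and stack name are illustrative assumptions (a real dynamic provider would call this from its `create`/`delete` hooks):

```typescript
// Sketch: compose the ssh invocation that would run `pulumi up` on a
// remote bastion host. All argument values here are hypothetical.
function remotePulumiUp(host: string, projectDir: string, stack: string): string[] {
  return [
    "ssh",
    host,
    // Remote shell command: enter the project and run a non-interactive update.
    `cd ${projectDir} && pulumi up --yes --stack ${stack}`,
  ];
}
```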

some-doctor-62800

08/28/2019, 8:25 PM
No, not really. I do a port forward + a DNS entry in /etc/hosts for kubernetes. If I need to connect to more clusters at once in the future, I'll bake extra names into the SAN of the CA cert
If I could actually run things like ssh inside that provider before handing off control to the k8s provider (how? it's a native provider 'somewhere'), that'd be awesome
It's also probably never going to happen that we'll run Pulumi from the bastion
It'll run from CD (in our case, Azure Pipelines), where very permissive gcloud credentials are stored

white-balloon-205

08/28/2019, 8:31 PM
> I do a port forward + a dns entry in /etc/hosts for kubernetes
If you are doing this - then what exactly is the problem you have? Don't you then have external access to the cluster via that port forwarding?

some-doctor-62800

08/28/2019, 8:31 PM
Potentially a VPN would make sense for us in the future, though bastions are easier to audit
@white-balloon-205 yes, over the tunnel, which means the API server isn't actually exposed to the public. Our bastion hosts are easier to monitor.
> Don't you then have external access to the cluster via that port forwarding?
Same with a VPN, of course; that's also some form of external access
Ah, did we get our wires crossed? A port forward in the context of ssh would be, say:
`ssh -L 8443:#kubeapi-ip#:443`
In Pulumi I then bake a kubeconfig for it that points to kubernetes:8443 instead
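The "bake a kubeconfig" step can be sketched as a small rewrite of the cluster `server:` entries so they point at the forwarded local endpoint. This is a minimal illustration, not the poster's actual code; it assumes the conventional one-line `server: https://...` form of a YAML kubeconfig, and it only works if `kubernetes` resolves locally (the /etc/hosts entry) and appears in the API server certificate's SANs, as described above.

```typescript
// Sketch: point every cluster `server:` entry in a kubeconfig at the
// local ssh-forwarded endpoint instead of the private API server address.
function pointAtTunnel(kubeconfig: string, endpoint = "https://kubernetes:8443"): string {
  // Rewrite `server: <anything>` lines; \S+ stops at the end of the URL.
  return kubeconfig.replace(/server:\s*\S+/g, `server: ${endpoint}`);
}
```

The rewritten kubeconfig is then handed to the Kubernetes provider, which connects through the tunnel without knowing it.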
It's all a bit roundabout, but it would be amazing if there were good support during a `pulumi up` run for launching a VPN after a VPC is created, or for opening tunnels per provider through bastions
When we have the bandwidth to do this well, we'll probably start using agent pools: https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/v2-linux?view=azure-devops We'll run those in their own VPC. Any new VPCs would then request peering with the agent-pool VPC, after which the agents running `pulumi up` can access those ranges.