brash-mechanic-26159
05/06/2022, 4:36 PM
bastion.id.apply(lambda x: open_ssh_tunnel_to_bastion(x))
)
4. Create the k8s provider that uses the bastion as a proxy
5. Create a bunch of resources on the cluster
Everything mostly works, but step 3 is awkward and doesn’t work automatically in every situation. I frequently find myself having to open the SSH tunnel manually before running pulumi up when making changes to deployments and services (step 5).
I think it would be better if I could just modify the lifecycle hooks of the Kubernetes Provider to ensure that the tunnel is always up.
Any suggestions? Maybe there are other approaches that work better here?
brave-ambulance-98491
05/06/2022, 5:11 PM
Is anything making the provider wait on that apply having completed?
The obvious possible problem would be the k8s provider is trying to run commands before the tunnel is created, which would be solved by strictly ordering the operations.brash-mechanic-26159
05/06/2022, 5:18 PM
I’m opening the tunnel in an apply function, and then using that as an input to the k8s provider:
ssh_tunnel_port = gke_bastion.name.apply(set_tunnel_target_and_open_ssh)
k8s_info = Output.all(
    cluster.name, cluster.endpoint, cluster.master_auth, ssh_tunnel_port
)
k8s_config = k8s_info.apply(
    lambda info: _kubeconfig_template.format(
        ca_cert=info[2]["cluster_ca_certificate"],
        endpoint=info[1],
        context="gke_{0}_{1}_{2}".format(project, zone, info[0]),
        proxy_url=f"http://127.0.0.1:{info[3]}",
    )
)
provider = Provider(
    "gke_k8s",
    kubeconfig=k8s_config,
    suppress_helm_hook_warnings=True,
)
brave-ambulance-98491
05/06/2022, 5:24 PM
The main thing I’d suggest is making the Provider dependent on the SSH tunnel - but it looks like you’re already doing that!
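One way to make step 3 less fragile is to make the tunnel-opening function idempotent: probe the local forward first and only open a new tunnel when nothing is listening. The sketch below shows the idea; the helper names, the port number, and the gcloud/ssh flags are illustrative assumptions, not taken from the thread:

```python
import socket
import subprocess


def port_is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def ensure_ssh_tunnel(bastion_name: str, local_port: int = 8888) -> int:
    """Open the SSH tunnel only if it isn't already up; return the local port.

    Called from an apply, this runs on every `pulumi up` before the
    k8s provider reads the kubeconfig that points at the local proxy.
    """
    if not port_is_listening("127.0.0.1", local_port):
        # -f backgrounds ssh after auth, -N skips the remote command,
        # -L forwards the local port to the bastion's proxy (flags and
        # the remote port are illustrative; adjust for your setup).
        subprocess.check_call([
            "gcloud", "compute", "ssh", bastion_name,
            "--", "-f", "-N", "-L", f"{local_port}:localhost:8888",
        ])
    return local_port
```

Wiring ssh_tunnel_port = gke_bastion.name.apply(ensure_ssh_tunnel) in place of the existing apply would then keep the tunnel alive across repeated runs without manual intervention.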