# kubernetes

polite-motherboard-78438

05/07/2020, 6:20 PM
hello. I want to use the Postgres provider to manage databases on a Postgres instance deployed on Kubernetes. The issue is that the service is not exposed to the outside, so Pulumi can't reach it. What is the best way to handle this? Should I use a VPN/kubectl port-forwarding? I need a way that can be used in a CI environment and also for new/existing clusters. (A new cluster will create the K8s cluster, deploy the Postgres StatefulSet, and set up default databases.)

billowy-army-68599

05/07/2020, 7:10 PM
I would say using port forward is the best way if you're running inside the cluster. Just set the environment variable `PGHOST` to localhost and it should work
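
The `PGHOST` suggestion works because libpq-style clients, and providers built on them, read connection settings from the standard `PG*` environment variables, falling back to defaults when they are unset. A minimal TypeScript sketch of that resolution logic (the function name and defaults here are illustrative, not part of any Pulumi API):

```typescript
// Resolve Postgres connection settings the way libpq-style clients do:
// PG* environment variables first, then defaults. With
// `kubectl port-forward svc/postgres 5432:5432` running, PGHOST=localhost
// makes the in-cluster database reachable from the local machine.
function resolvePgEndpoint(
  env: Record<string, string | undefined> = process.env
): { host: string; port: number } {
  return {
    host: env.PGHOST ?? "localhost",
    port: env.PGPORT ? parseInt(env.PGPORT, 10) : 5432,
  };
}
```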

ancient-megabyte-79588

05/07/2020, 8:06 PM
I'd explore setting up an IngressController to provide access to the underlying Pod that contains your Postgres container.

billowy-army-68599

05/07/2020, 8:37 PM
I'm not sure I'd advocate exposing databases via an ingress!

polite-motherboard-78438

05/08/2020, 6:07 PM
yah. that would be the simplest option, but in general it's not good practice, for security reasons, to expose services like databases to the outside.
i think port forward is the best option, but my issue is that if the cluster and the databases are created in the same pulumi run, I can't port-forward before running pulumi because the cluster doesn't exist yet, and AFAIK there is no way to run arbitrary commands in pulumi after some resource is created? maybe I can wrap this in a dynamic provider in a similar way to this: https://github.com/pulumi/examples/tree/master/aws-ts-ec2-provisioners

ancient-megabyte-79588

05/08/2020, 10:42 PM
I'm not certain what the postgres provider is, but in my cluster I installed postgres and pgAdmin4, left the db to be accessed directly by all the apps inside the cluster (including pgAdmin4), and created an IngressController for pgAdmin4 so I can do db administration from outside of the cluster

faint-motherboard-95438

08/03/2020, 2:19 PM
@polite-motherboard-78438 I'm hitting the same issue here. What did you end up doing to fix it on your end?

ancient-megabyte-79588

08/13/2020, 4:35 PM
@faint-motherboard-95438 I would expect that you'd need to expose your postgres db to the outside world if you want CI/CD to do the work, or port-forward to the postgres instance if you want to do the work with a local pulumi app.

faint-motherboard-95438

08/14/2020, 10:22 AM
@ancient-megabyte-79588 exposing the DB to the outside world is not an option and would be a security issue for us. The question is more about how to do this in one pulumi run (though that seems impossible, maybe someone from the @echoing-match-29901 knows the trick). If it's not doable, that indeed means deploying the raw DB service from one pulumi project and managing it from another one, with some “magic” in between to access the DB (as you suggested, either triggering a port-forward or opening a VPN)

ancient-megabyte-79588

08/14/2020, 3:17 PM
@faint-motherboard-95438 In this case, outside world meant outside of the cluster but not necessarily public. If your CI/CD has access to the same internal network your kubernetes is on, I simply meant exposing your Postgres DB on a NodePort or LoadBalancer. Currently, our Postgres DB is exposed only as a ClusterIP and is only accessible via port-forwarding or from pods internal to the cluster, which means Pulumi cannot access it. I am curious though if the new pulumi kubernetes operator is an option for you. That looks to be like having a little pulumi CLI in your k8s cluster.
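
To make the ClusterIP vs. NodePort distinction above concrete: the only structural difference in the Service manifest is the `type` field and an optional `nodePort` on each port entry. A sketch using plain manifest objects (the service name and port numbers are illustrative assumptions, not taken from the thread):

```typescript
// Build a Kubernetes Service manifest for Postgres as a plain object.
// "ClusterIP" keeps the DB reachable only from inside the cluster
// (so a Pulumi run outside the cluster cannot reach it); "NodePort"
// exposes it on every node's IP at a fixed high port, reachable from
// the node network, e.g. by CI on the same internal network.
type ServiceType = "ClusterIP" | "NodePort";

function postgresService(type: ServiceType, nodePort = 30432) {
  const port: { port: number; targetPort: number; nodePort?: number } = {
    port: 5432,
    targetPort: 5432,
  };
  if (type === "NodePort") {
    // Must fall in the cluster's NodePort range (default 30000-32767).
    port.nodePort = nodePort;
  }
  return {
    apiVersion: "v1",
    kind: "Service",
    metadata: { name: "postgres" },
    spec: { type, selector: { app: "postgres" }, ports: [port] },
  };
}
```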

faint-motherboard-95438

08/20/2020, 8:50 PM
@ancient-megabyte-79588 thanks for clarifying. I ended up provisioning the database in one project and managing its resources in another one, using a postgresql provider and spawning a port-forward before running the `pulumi` command. Maybe not perfect, but it works both locally and in a pipeline.
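
The two-project approach described above can be scripted so the port-forward is started before, and torn down after, the `pulumi` invocation. A sketch using only Node built-ins; the service name, namespace, and port numbers are assumptions, and `kubectl` and `pulumi` must be on the PATH:

```typescript
import { spawn, spawnSync } from "child_process";
import * as net from "net";

// kubectl arguments to forward a local port to the in-cluster service.
function portForwardArgs(
  service: string,
  namespace: string,
  localPort: number,
  remotePort: number
): string[] {
  return [
    "port-forward",
    `svc/${service}`,
    `${localPort}:${remotePort}`,
    "--namespace",
    namespace,
  ];
}

// Poll until the forwarded local port accepts TCP connections.
function waitForPort(port: number, timeoutMs = 15000): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  return new Promise((resolve, reject) => {
    const attempt = () => {
      const sock = net.connect(port, "127.0.0.1");
      sock.once("connect", () => { sock.end(); resolve(); });
      sock.once("error", () => {
        if (Date.now() > deadline) reject(new Error(`port ${port} not ready`));
        else setTimeout(attempt, 500);
      });
    };
    attempt();
  });
}

async function deployWithPortForward() {
  const kubectl = spawn(
    "kubectl",
    portForwardArgs("postgres", "default", 15432, 5432)
  );
  try {
    await waitForPort(15432);
    // The postgresql provider picks up PGHOST/PGPORT from the environment.
    spawnSync("pulumi", ["up", "--yes"], {
      stdio: "inherit",
      env: { ...process.env, PGHOST: "localhost", PGPORT: "15432" },
    });
  } finally {
    kubectl.kill();
  }
}

// deployWithPortForward(); // uncomment to run against a live cluster
```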
👍 1