# general
l
Best I can think of is we declare a custom resource that basically uses `aws rds modify-db-proxy` under the hood; with some locking in place this would seem to be a viable solution? So:
• On create, take the lock
• Get the current proxy state
• Modify to extend auth with the new secret
• Release lock
Does Pulumi have any primitives for cross-stack locking?
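A minimal sketch of what that custom resource could look like, assuming a Pulumi dynamic provider plus the AWS SDK v3 (`@aws-sdk/client-rds`). Only `create` is shown; the locking step and update/delete are omitted:

```typescript
import * as pulumi from "@pulumi/pulumi";
import {
  RDSClient,
  DescribeDBProxiesCommand,
  ModifyDBProxyCommand,
} from "@aws-sdk/client-rds";

interface ProxyAuthInputs {
  proxyName: string;
  secretArn: string;
}

// Dynamic provider that extends an RDS Proxy's auth list with one more
// secret. A real version would also need the cross-stack lock (e.g. a
// DynamoDB conditional write) and update/delete handlers.
const proxyAuthProvider: pulumi.dynamic.ResourceProvider = {
  async create(inputs: ProxyAuthInputs) {
    const rds = new RDSClient({});
    // Read the current proxy state so we extend rather than replace auth.
    const { DBProxies } = await rds.send(
      new DescribeDBProxiesCommand({ DBProxyName: inputs.proxyName }),
    );
    const currentAuth = DBProxies?.[0]?.Auth ?? [];
    await rds.send(
      new ModifyDBProxyCommand({
        DBProxyName: inputs.proxyName,
        Auth: [
          ...currentAuth,
          { AuthScheme: "SECRETS", SecretArn: inputs.secretArn, IAMAuth: "DISABLED" },
        ],
      }),
    );
    return { id: `${inputs.proxyName}/${inputs.secretArn}`, outs: inputs };
  },
};

export class ProxyAuthAttachment extends pulumi.dynamic.Resource {
  constructor(name: string, args: ProxyAuthInputs, opts?: pulumi.CustomResourceOptions) {
    super(proxyAuthProvider, name, args, opts);
  }
}
```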
a
I'm not certain about cross-stack locking, but this is definitely an interesting problem. I can understand the desire to define your secret on each new microservice to avoid making changes to your shared infra code, but if you are still mutating that shared infra stack with each new service you introduce, you sort of lose some of the benefits. What is the business requirement for having each microservice define its own DB access credentials? Security?

The only other thing that occurred to me when looking at the resource docs is the possibility of doing something clever with `iamAuth`. If you are optimizing for security, you could use `iamAuth` to generate short-lived access tokens for each of your microservices. With this sort of setup, each of your microservices could define the resources to attach the appropriate IAM role to generate auth tokens for DB access.
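For illustration, generating such a token from a service might look like this (a sketch using `@aws-sdk/rds-signer`; the hostname and username are placeholders):

```typescript
import { Signer } from "@aws-sdk/rds-signer";

// Placeholder values; a real service would read these from its config.
const signer = new Signer({
  hostname: "my-proxy.proxy-abc123.us-east-1.rds.amazonaws.com",
  port: 5432,
  username: "service_a",
  region: "us-east-1",
});

// Yields a short-lived token used in place of a DB password; the
// caller's IAM role must allow rds-db:connect for this DB user.
async function dbAuthToken(): Promise<string> {
  return signer.getAuthToken();
}
```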
l
Thanks for the reply! The idea, which may be terrible, is that you want spinning up a new service to be simple and self-contained. So e.g. if you have a monorepo with `service-a`, `service-b`, etc. in it, each of those directories has subdirectories like `src`, `test`, and now, `infra`. The `infra` subdirectory has the Pulumi microstack for that service. The `Pulumi.<stack>.yaml` has the secrets for the deployment of the service, which might include things like `database: true`, etc. Then we have some shared Pulumi library that means your `index.ts` (assuming TS/Node, but I think this applies to any language) is just `const env = somethingUsingStackReferencesToGetMainEnvAwsAccountIds(); new MyCompanyApi(name, { env })`. That then goes and creates a Fargate service in the same account, uses PG providers to set up user credentials, etc., and injects them into the environment variables of the service for you. All in all, Pulumi lets us package the whole app up without engineers needing to know how it's done, which is awesome. The shared auth configuration for e.g. RDS Proxy here is a thorn in this idea, if that makes sense.
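A rough sketch of the per-service credential step described there (the names are illustrative, assuming `@pulumi/postgresql` and `@pulumi/random` inside the shared component):

```typescript
import * as postgresql from "@pulumi/postgresql";
import * as random from "@pulumi/random";

// Inside the shared component: mint a dedicated PG role per service.
const password = new random.RandomPassword("service-a-db", {
  length: 32,
  special: false,
});

const role = new postgresql.Role("service-a", {
  login: true,
  password: password.result,
});

// role.name / password.result are then injected into the Fargate
// task's environment (ideally via a Secrets Manager secret).
```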
a
Yeah, the setup makes sense to me and is pretty similar to what we are doing. We essentially have opinionated templates for creating things like Fargate services, where everything unique to a given service is parameterized within the `Pulumi.<stack>.yaml` file for that service. In the case of many services needing DB connectivity, you may consider defining either 1. a secret in Secrets Manager or 2. an IAM role for accessing the DB using short-lived credentials. In each case, the resources would be defined in your common infra repo. Then, you can imagine each microservice doing something like the following:
```typescript
import * as pulumi from "@pulumi/pulumi";
import { MyCompanyApi } from "@my_org/service_templates";

const sharedInfraStack = new pulumi.StackReference("common-infra-stack");
const config = new pulumi.Config();

// `name` and `env` are defined as in the earlier index.ts example.
new MyCompanyApi(name, { env, sharedInfraStack });
```
Now, your service template repo will be responsible for ensuring that the appropriate envvars are available on your container env, based on the `config` in `Pulumi.<stack>.yaml` for the microservice. So if you have something like `database: true` in your config, your template will know that it either needs to add your DB secret to your Fargate task, or attach the appropriate IAM role to your Fargate task, depending on whether you choose option 1 or 2. The secret/IAM role values can both be retrieved from your common infra stack via the stack reference that you provided. I hope this makes sense.
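Concretely, that branch inside the template might look something like this (a sketch; the `database` config key and the `dbSecretArn` output name are assumptions):

```typescript
import * as pulumi from "@pulumi/pulumi";

const config = new pulumi.Config();
const sharedInfra = new pulumi.StackReference("common-infra-stack");

// Option 1: surface the shared secret as a container secret so the
// service gets DB credentials as envvars. (Option 2 would instead
// attach the token-generating IAM role to the task role.)
const containerSecrets = config.getBoolean("database")
  ? [{ name: "DB_SECRET", valueFrom: sharedInfra.getOutput("dbSecretArn") }]
  : [];
```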
If you have a hard requirement for each microservice to manage its own PG credentials, option 1 is much more challenging because you can't reuse a secret defined in your common infra stack. It might make more sense to go the IAM route and use short-lived creds in that case.
l
Thanks for the thoughts, Paul -- this makes a lot of sense! I will see what we come up with 🙂
🙌 1