# getting-started
c
Hi everyone, I am trying to write a microservice which will provision infrastructure, and I want to store state in an AWS S3 bucket. Is there a way to initialize the automation class using hardcoded AWS credentials, I mean without aws configure or without setting
```
AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN
```
```python
from pulumi import automation as auto


def program():
    pass


auto.create_stack(
    stack_name="Dev",
    project_name="MyfirstProject",
    program=program,
    opts=auto.LocalWorkspaceOptions(
        project_settings=auto.ProjectSettings(
            name="Test",
            runtime="python",
            backend=auto.ProjectBackend(url="s3://test-bucket")
        )
    )
)
```
a
Hey! I am afraid the quick answer is that you should not do that :) I would suggest that you run your script with the proper env, even from your laptop. Like this:
```shell
AWS_ACCESS_KEY_ID=12312 AWS_SECRET_ACCESS_KEY=1231 AWS_SESSION_TOKEN=132 pulumi up
```
This works just fine, sits only in your bash history, and, most importantly, saves you from leaking creds into code.
c
Mine are temporary credentials for provisioning infrastructure, and the code has to be inline, so I am not using the pulumi up shell command.
a
As for a microservice that does provisioning, I've thought about that (I am on my way to IaC), and the most convenient approach I see is to use a CI/CD runner (GitLab in my case) as such a service, and store credentials in CI/CD secrets, secured properly.
c
Even if I do, I can't lock my container to one single set of creds.
I need a way to run multiple jobs on different AWS accounts in the same container.
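A minimal sketch of one way to do that, assuming each job arrives with its own temporary credentials (e.g. from an STS assume-role call; the dict keys mirror an STS response, the values here are placeholders): build a separate environment mapping per job instead of mutating the container's global environment.

```python
import os
import subprocess


def creds_env(temp_creds: dict) -> dict:
    """Layer one job's temporary AWS creds over the current environment."""
    return {
        **os.environ,
        "AWS_ACCESS_KEY_ID": temp_creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": temp_creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": temp_creds["SessionToken"],
    }


def run_job(temp_creds: dict, project_dir: str) -> None:
    # Each job gets its own env mapping, so several jobs targeting
    # different AWS accounts can share one container without
    # clobbering each other's credentials.
    subprocess.run(["pulumi", "up", "--yes"], cwd=project_dir,
                   env=creds_env(temp_creds), check=True)
```

If you drive things through the automation API instead of the CLI, the same mapping can, I believe, be passed as `env_vars=creds_env(...)` to `auto.LocalWorkspaceOptions`, scoping the creds to that one workspace.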
a
And yes, I believe there is stack.set_config() for that case; it should perfectly handle aws:secretKey and other secrets, as denoted here. Thus the stack, and the provider thereof, should receive the secrets and work properly.
c
🙌
thanks man!
let me have a look!
d
You can use AWS assume-role to avoid hardcoding AWS credentials.
b
```shell
pulumi config set aws:accessKey <something> --secret
pulumi config set aws:secretKey <something> --secret
```
c
I can't even assume a role; what I get to provision infra with are already assume-role creds.
b
If you've assumed a role, you should have some temporary creds; you just need to set the environment variables.
a
pulumi config set for the CLI is stack.set_config() for automation 😉
https://www.pulumi.com/docs/reference/pkg/python/pulumi/#automation-api-1 Be careful: automation seems to create stacks on disk and could leak these secrets there. It's better to use secret=True, I believe, while also having some general passphrase configured.
c
Yep
b
> Be careful, for that automation seems to create stacks on disk

The stacks are created in your backend.

> and could leak these secrets there

If you use secret=True, it's encrypted.
One thing you're almost certainly going to run into, though: troubleshooting the automation API with an OSS backend is painful. The service will be a dramatic improvement here.
c
OSS backend?
b
```python
auto.ProjectBackend(url="s3://test-bucket")
```
the S3 backend
c
Yep, that's true, it's quite painful.
b
the pulumi service has a generous free tier and is free forever for individual accounts 🙂
it also offers secret encryption without having to configure a password 🙂
c
It's the only tool of its kind (changed IaC completely).
I tried the exact code below:
```python
from pulumi import automation as auto

creds = {
    "aws_access_key_id": sts["AccessKeyId"],
    "aws_secret_access_key": sts["SecretAccessKey"],
    "aws_region": "us-east-2",
    "aws_session_token": sts["SessionToken"],
    # "pulumi_config_passphrase": "pass_p",
}


def program():
    pass


stack = auto.create_stack(
    stack_name="Dev",
    project_name="MyfirstProject",
    program=program,
    opts=auto.LocalWorkspaceOptions(
        project_settings=auto.ProjectSettings(
            name="Test",
            runtime="python",
            backend=auto.ProjectBackend(url="s3://test-bucket/SampleStack")
        )
    )
)

stack.set_config("aws:accessKey", auto.ConfigValue(value=creds["aws_access_key_id"]))
stack.set_config("aws:secretKey", auto.ConfigValue(value=creds["aws_secret_access_key"]))
stack.set_config("aws:token", auto.ConfigValue(value=creds["aws_session_token"]))
stack.set_config("aws:region", auto.ConfigValue(value="us-east-2"))

stack.refresh(on_output=print)
stack.up(on_output=print, color="always")
```
but I'm getting:

```
error: unable to check if bucket <s3://test-bucket/SampleStack> is accessible: blob (code=Unknown): NoCredentialProviders: no valid providers in chain. Deprecated.
	For verbose messaging see aws.Config.CredentialsChainVerboseErrors
```
b
The credentials that write to the bucket are different from the credentials that interact with the provider.
You'll need to set them as environment variables.
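A minimal sketch of that fix, assuming the same sts dict as in the snippet above (the values here are placeholders for a real STS assume-role response): aws:accessKey and friends only configure the AWS provider, while the S3 state backend resolves credentials through the default AWS chain, so the temp creds have to be in the process environment before create_stack() opens the backend.

```python
import os

# Placeholder values standing in for a real STS assume-role response
sts = {
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "example-secret",
    "SessionToken": "example-token",
}

# Export the creds for the S3 backend *before* create_stack() runs;
# stack.set_config("aws:...") only reaches the provider, not the backend.
os.environ["AWS_ACCESS_KEY_ID"] = sts["AccessKeyId"]
os.environ["AWS_SECRET_ACCESS_KEY"] = sts["SecretAccessKey"]
os.environ["AWS_SESSION_TOKEN"] = sts["SessionToken"]

# ...then create_stack(...) and the stack.set_config() calls as above.
```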