
creamy-window-21036

09/29/2022, 6:11 PM
I am using inline Python code to create an EKS cluster, and I also have the S3 backend configured this way 👇
project_settings = auto.ProjectSettings(
    name=project_name,
    runtime="python",
    backend={"url": "<s3://bucket-in-different-region-or-other-account>"})
stack = auto.create_or_select_stack(stack_name=stack_name,
                                    project_name=project_name,
                                    program=pulumi_program,
                                    opts=auto.LocalWorkspaceOptions(project_settings=project_settings,
                                                                    secrets_provider=secrets_provider))
I am using pulumi_eks to provision a cluster
import pulumi_eks as eks
eks.Cluster(...)
Is there a way to pass separate credentials to both contexts? I mean, separate creds for pulumi login and separate creds to provision the EKS cluster.
@echoing-dinner-19531
Will you be able to guide me through this?

echoing-dinner-19531

09/29/2022, 7:27 PM
So the S3 state login will use the environment AWS auth (normally AWS_PROFILE), while the EKS creation will default to that, but you can also set it explicitly in stack configuration. So you probably just want to make sure to use
pulumi config set aws:region
and
pulumi config set aws:profile
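With the Automation API you can set the same stack configuration from Python; a minimal sketch, assuming the stack object from your snippet and a hypothetical profile name:
# Mirrors `pulumi config set aws:region` / `pulumi config set aws:profile`,
# so the EKS provider uses this config instead of the backend's environment auth.
stack.set_config("aws:region", auto.ConfigValue(value="us-west-2"))
stack.set_config("aws:profile", auto.ConfigValue(value="eks-deploy"))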

creamy-window-21036

10/02/2022, 12:58 AM
import pulumi
import pulumi_eks as eks
from pulumi import automation as auto
import pulumi_aws as aws


# Credentials used only for the S3 state backend (the "pulumi login" side).
backend_credentials = {
    "AWS_ACCESS_KEY_ID": "abc",
    "AWS_SECRET_ACCESS_KEY": "pqr",
    "AWS_REGION": "us-east-1",
    "AWS_SESSION_TOKEN": "some-session-token",
    "PULUMI_CONFIG_PASSPHRASE": ""
}

# Credentials used by the AWS provider that provisions the EKS cluster.
stack_credential = {
    "region": "us-west-2",
    "access_key": "def",
    "secret_key": "stu",
    "token": "some-session-token-again"
}


def pulumi_program():
    # Explicit provider so the cluster is created with stack_credential
    # rather than the backend credentials in the environment.
    provider = aws.Provider(
        "eks_provider",
        aws.ProviderArgs(
            **stack_credential
        )
    )

    cluster = eks.Cluster(
        'eks-cluster',
        vpc_id="some-vpc",
        public_subnet_ids=["subnet-1", "subnet-2"],
        public_access_cidrs=['0.0.0.0/0'],
        desired_capacity=2,
        min_size=2,
        max_size=2,
        instance_type='t3.micro',
        storage_classes={"gp2": eks.StorageClassArgs(
            type='gp2', allow_volume_expansion=True, default=True, encrypted=True)},
        opts=pulumi.ResourceOptions(provider=provider)
    )
    pulumi.export("kubeconfig", cluster.kubeconfig)


stack = auto.create_stack(
    stack_name="TestStack",
    project_name="TestProject",
    # Pass the program callable itself; calling it here would try to create
    # resources outside of a Pulumi deployment.
    program=pulumi_program,
    opts=auto.LocalWorkspaceOptions(
        project_settings=auto.ProjectSettings(
            name="TestProject",
            runtime="python",
            backend=auto.ProjectBackend(
                url="<s3://some-bucket>"
            ),
        ),
        # These env vars only apply to the state backend, not the AWS provider.
        env_vars=backend_credentials,
    ),
)

stack.refresh(on_output=print)
stack.up(on_output=print, color="always")
Does this approach look good?
I tried the above approach and am getting an error even though I am creating a fresh stack:
stderr: error: the stack is currently locked by 1 lock(s). Either wait for the other process(es) to end or delete the lock file with
pulumi cancel
.
s3://some-bucket/TestPulumiPOC/.pulumi/locks/Dev/f4bb019a-95a7-4359-ba09-7f3e21fe5533.json : created by amrish@my_book (pid 61199) at 2022-10-01T23:03:05-04:00

echoing-dinner-19531

10/02/2022, 7:40 AM
Stack locks are by name for the filestate backend, so
pulumi cancel
should delete it. The automation code looks like what I would expect.
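If you prefer to clear it from the same script, the Automation API has an equivalent call; a minimal sketch, assuming the stack object created above:
# Same effect as running `pulumi cancel` for this stack: releases the stale
# lock so the next refresh/up can proceed.
stack.cancel()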