sparse-optician-70334
09/12/2023, 2:01 PM
breezy-caravan-29021
09/12/2023, 2:37 PM
sparse-optician-70334
09/12/2023, 2:49 PM
from pulumi_aws_native import iam, s3
my_bucket = s3.Bucket("my_bucket")
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "{{ role_arn }}"
},
"Action": [
"s3:ListBucket",
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:PutBucketOwnerControl"
],
"Resource": [
"arn:aws:s3:::{{ bucket_name }}",
"arn:aws:s3:::{{ bucket_name }}/*"
]
}
]
}
1. Attach the following policy: https://docs.databricks.com/en/administration-guide/account-settings-e2/credentials.html#option-1-default-deployment-policy (I am already doing this.)
2. Attach a second inline policy to the Pulumi-managed role to give S3 access to this particular bucket.
I am currently exploring Jinja and the JSON templates, but this somehow gets me stuck in apply/Output[T] hell.
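For what it's worth, one way around the Jinja templating is to build the JSON inside an apply and hand the resulting Output[str] straight to the policy resource. A minimal sketch for step 2 (an inline role policy needs no Principal element; `pulumi_managed_role` is a hypothetical name for the Pulumi-managed aws.iam.Role, and `my_bucket` is the bucket defined above):
import json
import pulumi_aws as aws

inline_policy = my_bucket.arn.apply(
    lambda bucket_arn: json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
            ],
            # The resolved bucket ARN takes the place of the {{ bucket_name }} placeholders.
            "Resource": [bucket_arn, f"{bucket_arn}/*"],
        }],
    })
)

aws.iam.RolePolicy(
    "bucket-access",
    role=pulumi_managed_role.name,  # hypothetical: the Pulumi-managed role from step 1
    policy=inline_policy,
)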
billowy-army-68599
sparse-optician-70334
09/12/2023, 3:05 PM
billowy-army-68599
sparse-optician-70334
09/12/2023, 3:15 PM
billowy-army-68599
sparse-optician-70334
09/12/2023, 3:22 PM
billowy-army-68599
sparse-optician-70334
09/12/2023, 3:24 PM
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:PutBucketOwnerControl"
],
"Resource": [
"arn:aws:s3:::{{ bucket_name }}",
"arn:aws:s3:::{{ bucket_name }}/*"
]
}
]
}
I am still very curious to learn how to do this in a better way.
Ideally, however, I can also figure out how
cannot create mws workspaces: MALFORMED_REQUEST: Failed storage configuration validation checks: List,Put,PutWithBucketOwnerFullControl,Delete
is fixed. I had hoped that feeding in the policy would solve this as well.
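In case it helps: that validation error usually points at the bucket policy on the workspace root bucket rather than at the cross-account role. A hedged sketch, assuming pulumi_databricks exposes the databricks_aws_bucket_policy data source as get_aws_bucket_policy_output and that account_id is the Databricks account id used elsewhere:
import pulumi_aws as aws
import pulumi_databricks as databricks

# The data source renders the bucket policy Databricks expects on the root
# bucket, which is what the List/Put/PutWithBucketOwnerFullControl/Delete
# checks exercise.
root_bucket_policy = databricks.get_aws_bucket_policy_output(bucket=my_bucket.id)

aws.s3.BucketPolicy(
    "databricks-root-bucket-policy",
    bucket=my_bucket.id,  # for AWS::S3::Bucket the resource id is the bucket name
    policy=root_bucket_policy.json,
)

databricks.MwsStorageConfigurations(
    "root-storage-config",
    account_id=account_id,
    storage_configuration_name="root-storage",
    bucket_name=my_bucket.id,
)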
billowy-army-68599
sparse-optician-70334
09/12/2023, 8:00 PM
billowy-army-68599
sparse-optician-70334
09/13/2023, 7:35 AM
assume_role_policy = databricks.get_aws_assume_role_policy(external_id=account_id)
is super convenient. In other cases where such a helper is not available, would you still suggest not dropping back to Jinja? How would an approach look there?
Is <https://accounts.cloud.databricks.com> the wrong URL here, i.e. would the workspace URL be required instead? But that would not make sense, as this is an account-level operation.
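Where no ready-made helper exists, one Jinja-free approach is the aws.iam.get_policy_document_output data source, which assembles the JSON from plain arguments and Outputs. A sketch of what the Databricks assume-role policy could look like built that way (external_id is assumed to be the Databricks account id, as in the snippet above; the Databricks AWS account id in the principal is the one from their credential docs):
import pulumi_aws as aws

databricks_aws_account_id = "414351767826"  # Databricks' AWS account, per their credential configuration docs

assume_role_policy = aws.iam.get_policy_document_output(
    statements=[
        aws.iam.GetPolicyDocumentStatementArgs(
            effect="Allow",
            actions=["sts:AssumeRole"],
            principals=[
                aws.iam.GetPolicyDocumentStatementPrincipalArgs(
                    type="AWS",
                    identifiers=[f"arn:aws:iam::{databricks_aws_account_id}:root"],
                )
            ],
            conditions=[
                aws.iam.GetPolicyDocumentStatementConditionArgs(
                    test="StringEquals",
                    variable="sts:ExternalId",
                    values=[external_id],
                )
            ],
        )
    ]
)

# Illustrative role using the generated trust policy.
role = aws.iam.Role("cross-account-role", assume_role_policy=assume_role_policy.json)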
billowy-army-68599
sparse-optician-70334
09/13/2023, 3:41 PM
cross_account_role_policy = databricks.get_aws_cross_account_policy()
cross_account_role_policy_applied = aws.iam.RolePolicy(
"databricks-policy",
role=iam_role.name,
policy=cross_account_role_policy.json,
)
creds = databricks.MwsCredentials(
f"{prefix}-{ascii_env}-db-credentials",
credentials_name=f"{prefix}-{ascii_env}-db-credentials",
account_id=account_id,
role_arn=iam_role.arn,
)
But for me, even with
opts=pulumi.ResourceOptions(
    depends_on=[iam_role, cross_account_role_policy_applied]
),
specified, this fails with an error that the underlying IAM role is not yet initialized.
This seems to be only semi-reproducible and to depend on race conditions, i.e. on how quickly certain resources are created.
I find it strange that the dependencies defined here are not honoured.
It is fixed by running pulumi up a second time.
billowy-army-68599
sparse-optician-70334
09/15/2023, 4:55 PM
_ cross_account_role_policy_applied.id.apply(lamba x: x)
?
billowy-army-68599
lambda
you can sleep for 30s
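One way to wire both suggestions together without a second pulumi up: gate the role ARN that MwsCredentials consumes on the policy resource's id and add the ~30s wait inside the apply, so IAM has time to propagate before Databricks validates the role. A workaround sketch reusing the names from the snippet above, not an officially supported pattern:
import time
import pulumi
import pulumi_databricks as databricks

def _wait_for_iam_propagation(args):
    role_arn, _policy_id = args  # depending on the policy id forces ordering
    time.sleep(30)               # crude propagation delay, per the suggestion above
    return role_arn

role_arn_ready = pulumi.Output.all(
    iam_role.arn, cross_account_role_policy_applied.id
).apply(_wait_for_iam_propagation)

creds = databricks.MwsCredentials(
    f"{prefix}-{ascii_env}-db-credentials",
    credentials_name=f"{prefix}-{ascii_env}-db-credentials",
    account_id=account_id,
    role_arn=role_arn_ready,
)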
sparse-optician-70334
09/15/2023, 5:20 PM