sparse-caravan-37954
03/08/2023, 1:59 PM
user_data, but it doesn't seem to work.
# Create an EC2 instance
instance = aws.ec2.Instance(
    config.INSTANCE_NAME,
    instance_type=config.INSTANCE_TYPE,
    ami=config.AMI,
    key_name=config.KEY_NAME,
    vpc_security_group_ids=[security_group.id],
    user_data=f"""#!/bin/bash
yum update -y
amazon-linux-extras install docker
service docker start
usermod -a -G docker ec2-user
chkconfig docker on
docker login --username AWS --password $(aws ecr get-login-password --region {config.AWS_REGION})
docker pull {image.image_uri}
docker run -d -p 80:80 {image.image_uri} > /hello.txt""",
)
If I ssh into the instance and run those commands, the docker login --username AWS --password $(aws ecr get-login-password --region {config.AWS_REGION}) step fails with:
Unable to locate credentials. You can configure credentials by running "aws configure".
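An editorial note on the error above: "Unable to locate credentials" usually means the instance has no IAM role attached, so the AWS CLI inside user_data has nothing to authenticate with; the instance needs an instance profile whose role can read from ECR. A minimal sketch of the corrected script, which also pipes the token via --password-stdin to the registry host (the region and image URI below are illustrative placeholders, not values from the thread):

```python
# Sketch only: assumes the instance gets an IAM instance profile whose role allows
# ecr:GetAuthorizationToken, ecr:BatchGetImage and ecr:GetDownloadUrlForLayer.
AWS_REGION = "us-east-1"  # illustrative
IMAGE_URI = "123456789012.dkr.ecr.us-east-1.amazonaws.com/hello:latest"  # illustrative
REGISTRY = IMAGE_URI.split("/")[0]  # docker login wants the registry host, not the image

user_data = f"""#!/bin/bash
yum update -y
amazon-linux-extras install docker
service docker start
usermod -a -G docker ec2-user
chkconfig docker on
aws ecr get-login-password --region {AWS_REGION} | docker login --username AWS --password-stdin {REGISTRY}
docker pull {IMAGE_URI}
docker run -d -p 80:80 {IMAGE_URI}
"""
```

In the Pulumi program this string would be passed as user_data, and the instance would additionally need an iam_instance_profile argument pointing at the profile; without one, the CLI fails exactly as shown above no matter how the login command is written.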
How do I make it work?
astonishing-dentist-11149
03/08/2023, 7:19 PM
astonishing-exabyte-93491
03/08/2023, 7:46 PM
aws.eks.Cluster and aws.eks.NodeGroup resource providers. I’ve done this in the past on GCP, so the strategy was pretty straightforward:
1. Provide a new NodePool with autoscaling enabled.
2. Cordon, then drain the old nodes using kubectl.
3. Decommission the node pool with the old Kubernetes version, which here equates to removing the resource from the Pulumi stack.
Again, my goal is to prevent worker nodes from being unreachable while upgrading to a more recent version.
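Step 2 of the list above can be sketched as a command generator; a hedged sketch (node names and the selection label are illustrative), which cordons every old node first so evicted pods cannot be rescheduled onto a sibling that is about to be drained:

```python
# Sketch: emit the kubectl commands for rotating out an old node group.
# In practice the node list would come from something like:
#   kubectl get nodes -l eks.amazonaws.com/nodegroup=<old-group-name>
def rotation_commands(old_nodes):
    # Cordon everything first: marks nodes unschedulable without evicting pods.
    commands = [f"kubectl cordon {node}" for node in old_nodes]
    # Then drain each node, evicting workloads onto the new node group.
    commands += [
        f"kubectl drain {node} --ignore-daemonsets --delete-emptydir-data"
        for node in old_nodes
    ]
    return commands
```

Run against the old group's nodes, this yields all cordon commands followed by all drain commands, matching the cordon-then-drain order described above.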
I’m operating under the following assumptions:
1. Control plane upgrades do not compromise data plane workloads in any way, except that the API server will be unreachable for a few minutes.
2. There cannot be more than a two-minor-version delta between master and worker nodes.
Your insight will be very much appreciated.
Many thanks,
rough-jordan-15935
03/09/2023, 2:55 AM
enough-fountain-96877
03/09/2023, 3:05 PM
boundless-van-74484
03/09/2023, 7:45 PM
lifecycleRules property within aws.s3.Bucket
Hey everyone, I’m trying to create a BucketLifecycleConfiguration resource for an existing S3 bucket, and I’m running into this error when the lifecycle config resource attempts to create:
aws:s3control:BucketLifecycleConfiguration (lifecycle-config):
error: 1 error occurred:
* error parsing S3 Control Bucket ARN (): unknown format
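An editorial note on this error: the aws.s3control module targets S3 Control buckets (e.g. S3 on Outposts), which is why a regular bucket's ARN fails the ARN-format parse; for a plain S3 bucket the rule belongs in the bucket's own lifecycleRules input instead. A hedged sketch of the equivalent rule shape, assuming the Pulumi AWS classic provider's field names (expressed as a plain dict for illustration, not executed against AWS):

```python
# Sketch: the abort-incomplete-multipart-upload rule from the question,
# expressed as a lifecycleRules entry for aws.s3.Bucket.
def lifecycle_rule(rule_id, days):
    return {
        "id": rule_id,
        "enabled": True,
        # replaces s3control's abortIncompleteMultipartUpload.daysAfterInitiation
        "abortIncompleteMultipartUploadDays": days,
    }
```

In the TypeScript program this shape would go into the bucket's lifecycleRules array rather than a separate aws.s3control.BucketLifecycleConfiguration resource.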
Here’s how I am creating the resources:
const bucket = new aws.s3.Bucket('bucket-name', {
  acl: 'private',
  corsRules: [
    {
      allowedHeaders: ['Content-Type', 'Content-Disposition'],
      allowedMethods: ['PUT', 'GET', 'HEAD'],
      allowedOrigins: [`https://${config.require('webPublicDomain')}`],
      exposeHeaders: [
        'Content-Type',
        'Content-Disposition',
        'x-amz-id-2',
        'x-amz-request-id',
        'x-amz-server-side-encryption',
        'ETag',
      ],
      maxAgeSeconds: 3000,
    },
  ],
  serverSideEncryptionConfiguration: {
    rule: {
      applyServerSideEncryptionByDefault: {
        sseAlgorithm: 'AES256',
      },
    },
  },
  versioning: {
    enabled: true,
  },
});
new aws.s3.BucketPublicAccessBlock('access-block', {
  bucket: bucket.id,
  ignorePublicAcls: true,
  restrictPublicBuckets: true,
  blockPublicAcls: true,
  blockPublicPolicy: true,
});
new aws.s3control.BucketLifecycleConfiguration('lifecycle-config', {
  bucket: bucket.arn,
  rules: [
    {
      id: 'my-rule',
      abortIncompleteMultipartUpload: {
        daysAfterInitiation: 2,
      },
    },
  ],
});
The only thing I’ve added is the aws.s3control.BucketLifecycleConfiguration resource. When I run pulumi preview --diff locally, the correct bucket ARN for the lifecycle config resource is shown and there are no errors. When I run pulumi up --yes in CI, I get that error.
Both local and CI versions are 3.57.1. Any help is appreciated!
most-mouse-38002
03/10/2023, 8:48 AM
action:*, but I also can’t help every single developer with whatever minor action they need added to their runner (it also pollutes our Pulumi Service history with a bunch of failures). I know I have asked this before, but I would love to get some input from real-world experience. Do people actually list out every specific action in each GitHub Action, or do they use wildcards?
polite-umbrella-11196
03/11/2023, 10:58 PM
aws:ecs:Service?
fresh-spring-82225
03/12/2023, 1:10 AM
astonishing-exabyte-93491
03/13/2023, 5:10 PM
dry-journalist-60579
03/13/2023, 10:12 PM
dev, staging, and prod stacks, each in a separate account. Do I need to set up the OIDC provider and role to assume in every account? Or can I have one and somehow use cross-account role assumption?
famous-jelly-72366
03/14/2023, 9:42 AM
JSON.toString({...}) :S (for TypeScript)
most-state-94104
03/14/2023, 1:24 PM
able-hospital-16256
03/14/2023, 3:55 PM
bright-eye-26627
03/15/2023, 10:22 AM
engineVersion to 14 and tried to run pulumi preview, but I get this very strange error:
panic: fatal: An assertion has failed: Expected diff to not require deletion or replacement during Update of <urn redacted>
This worked in another environment, so I'm not sure what is causing this. Anyone seen this before?
tall-lion-84030
03/15/2023, 11:03 AM
FargateService with a taskDefinitionArgs, and gave it a taskRole and executionRole with a role containing the right permissions to retrieve secrets (‘secretsmanager:GetSecretValue’, ‘kms:Decrypt’ and ‘ssm:GetParameters’).
But I’m stuck with an error:
ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve secrets from ssm: service call has been retried 1 times): Invalid ssm parameters
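An editorial note on the error above: ECS resolves each valueFrom in a container definition's secrets block by its ARN prefix, and "Invalid ssm parameters" typically surfaces when a value intended for Secrets Manager is not a full, same-region Secrets Manager ARN, so ECS falls back to treating it as an SSM parameter name. A hedged sketch of that classification rule (the ARNs below are illustrative placeholders):

```python
# Sketch: classify how ECS will interpret a `valueFrom` in a task definition's
# secrets block. A plain string (no secretsmanager ARN) is looked up in SSM
# Parameter Store, which can produce "Invalid ssm parameters" when a Secrets
# Manager secret was actually intended.
def secret_source(value_from):
    if value_from.startswith("arn:aws:secretsmanager:"):
        return "secretsmanager"
    # Full SSM ARNs and bare parameter names both resolve via SSM.
    return "ssm"
```

Checking each valueFrom in the task definition against this rule (and confirming region and account match the task's) is a quick way to see which store ECS is really querying.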
I was unaware that Secrets Manager uses the SSM parameter store under the hood, since I have not set any parameter store values. Does anyone have any idea how to figure this out? Thanks in advance 🙏
bumpy-laptop-30846
03/15/2023, 2:13 PM
quaint-jelly-14306
03/15/2023, 4:30 PM
straight-yacht-47796
03/15/2023, 5:14 PM
name attribute, but I still keep getting a DB identifier nameXXXXXXX.
Snippet of code:
const rds = new aws.rds.Instance(dbName, {
  name: dbName,
  engine: 'mysql',
  username: mysqlUser,
  password: mysqlPassword,
  ...
});
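An editorial note on the snippet above: in the Pulumi AWS provider, name on aws.rds.Instance sets the initial database created inside the instance, while the console's "DB identifier" comes from the identifier input; when identifier is omitted, Pulumi auto-names it from the resource name plus a random suffix, which is exactly the nameXXXXXXX pattern described. A hedged sketch of the two inputs side by side (as a plain dict, Python for illustration; values are placeholders):

```python
# Sketch: the two distinct "names" on aws.rds.Instance.
def rds_instance_args(db_name):
    return {
        "identifier": db_name,  # the DB identifier shown in the RDS console
        "name": db_name,        # the initial database created inside the instance
        "engine": "mysql",
    }
```

Setting identifier explicitly (or accepting Pulumi's auto-naming, which avoids collisions across stacks) is the usual way to control what appears in the console.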
bored-branch-92019
03/16/2023, 4:16 PM
nginx:latest (for example) + add an ALB as well, just to make sure that this works and I can reach it at a public IP. However, I am running into the following issue and am not really sure what the cause is:
error: Error: invocation of aws:ec2/getVpc:getVpc returned an error: invoking aws:ec2/getVpc:getVpc: 1 error occurred:
* no matching EC2 VPC found
at Object.callback (/snapshot/awsx/node_modules/@pulumi/pulumi/runtime/invoke.js:148:33)
at Object.onReceiveStatus (/snapshot/awsx/node_modules/@grpc/grpc-js/src/client.ts:338:26)
at Object.onReceiveStatus (/snapshot/awsx/node_modules/@grpc/grpc-js/src/client-interceptors.ts:426:34)
at Object.onReceiveStatus (/snapshot/awsx/node_modules/@grpc/grpc-js/src/client-interceptors.ts:389:48)
at /snapshot/awsx/node_modules/@grpc/grpc-js/src/call-stream.ts:276:24
at processTicksAndRejections (node:internal/process/task_queues:78:11)
error: Error: failed to register new resource pulumi-service [awsx:ecs:FargateService]: 2 UNKNOWN: invocation of aws:ec2/getVpc:getVpc returned an error: invoking aws:ec2/getVpc:getVpc: 1 error occurred:
* no matching EC2 VPC found
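An editorial note on the error above: when no VPC or subnets are supplied, awsx falls back to looking up the account's default VPC, and "no matching EC2 VPC found" is what that lookup returns in a region without one. Passing the network configuration explicitly avoids the lookup entirely; a hedged sketch of its shape (subnet and security-group ids are illustrative placeholders):

```python
# Sketch: build an explicit networkConfiguration for a Fargate service so the
# default-VPC lookup never runs.
def network_configuration(subnet_ids, security_group_ids, public=True):
    return {
        "subnets": list(subnet_ids),
        "securityGroups": list(security_group_ids),
        "assignPublicIp": public,  # needed to reach the task from the internet
    }
```

In the Pulumi program this dict's shape corresponds to the service's networkConfiguration input, with the subnets and security groups taken from a VPC the program creates or looks up by id.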
brief-gold-99163
03/16/2023, 5:15 PM
pulumi up -r on a stack with:
export const directory = new aws.directoryservice.Directory(
  'credijusto',
  {
    edition: 'Standard',
    name: 'ds.credijusto.info',
    password: '',
    size: 'Small',
    type: 'MicrosoftAD',
    vpcSettings: {
      subnetIds: ['subnet-0d8a88b0bcacb298c', 'subnet-0a10d5aad6b1ada7c'],
      vpcId: vpc.id,
    },
    tags: config.tags,
  },
  {
    protect: true,
  },
);
I receive:
Diagnostics:
aws:directoryservice:Directory (credijusto):
error: aws:directoryservice/directory:Directory resource 'credijusto' has a problem: Missing required argument: The argument "password" is required, but no definition was found.. Examine values at 'Directory.Password'.
So, at first I realised that in the new version of the aws npm library the password can't be empty. So I put in the password that the Admin user has on the AD, but then Pulumi tries to recreate the resource, and I don't want that.
I think there is a bug, because if I put 'password' in ignoreChanges, Pulumi still says that password is mandatory.
So at this point, I think I have no solution.
brief-gold-99163
03/16/2023, 5:16 PM
careful-family-14644
03/17/2023, 8:09 PM
crooked-sunset-90921
03/20/2023, 1:38 AM
awsx.ecs.Cluster
green-whale-1001
03/20/2023, 2:46 PM
straight-arm-50771
03/20/2023, 2:54 PM
code: Cannot assign type 'string' to type 'archive'
from below:
service:
  type: aws:lambda:Function
  properties:
    role: ${lambda-role.arn}
    code: s3_bucket
    memorySize: 512
    runtime: "go1.x"
    s3Bucket: mybucket
    s3Key: service.zip
    timeout: 15
    environment:
      variables:
        foo: bar
Edit: solved. The documentation is misleading; code should not be used for type s3_bucket.
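Per the edit above, a hedged sketch of the working shape: when the package lives in S3, only s3Bucket/s3Key are set, and code (an archive-typed input that expects a local file) is dropped entirely. Values carried over from the snippet:

```yaml
service:
  type: aws:lambda:Function
  properties:
    role: ${lambda-role.arn}
    memorySize: 512
    runtime: "go1.x"
    s3Bucket: mybucket   # package location comes from s3Bucket/s3Key,
    s3Key: service.zip   # so no `code:` entry (archive type) is set
    timeout: 15
    environment:
      variables:
        foo: bar
```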
rough-jewelry-40643
03/21/2023, 12:25 PM
elegant-activity-51782
03/21/2023, 1:08 PM
dry-journalist-60579
03/21/2023, 9:47 PM
ComponentResource. When I rerun the stack, it doesn’t recognize it as an update and instead tries to recreate the resource. It fails because `creating IAM Role (AWSControlTowerAdmin): EntityAlreadyExists: Role with name AWSControlTowerAdmin already exists`… any thoughts on the correct approach to this?
helpful-receptionist-73337
03/21/2023, 9:47 PM
secret_string param I am using an apply function to transform the Output. The issue I am having is that even though the secret is dependent on the private link, the apply runs before the private link finishes, causing my build to error out. Does anyone know how to make the apply wait until a resource has finished being created?
Here is a mock code snippet to help with context:
aws_endpoint_service = aws.ec2.VpcEndpoint(
    f"{environment}-aws-vpc-endpoint",
    vpc_id=vpc_configs["vpc_id"],
    service_name=mongo_private_link_endpoint.endpoint_service_name,
    vpc_endpoint_type="Interface",
    subnet_ids=vpc_configs["subnet_ids"],
    security_group_ids=vpc_configs["security_group_ids"],
)
mongo_private_link_endpoint_service = mongodbatlas.PrivateLinkEndpointService(
    f"{environment}-PrivateLinkEndpointService",
    project_id=mongo_private_link_endpoint.project_id,
    private_link_id=mongo_private_link_endpoint.private_link_id,
    endpoint_service_id=aws_endpoint_service.id,
    provider_name="AWS",
)
cluster = mongodbatlas.get_cluster(
    name=mongo_configs["cluster_name"], project_id=mongo_configs["project_id"]
)
secret = aws.secretsmanager.Secret(
    resource_name=f"{environment}_mongo_connections",
    name=f"{environment}_mongo_connections",
    opts=ResourceOptions(delete_before_replace=True),
    recovery_window_in_days=0,
)
secret_version = aws.secretsmanager.SecretVersion(
    resource_name=f"{environment}_mongo_connections",
    secret_id=secret.arn,
    # the id does not exist until the endpoint service is created
    # but the apply runs before it finishes causing a failure
    secret_string=cluster.connection_strings.apply(
        lambda x: json.dumps(
            {"private_connection": f"{x[0].get(mongo_private_link_endpoint_service.id).split('://')[1]}"}
        )
    ),
    opts=ResourceOptions(depends_on=[mongo_private_link_endpoint_service]),
)
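An editorial note on the snippet above: two things conspire here. mongodbatlas.get_cluster is a plain invoke that resolves when the program is constructed, so its result does not wait for the endpoint service; and mongo_private_link_endpoint_service.id is itself an Output that cannot be read inside another Output's lambda. One common pattern is to combine both values with pulumi.Output.all and keep the string-building in a plain function; a hedged sketch (the data shape is illustrative: the first connection-strings entry is assumed to map endpoint-service ids to mongodb:// URIs):

```python
import json

# Sketch of the fix: do the transformation in a plain function and feed it both
# *resolved* values, so nothing inside the lambda touches an unresolved Output.
def build_secret_string(connection_strings, endpoint_service_id):
    uri = connection_strings[0][endpoint_service_id]
    return json.dumps({"private_connection": uri.split("://")[1]})

# Pulumi wiring (not executed here):
#   secret_string=pulumi.Output.all(
#       cluster_output.connection_strings,        # e.g. from an Output-returning lookup
#       mongo_private_link_endpoint_service.id,   # forces a wait on the endpoint service
#   ).apply(lambda args: build_secret_string(args[0], args[1]))
```

Because the service id flows through Output.all, the apply cannot run until the PrivateLinkEndpointService has been created, which replaces the depends_on workaround that the plain invoke ignores.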