broad-dog-22463
05/26/2022, 6:51 PM
examples
folder. We really welcome any feedback 🙂
acoustic-spring-42110
05/27/2022, 4:05 PM
straight-laptop-81153
05/27/2022, 5:34 PM
acoustic-spring-42110
05/27/2022, 5:35 PM
vpc
and wrap it like this (TypeScript):
const vpc = new awsx.ec2.Vpc("vpc", {vpc: infraRef.getOutput("vpc") as any})
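For context, a minimal sketch of how infraRef might be created, assuming the referenced stack exports an output named vpc; the stack name is a placeholder, and the wrap line is repeated verbatim from the message above.
import * as pulumi from "@pulumi/pulumi";
import * as awsx from "@pulumi/awsx";
// Hypothetical reference to the stack that owns the VPC.
const infraRef = new pulumi.StackReference("my-org/infra/prod");
// The wrap suggested above: hand the referenced output to awsx.
const vpc = new awsx.ec2.Vpc("vpc", { vpc: infraRef.getOutput("vpc") as any });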
acoustic-spring-42110
05/27/2022, 5:35 PM
straight-laptop-81153
05/27/2022, 5:41 PM
strong-helmet-83704
05/27/2022, 8:38 PM
pulumi-aws-5.5.0
I started getting this error during venv initialization
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
After rolling back to pulumi-aws-5.4.0
in my project requirements.txt it goes away.
crooked-microphone-43180
05/28/2022, 5:19 PM
node_modules
folder. One function might use one package while another doesn't, but function serialization doesn't discriminate between which packages are actually used. This means they all grow in size as more packages are included in the stack/project. Has anyone run across this issue? The compressed zip can't be over 50MB, and the uncompressed size can't be over 250MB. I know you can instead use a container to run the Lambda function, but how would you call AWS functions inside of it (like s3.putObject
) without just importing the aws-cli and executing it?
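A sketch of one answer (not from the thread): inside a container-based Lambda the AWS SDK for JavaScript can be called directly, with credentials coming from the function's execution role, so the aws-cli is not needed; the bucket and key below are hypothetical.
// handler.ts inside the container image (assumes a Node.js Lambda base image).
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({});
export const handler = async () => {
    // Credentials are provided by the Lambda execution role.
    await s3.send(new PutObjectCommand({
        Bucket: "my-bucket",      // hypothetical bucket
        Key: "example.txt",
        Body: "hello from lambda",
    }));
    return { statusCode: 200 };
};
few-easter-31331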
05/30/2022, 7:45 PM
curved-kitchen-24115
05/31/2022, 10:27 PM
limited-rain-96205
06/01/2022, 12:20 AM
witty-helmet-38026
06/01/2022, 8:54 AM
{
  "widgets": [
    {
      "type": "log",
      "x": 0,
      "y": 0,
      "width": 24,
      "height": 17,
      "properties": {
        "query": "SOURCE '/aws/lambda/log-group-name' | filter LogLevel = \"Error\" | limit 10",
        "region": "eu-west-1",
        "stacked": false,
        "title": "Errors",
        "view": "table"
      }
    }
  ]
}
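For reference, a minimal sketch (not from the thread) of how a widget body like the one above can be attached to a dashboard with the aws classic provider; the dashboard name is illustrative and the widget JSON is taken from the message.
import * as aws from "@pulumi/aws";
// Serialize the widget JSON above into a CloudWatch dashboard body.
const dashboard = new aws.cloudwatch.Dashboard("errors-dashboard", {
    dashboardName: "errors-dashboard",
    dashboardBody: JSON.stringify({
        widgets: [{
            type: "log",
            x: 0,
            y: 0,
            width: 24,
            height: 17,
            properties: {
                query: "SOURCE '/aws/lambda/log-group-name' | filter LogLevel = \"Error\" | limit 10",
                region: "eu-west-1",
                stacked: false,
                title: "Errors",
                view: "table",
            },
        }],
    }),
});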
big-potato-91793
06/01/2022, 2:16 PM
aws classic
provider of pulumi.
For some reason, even if I add dependsOn
on my primary RDSCluster
to point to the GlobalCluster,
it seems that pulumi never really waits for the cluster to be created. So instead of having one cluster with multiple regions, we got clusters that are separated and never linked together.
Any idea what I'm doing wrong?
fast-florist-41572
06/01/2022, 2:19 PM
KeyName
from an ec2 instance that was created, and it refuses to see it being deleted, even if I do a refresh on the state. I am able to change it and it knows it has changed.
fast-florist-41572
06/01/2022, 2:23 PM
famous-needle-81667
06/01/2022, 3:06 PM
user_data = base64encode(
templatefile("../templates/bash_script.sh.tftpl", {
internal_lb_dns_name = aws_lb.aws-internal-load-balancer.dns_name
}
)
)
How can I achieve similar results with pulumi?
If it were not a templated file it would be easy, namely:
UserData: pulumi.StringPtr(base64.StdEncoding.EncodeToString(bashScriptContent))
However, I need to inject some variables into that script that will only be known once some resources have been created.
Any help would be appreciated; this is part of my Master's degree thesis and it would be a shame if this couldn't be solved in Pulumi 🙂
//EDIT: I'm writing in Golang
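A sketch of one common pattern (not from the thread), shown in TypeScript for brevity; in Go the same shape is expressed with ApplyT (or pulumi.Sprintf) on the load balancer's DnsName output. The load balancer is a hypothetical stand-in for aws_lb.aws-internal-load-balancer, and the placeholder name matches the template variable.
import * as fs from "fs";
import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";
// Hypothetical internal load balancer (subnets and other required settings omitted).
const internalLb = new aws.lb.LoadBalancer("internal", { internal: true });
// Read the template once at deployment time.
const template = fs.readFileSync("../templates/bash_script.sh.tftpl", "utf8");
// Render the template only once the DNS name is known, then base64-encode it for UserData.
const userData: pulumi.Output<string> = internalLb.dnsName.apply(dnsName =>
    Buffer.from(template.replace(/\$\{internal_lb_dns_name\}/g, dnsName)).toString("base64"),
);
quick-telephone-15244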
06/01/2022, 5:32 PM
pulumi plugin ls
as well as dumping requirements.txt
(and checking my venv by activating it and running pip freeze
) all report pulumi-aws
as 5.6.0;
however, when I try to pulumi up
it keeps trying to use default_5_4_0
, which is no bueno
quick-telephone-15244
06/01/2022, 5:32 PM
big-potato-91793
06/01/2022, 7:49 PM
aws.rds.GlobalCluster
resource. I assign it to my aws.rds.Cluster
using the outputs of the first resource. For some reason, pulumi creates the Global cluster but never seems to pass the globalClusterIdentifier
to the aws.rds.Cluster
, resulting in separate regional clusters.
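An illustrative sketch (not from the thread), in TypeScript with the aws classic provider: passing the GlobalCluster's id output directly as globalClusterIdentifier gives pulumi the dependency, so it waits for the global cluster before creating the regional one; all names, engine settings, and credentials are placeholders.
import * as aws from "@pulumi/aws";
// Hypothetical global cluster; its id is the global cluster identifier.
const globalCluster = new aws.rds.GlobalCluster("global", {
    globalClusterIdentifier: "my-global-cluster",
    engine: "aurora-postgresql",
});
// Referencing the output (rather than a plain string) creates an implicit dependsOn.
const primary = new aws.rds.Cluster("primary", {
    engine: "aurora-postgresql",
    globalClusterIdentifier: globalCluster.id,
    masterUsername: "root",           // placeholder; use config/secrets in real code
    masterPassword: "change-me-123",  // placeholder
    skipFinalSnapshot: true,
});
clean-rose-47860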
06/02/2022, 12:36 PM
export const appSettingsKeyArn = environmentStackRef.getOutput("appSettingsKeyArn");
var param = new aws.ssm.Parameter('name', {
name: 'name',
type: 'String',
value: 'value',
keyId: appSettingsKeyArn,
});
Everything in the stack is created fine and the system is working as it should. However, every time I run pulumi up
it attempts to update the SSM parameter resources with a new keyId
. Looking at the details of the plan, I can see that the key id is never stored in state, so pulumi is constantly trying to update it.
Does anybody know why this is happening?
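One hedged workaround sketch (not a confirmed diagnosis), reusing the snippet above: if the provider keeps reporting a diff on keyId, for example because AWS stores a key id or alias while the stack passes the full ARN, the property can be pinned with the ignoreChanges resource option, or the key id can be passed instead of the ARN.
var param = new aws.ssm.Parameter('name', {
    name: 'name',
    type: 'String',
    value: 'value',
    keyId: appSettingsKeyArn,
}, {
    // Stops pulumi from attempting to update keyId on every run.
    ignoreChanges: ['keyId'],
});
straight-laptop-81153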
06/03/2022, 1:14 AM
quaint-book-39362
06/05/2022, 1:18 AM
aws.lambda.CallbackFunction
? Learning pulumi, and I found I can use the lambda runtime's v2 SDK through `aws.sdk` (from @pulumi/aws
).
quaint-book-39362
06/05/2022, 1:18 AM
full-receptionist-30203
06/05/2022, 6:46 PM
quaint-guitar-13446
06/06/2022, 7:01 AM
ephemeralStorage
. I have updated to @pulumi/awsx:1.0.0-beta.7
so that I can try to leverage support for ephemeral storage.
I'm running into an issue when provisioning:
error: Error: invocation of aws:ec2/getVpc:getVpc returned an error: invoking aws:ec2/getVpc:getVpc: 1 error occurred:
* no matching EC2 VPC found
// snipped
Error: failed to register new resource REDACTED [awsx:ecs:FargateService]: 2 UNKNOWN: invocation of aws:ec2/getVpc:getVpc returned an error: invoking aws:ec2/getVpc:getVpc: 1 error occurred:
* no matching EC2 VPC found
Neither one of awsx.ecs.FargateService
or aws.ecs.Cluster
can receive a vpc
property.
quaint-guitar-13446
06/06/2022, 7:02 AM
vpc
) property. So I'm a bit lost here.
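A hedged sketch, assuming @pulumi/awsx 1.0.0-beta.x and TypeScript: "no matching EC2 VPC found" is typically what awsx reports when it falls back to looking up the default VPC, so one way around it is to create (or reference) a VPC and pass its subnets through the service's networkConfiguration instead of a vpc property; names and container settings are illustrative.
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";
const vpc = new awsx.ec2.Vpc("vpc", {});
const cluster = new aws.ecs.Cluster("cluster", {});
const service = new awsx.ecs.FargateService("svc", {
    cluster: cluster.arn,
    // Explicit networking, so awsx does not try to discover a default VPC.
    networkConfiguration: {
        subnets: vpc.privateSubnetIds,
    },
    taskDefinitionArgs: {
        container: {
            name: "app",
            image: "nginx:latest",
            cpu: 256,
            memory: 512,
            essential: true,
        },
    },
});
full-receptionist-30203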
06/06/2022, 8:24 PM
pulumi.FileArchive
at the same path?
purple-megabyte-83002
06/07/2022, 7:29 PM
pulumi up
it will also build and deploy a new docker image. That is expected from my pulumi script, but my intention this time is just to tweak the setting, not to deploy a new docker image of the same code. What is the recommended way to handle such a case?
purple-megabyte-83002
06/07/2022, 10:18 PM
// Step 3: Build and publish a Docker image to a private ECR registry.
const ApiImage = awsx.ecs.Image.fromDockerBuild('api-image', {
context: path.resolve(__dirname, '../../../../'),
dockerfile: path.resolve(__dirname, '../../', 'Dockerfile'),
});
Is there a way to make it build a new docker image only if the content of the context has changed? Something like storing a SHA256 hash of the folder contents and checking against it to decide whether to build or reuse the last one?
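A sketch of the hashing idea from the question (not from the thread; Docker's own layer caching already covers much of this): walk the build context in a deterministic order and hash file paths plus contents, so the digest only changes when the context does. Symlinks and ignore rules are skipped for brevity.
import * as crypto from "crypto";
import * as fs from "fs";
import * as path from "path";
// Produce a stable SHA-256 digest of a directory's contents.
function hashDirectory(dir: string): string {
    const hash = crypto.createHash("sha256");
    const walk = (current: string) => {
        const entries = fs.readdirSync(current, { withFileTypes: true })
            .sort((a, b) => a.name.localeCompare(b.name));
        for (const entry of entries) {
            const full = path.join(current, entry.name);
            if (entry.isDirectory()) {
                walk(full);
            } else if (entry.isFile()) {
                hash.update(path.relative(dir, full)); // include the path so renames change the digest
                hash.update(fs.readFileSync(full));
            }
        }
    };
    walk(dir);
    return hash.digest("hex");
}
// e.g. use the digest as an image tag, so an unchanged context maps to the same tag.
const contextHash = hashDirectory(path.resolve(__dirname, "../../../../"));
purple-megabyte-83002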
06/07/2022, 10:20 PM