incalculable-engineer-92975
03/24/2020, 1:35 PM
bitter-zebra-93800
03/24/2020, 5:39 PM
busy-magazine-48939
03/26/2020, 4:37 AM
bright-orange-69401
03/26/2020, 10:00 AM
sourceCodeHash of AWS Lambda?
Earlier I posted a Python function that replicates the filebase64sha256 function of Terraform, so that I can use the sourceCodeHash to know whether my Lambda (or its Layer) actually needs to be updated by Pulumi or not.
Today I realised that my weeks of struggling are due to zip files being non-deterministic: if you zip the same file twice you'll get two different checksums 😞
Making a zip deterministic is difficult because of the metadata included in it, so I found that the only way to checksum a zip archive is to sum the CRC of each file, and I made a function that does exactly that (I called it zipbase64sha256).
Unfortunately, every time I use my function to populate sourceCodeHash, my checksum gets overwritten by Pulumi (which I suspect uses the regular, non-deterministic filebase64sha256 from Terraform).
How should I go about making a deterministic checksum, so that if I build my archive again with the same content, Pulumi doesn't nag me to upload it to AWS Lambda?
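(Not from the original thread, but a minimal sketch of the content-based-hash idea in plain Node.js. Instead of summing per-file CRCs it hashes each file's relative path and bytes in sorted order, which is another way to get a checksum that ignores zip metadata entirely. The helper name and directory are made up for illustration.)
"use strict";
const crypto = require("crypto");
const fs = require("fs");
const path = require("path");

// Recursively list files under `dir`; the final sort makes the walk order stable.
function listFiles(dir) {
    return fs.readdirSync(dir, { withFileTypes: true })
        .flatMap(entry => {
            const full = path.join(dir, entry.name);
            return entry.isDirectory() ? listFiles(full) : [full];
        })
        .sort();
}

// Hash each file's relative path and contents, so the same sources always
// produce the same base64-encoded SHA-256, regardless of when they were zipped.
function contentBase64Sha256(dir) {
    const hash = crypto.createHash("sha256");
    for (const file of listFiles(dir)) {
        hash.update(path.relative(dir, file));
        hash.update(fs.readFileSync(file));
    }
    return hash.digest("base64");
}

// Hypothetical usage: feed the result into sourceCodeHash of aws.lambda.Function.
// const sourceCodeHash = contentBase64Sha256("./lambda-src");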
astonishing-gpu-12842
03/26/2020, 9:46 PM
const endpoint = new awsx.apigateway.API("mapboxQuery", {
routes: [
{
path: "/",
method: "GET",
eventHandler: (request, ctx, cb) => {
const AWS = require("aws-sdk");
const ddb = new AWS.DynamoDB.DocumentClient({
apiVersion: "2012-10-08"
});
const tableName = assetTable.name.value;
const params = {
TableName: tableName
};
<!-- code -->
}
}
],
stageName: "dev"
});
I just get this in the output:
handler : "__index.handler"
memorySize : 128
name : "mapboxQuery4c238266-c28e6f7"
publish : false
reservedConcurrentExecutions: -1
role : output<string>
runtime : "nodejs8.10"
timeout : 180
breezy-gold-44713
03/28/2020, 1:40 AM
aws --endpoint-url=http://internalserver.com s3 ls --profile internal-profile
Can I configure Pulumi in such a way that I can leverage an --endpoint-url override for logging in and storing our stacks?
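(Not from the thread, but for illustration: the self-managed S3 backend takes query parameters on the login URL for S3-compatible endpoints, so something along these lines may work. The bucket and host names are placeholders; treat the exact parameter names as an assumption and check the Pulumi backend docs.)
pulumi login 's3://my-state-bucket?endpoint=internalserver.com&region=us-east-1&s3ForcePathStyle=true'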
flat-parrot-25697
03/30/2020, 8:01 AM
acceptable-stone-35112
03/30/2020, 3:08 PM
quiet-morning-24895
03/30/2020, 3:23 PM
metricQueries option when creating an alarm from an existing metric, but I'm not finding much documentation. Thanks in advance!
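(For context, a rough sketch of the shape I'd expect for the metricQueries option on aws.cloudwatch.MetricAlarm, mirroring the metric_query blocks of the underlying Terraform aws_cloudwatch_metric_alarm resource. The names, namespace, dimensions, and threshold below are only examples, not from the thread.)
"use strict";
const aws = require("@pulumi/aws");

// Alarm driven by a metric query instead of a single top-level metric.
const highCpuAlarm = new aws.cloudwatch.MetricAlarm("high-cpu", {
    comparisonOperator: "GreaterThanOrEqualToThreshold",
    evaluationPeriods: 2,
    threshold: 80,
    metricQueries: [{
        id: "m1",
        returnData: true, // this query's result is what the alarm evaluates
        metric: {
            namespace: "AWS/EC2",
            metricName: "CPUUtilization",
            period: 300,
            stat: "Average",
            dimensions: { InstanceId: "i-0123456789abcdef0" },
        },
    }],
});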
some-kitchen-64615
03/30/2020, 10:18 PM
future-diamond-16840
03/31/2020, 7:36 PM
best-lamp-76503
04/01/2020, 9:38 AM
calm-parrot-72437
04/02/2020, 12:05 AM
bitter-dentist-28132
04/02/2020, 2:57 PM
incalculable-engineer-92975
04/03/2020, 6:13 PM
limited-rainbow-51650
04/04/2020, 5:47 PM
bitter-zebra-93800
04/04/2020, 11:54 PM
Diagnostics:
aws:lb:TargetGroup (api-tg1):
error: deleting urn:pulumi:api-server::api-server::aws:lb/targetGroup:TargetGroup::api-tg1: Error deleting Target Group: ResourceInUse: Target group 'arn:aws:elasticloadbalancing:us-west-2:353450002364:targetgroup/api-tg1-2bf40c6/0382bcb6f8b3b67e' is currently in use by a listener or a rule
Is there a right way to pause a listener or rule so it's not in use?
incalculable-engineer-92975
04/06/2020, 6:02 PM
acceptable-stone-35112
04/08/2020, 8:51 AM
limited-rainbow-51650
04/08/2020, 12:44 PM
glamorous-printer-14057
04/08/2020, 3:58 PM
pulumi up always shows the secret fields as needing an update even when unchanged
flaky-baker-91034
04/08/2020, 7:13 PM
"use strict";
const aws = require("@pulumi/aws");
const awsx = require("@pulumi/awsx");
const projet = "ecs-ec2";
const vpc = new awsx.ec2.Vpc(projet);
const cluster = new awsx.ecs.Cluster(projet, { vpc });
const asg = cluster.createAutoScalingGroup("custom", {
templateParameters: { minSize: 2 },
launchConfigurationArgs: { instanceType: "t3.medium" },
});
const nlb = new awsx.lb.NetworkLoadBalancer("nlb", { vpc, external: true });
const listener = nlb.createListener("listener", { port: 80 });
const ec2Service = new awsx.ecs.FargateService("ec2-nginx", {
cluster,
desiredCount: 2,
taskDefinitionArgs: {
containers: {
nginx: {
image: "nginx",
memory: 128,
portMappings: [listener],
},
},
},
});
exports.endpoint = listener.endpoint.hostname;
Tried it 3 times: twice in us-east-2, once in us-east-1. Tried with t2.medium and t3.medium as well. It always fails with the following message:
Diagnostics:
aws:cloudformation:Stack (custom):
error: 1 error occurred:
* creating urn:pulumi:prod::aws-ecs-ec2::awsx:x:ecs:Cluster$awsx:x:autoscaling:AutoScalingGroup$aws:cloudformation/stack:Stack::custom: ROLLBACK_COMPLETE: ["The following resource(s) failed to create: [Instances]. . Rollback requested by user." "Received 0 SUCCESS signal(s) out of 2. Unable to satisfy 100% MinSuccessfulInstancesPercent requirement"]
pulumi:pulumi:Stack (aws-ecs-ec2-prod):
error: update failed
Most of it is taken from here: https://github.com/pulumi/pulumi-awsx/tree/master/nodejs/awsx/ecs
Thanks
quaint-jelly-95055
04/10/2020, 2:03 AM
quaint-jelly-95055
04/10/2020, 2:03 AM
// Step 4: Create a Fargate service task that can scale out.
const appService = new awsx.ecs.FargateService("app-svc", {
cluster,
taskDefinitionArgs: {
container: {
image: img,
cpu: 102 /*10% of 1024*/,
memory: 50 /*MB*/,
portMappings: [{ containerPort: 8000, }],
},
},
desiredCount: 5,
});
chilly-hairdresser-56259
04/10/2020, 1:48 PM
chilly-hairdresser-56259
04/10/2020, 1:53 PM
calm-parrot-72437
04/11/2020, 12:09 AM
quaint-jelly-95055
04/12/2020, 10:08 AM
adventurous-jordan-10043
04/15/2020, 9:12 AM
// Create a VPC for our cluster.
const vpc = new awsx.ec2.Vpc("my-vpc");
const allVpcSubnets = vpc.privateSubnetIds.concat(vpc.publicSubnetIds);
awsx.ec2.Vpc is expecting a second argument, so this will not work.
vpc.privateSubnetIds is a Promise, and you can't concat Promises. What is going on? Is this doc outdated?
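(Not from the thread, but a minimal sketch of one way to combine the two lists, assuming privateSubnetIds and publicSubnetIds are Promises of arrays as described above; allVpcSubnets is then itself a Promise of the combined array.)
// Wait for both subnet lists to resolve, then concatenate them.
const allVpcSubnets = Promise.all([vpc.privateSubnetIds, vpc.publicSubnetIds])
    .then(([privateIds, publicIds]) => privateIds.concat(publicIds));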
bitter-zebra-93800
04/16/2020, 5:16 PM