abundant-airplane-93796
06/14/2019, 1:52 PM
brave-salesmen-42327
06/14/2019, 3:51 PM
Running docker as docker -tlsverify=false works, but it doesn't work when a pulumi script is running.
Any suggestions welcome, not limited to turning off TLS verification in the pulumi API calls to docker.
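Rather than disabling verification, one option is to configure an explicit docker provider that knows where the daemon and its TLS material live, and pass it to the Docker resources. A minimal sketch, assuming the @pulumi/docker Provider accepts host and certPath the way the underlying Terraform provider does; the host URL and cert directory are placeholders:

import * as docker from "@pulumi/docker";

// Hypothetical explicit provider configuration; host and certPath are placeholders.
const remoteDocker = new docker.Provider("remote-docker", {
    host: "tcp://docker.example.com:2376",
    certPath: "/home/me/.docker", // directory holding ca.pem, cert.pem, key.pem
});

// Resources opt into the explicit provider via resource options.
const image = new docker.RemoteImage("nginx", { name: "nginx:1.17" }, { provider: remoteDocker });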
most-judge-33290
06/14/2019, 4:53 PM
import * as awsx from "@pulumi/awsx";

// Assumes `cluster` is an awsx.ecs.Cluster defined earlier in the program.
const alb = new awsx.elasticloadbalancingv2.ApplicationLoadBalancer("net-lb", {
    external: true,
    securityGroups: cluster.securityGroups,
});
const web = alb.createListener("web", { port: 80, external: true });

// Step 4: Create a Fargate service task that can scale out.
const appService = new awsx.ecs.FargateService("scolasticus_api", {
    cluster,
    taskDefinitionArgs: {
        container: {
            image: "scolasticus_api",
            cpu: 256 /*10% of 1024*/,
            memory: 256 /*MB*/,
            portMappings: [web],
        },
    },
    desiredCount: 1,
});
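A short follow-up sketch, assuming the awsx listener exposes an endpoint output as in the stock awsx examples, to surface the load balancer address after pulumi up:

// Export the load balancer's DNS name so it appears in `pulumi stack output`.
export const url = web.endpoint.hostname;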
dazzling-memory-8548
06/14/2019, 5:36 PM
import * as pulumi from "@pulumi/pulumi";
import * as docker from "@pulumi/docker";

const config = new pulumi.Config("secrets-demo");
const containerImage = config.requireSecret("containerImage");

// This exposes the secret in the stack export.
export const remoteImage = new docker.RemoteImage("myImage", {
    name: containerImage,
});

const container = new docker.Container("myContainer", {
    image: containerImage,
}, {
    dependsOn: remoteImage,
});

// This correctly encrypts the secret.
export const imageName = remoteImage.name;
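For context, the containerImage value above would have been stored encrypted with pulumi config set --secret containerImage <image>. A minimal sketch of the related API, with an illustrative export name: values derived from a secret stay encrypted, and pulumi.secret() marks a plain value as secret explicitly.

// Illustrative: pulumi.secret() wraps a plain value so it is stored encrypted in stack state and outputs.
export const explicitlySecret = pulumi.secret("not-a-real-credential");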
proud-alarm-92546
06/14/2019, 7:34 PM
Is the lifecycle block actually supported? How would it be passed? A lot of the docs reference it, but I also realize that the docs are generated from the tf docs. https://pulumi.io/reference/pkg/nodejs/pulumi/aws/ecs/
"You can utilize the generic Terraform resource lifecycle configuration block with ignore_changes to create an ECS service with an initial count of running instances, then ignore any changes to that count caused externally"
...the example in the tf docs for this section shows the lifecycle block, but the example in the pulumi version lacks it, making me think it's being excluded in the autogen somewhere...
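For reference, Pulumi does not accept Terraform's lifecycle block directly; later SDK releases expose an ignoreChanges resource option that covers the ignore_changes case. A hedged sketch for the ECS desired-count scenario, with placeholder ARNs and names:

import * as aws from "@pulumi/aws";

// Placeholder cluster/task values for illustration only.
const service = new aws.ecs.Service("app", {
    cluster: "arn:aws:ecs:us-east-1:123456789012:cluster/placeholder",
    taskDefinition: "placeholder-task:1",
    desiredCount: 2,
}, {
    // Rough equivalent of Terraform's lifecycle { ignore_changes = ["desired_count"] }.
    ignoreChanges: ["desiredCount"],
});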
busy-umbrella-36067
06/14/2019, 9:14 PM
nice-airport-15607
06/14/2019, 10:40 PM
pulumi up?
sparse-tomato-56640
06/15/2019, 4:53 PM
pulumi.getStack()?
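For reference, pulumi.getStack() returns the name of the currently selected stack (pulumi.getProject() returns the project name); a minimal sketch with an illustrative export:

import * as pulumi from "@pulumi/pulumi";

// e.g. yields "my-app-dev" when the active stack is named "dev".
export const qualifiedName = `my-app-${pulumi.getStack()}`;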
average-dream-51210
06/16/2019, 12:50 AM
average-dream-51210
06/16/2019, 3:07 AM
average-dream-51210
06/16/2019, 3:08 AM
average-dream-51210
06/16/2019, 3:08 AM
Error: Missing required property 'userPoolId' on the UserPoolDomain file
average-dream-51210
06/16/2019, 3:09 AM
average-dream-51210
06/16/2019, 3:11 AM
Tried adding a dependsOn: userPool in the userPoolDomain file, but still no go.
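The error suggests the domain resource needs the pool's id wired in explicitly; dependsOn only orders creation, it does not pass values. A minimal sketch, assuming aws.cognito.UserPool and aws.cognito.UserPoolDomain, with a placeholder domain prefix:

import * as aws from "@pulumi/aws";

const userPool = new aws.cognito.UserPool("userPool", {});

// userPoolId is a required input on the domain resource.
const userPoolDomain = new aws.cognito.UserPoolDomain("userPoolDomain", {
    domain: "my-app-auth", // placeholder prefix
    userPoolId: userPool.id,
});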
average-dream-51210
06/16/2019, 8:49 PM
gifted-teacher-6458
06/17/2019, 2:03 AM
cold-coat-35200
06/17/2019, 8:38 AM
limited-rainbow-51650
06/17/2019, 10:03 AM
thousands-telephone-86052
06/17/2019, 4:37 PM
bored-river-53178
06/17/2019, 6:21 PM
nice-airport-15607
06/17/2019, 6:34 PM
UNKNOWN: failed to compute archive hash?
big-glass-16858
06/17/2019, 7:58 PM
bored-river-53178
06/17/2019, 9:01 PM
tall-librarian-49374
06/18/2019, 6:53 AM
faint-motherboard-95438
06/18/2019, 2:27 PM
After updating a Deployment configuration, I noticed it creates a new ReplicaSet and does not delete the old one (or maybe it should update it, I don’t know). So it actually kills all the old pods in the old ReplicaSet and creates the new one as expected, but now I have stale, useless ReplicaSets.
Is this something I misconfigured somewhere, or is this a bug?
NAME                                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/context-clusters-pvmtest-mqtt-5979fbcdbc          2         2         2       2m13s
replicaset.apps/context-clusters-pvmtest-mqtt-client-68fd4d4445   0         0         0       23m
replicaset.apps/context-clusters-pvmtest-mqtt-client-6b8bdfc4b7   2         2         2       2m13s
replicaset.apps/context-clusters-pvmtest-mqtt-f5bd9548f           0         0         0       25m
replicaset.apps/context-clusters-pvmtest-portal-567948dccd        2         2         2       2m13s
replicaset.apps/context-clusters-pvmtest-portal-6c5968cfcd        0         0         0       26m
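For what it's worth, keeping the old ReplicaSets scaled to 0 is standard Kubernetes Deployment behaviour: they back kubectl rollout undo, and spec.revisionHistoryLimit (default 10) controls how many are retained. A minimal sketch with @pulumi/kubernetes, using placeholder names and image:

import * as k8s from "@pulumi/kubernetes";

const mqtt = new k8s.apps.v1.Deployment("mqtt", {
    spec: {
        revisionHistoryLimit: 2, // keep only the two most recent old ReplicaSets
        replicas: 2,
        selector: { matchLabels: { app: "mqtt" } },
        template: {
            metadata: { labels: { app: "mqtt" } },
            spec: { containers: [{ name: "mqtt", image: "eclipse-mosquitto:1.6" }] }, // placeholder image
        },
    },
});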
bulky-businessperson-73745
06/18/2019, 3:53 PM
elegant-crayon-4967
06/18/2019, 4:20 PM
index.ts for aws & typescript?
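A minimal index.ts sketch for an AWS TypeScript program, assuming @pulumi/aws is installed and aws:region is set on the stack; the bucket is illustrative:

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// Illustrative resource: an S3 bucket tagged with the current stack name.
const bucket = new aws.s3.Bucket("app-bucket", {
    tags: { stack: pulumi.getStack() },
});

// Shown by `pulumi stack output` after `pulumi up`.
export const bucketName = bucket.id;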