damp-pillow-67781 (12/06/2018, 10:32 PM):
damp-pillow-67781 (12/07/2018, 12:12 AM):
faint-motherboard-95438 (12/07/2018, 1:29 AM):
…Issuer, ClusterIssuer, Certificate, etc.) that Pulumi doesn't know about (or at least I didn't find a library that does), and I don't know what to do from here. Should I create `CustomResource`s for those myself? If so, I'm missing the knowledge needed to do it properly (I tried and got something kind of working, but some things are weird, e.g. it doesn't detect changes in the specs of some components I've made).
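For reference, cert-manager's CRD-backed kinds can usually be modeled with the generic `CustomResource` from `@pulumi/kubernetes`. A minimal sketch follows; the `apiVersion`, spec shape, and email are assumptions that depend on the installed cert-manager version, not something confirmed in this thread.

```typescript
import * as k8s from "@pulumi/kubernetes";

// Minimal sketch: a cert-manager Issuer modeled as a generic CustomResource.
// The apiVersion and spec shape are assumptions that depend on the installed
// cert-manager version; check the CRDs in the cluster for the exact schema.
const issuer = new k8s.apiextensions.CustomResource("letsencrypt-staging", {
    apiVersion: "certmanager.k8s.io/v1alpha1",
    kind: "Issuer",
    metadata: { namespace: "default" },
    spec: {
        acme: {
            server: "https://acme-staging-v02.api.letsencrypt.org/directory",
            email: "admin@example.com", // placeholder contact address
            privateKeySecretRef: { name: "letsencrypt-staging-key" },
            http01: {},
        },
    },
});
```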
careful-van-85195 (12/07/2018, 4:54 PM):
careful-van-85195 (12/07/2018, 4:54 PM):
careful-van-85195 (12/07/2018, 6:51 PM):
acoustic-magazine-13505 (12/10/2018, 9:53 AM):
…`aws.apigateway.x.API` with a Cognito user pool authorizer? The closest I've found was in this Slack from @white-balloon-205 3 months ago.
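For reference, one way to wire up a Cognito user pool authorizer at the raw `aws.apigateway` level (rather than through the higher-level `x.API`) is sketched below; the exact property names are assumptions worth checking against the installed `@pulumi/aws` version.

```typescript
import * as aws from "@pulumi/aws";

// Sketch of a Cognito user pool authorizer wired up with the raw
// aws.apigateway resources; property names should be double-checked against
// the installed @pulumi/aws version.
const userPool = new aws.cognito.UserPool("users", {});

const restApi = new aws.apigateway.RestApi("api", {});

const authorizer = new aws.apigateway.Authorizer("cognito-auth", {
    restApi: restApi.id,
    type: "COGNITO_USER_POOLS",
    providerArns: [userPool.arn],
    identitySource: "method.request.header.Authorization",
});
// Methods that require a Cognito token then reference authorizer.id and
// set authorization: "COGNITO_USER_POOLS".
```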
early-musician-41645 (12/10/2018, 7:26 PM):
…`@pulumi/aws`. I've created a Role and a Policy, and used both RolePolicyAttachment and also PolicyAttachment, but have no luck in attaching a policy to the role. What's the way to attach policies to roles?
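For reference, a minimal sketch of attaching a customer-managed policy to a role with `aws.iam.RolePolicyAttachment`; the role and policy documents below are placeholder examples.

```typescript
import * as aws from "@pulumi/aws";

// Minimal sketch: attach a customer-managed Policy to a Role with
// aws.iam.RolePolicyAttachment. The policy documents are placeholder examples.
const role = new aws.iam.Role("app-role", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Principal: { Service: "ec2.amazonaws.com" },
            Action: "sts:AssumeRole",
        }],
    }),
});

const policy = new aws.iam.Policy("app-policy", {
    policy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Action: ["s3:GetObject"],
            Resource: "*",
        }],
    }),
});

// The attachment just links the two by role name and policy ARN.
new aws.iam.RolePolicyAttachment("app-role-policy", {
    role: role.name,
    policyArn: policy.arn,
});
```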
early-musician-41645 (12/10/2018, 9:50 PM):
early-musician-41645 (12/10/2018, 10:08 PM):
I don't have a StackReference class in my latest pull of `@pulumi/pulumi`. Is it in a different package?
```
index.ts(19,27): error TS2339: Property 'StackReference' does not exist on type 'typeof import("/home/tsi/eshamay/git/mustang/sdp-mustang-terraform/pulumi/eks-cluster/node_modules/@pulumi/pulumi/index")'.
```
bitter-oil-46081 (12/10/2018, 11:02 PM):
> I don't have a StackReference class in my latest pull of `@pulumi/pulumi`. Is it in a different package?
No, it should be in that package. What version of the `@pulumi/pulumi` package ended up getting resolved?
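For reference, once a version of `@pulumi/pulumi` that includes `StackReference` resolves, usage looks roughly like the sketch below; the `<org>/<project>/<stack>` name is a placeholder.

```typescript
import * as pulumi from "@pulumi/pulumi";

// Minimal sketch of consuming another stack's outputs via StackReference.
// "myorg/infrastructure/dev" is a placeholder <org>/<project>/<stack> name.
const infra = new pulumi.StackReference("myorg/infrastructure/dev");

// getOutput returns an Output wrapping the named stack output.
export const referencedVpcId = infra.getOutput("vpcId");
```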
full-dress-10026 (12/10/2018, 11:11 PM):
…`infra.x.ecs.Cluster`?
full-dress-10026 (12/11/2018, 12:02 AM):
…`infra.x.ecs.EC2Service` to be assigned a public IP?
full-dress-10026 (12/11/2018, 12:07 AM):
…`EC2Service` is to not assign a public IP. Because tasks are not assigned a public IP, they cannot access the internet.
full-dress-10026 (12/11/2018, 12:09 AM):
…`EC2Service`, but it did not work:
```
networkConfiguration: {
    assignPublicIp: true,
    subnets: cluster.network.subnetIds,
}
```
full-dress-10026 (12/11/2018, 12:26 AM):
full-dress-10026 (12/11/2018, 1:36 AM):
full-dress-10026 (12/11/2018, 1:43 AM):
…`associatePublicIpAddress: true` in `launchConfigurationArgs` when calling `createAndAddAutoScalingGroup`. Still no change to the "Auto-assign public IP" field.
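For reference, with the EC2 launch type the public IP has to come from the container instances themselves rather than from the ECS service, so the relevant knobs live on the subnet and the launch configuration. A minimal sketch, with placeholder names, CIDRs, and AMI id:

```typescript
import * as aws from "@pulumi/aws";

// Sketch: with the EC2 launch type, public IPs belong to the container
// instances, not to the ECS service. Names, CIDRs and the AMI id below are
// placeholders.
const vpc = new aws.ec2.Vpc("ecs-vpc", { cidrBlock: "10.0.0.0/16" });

// A public subnet: instances launched here get a public IP by default
// (it still needs a route to an internet gateway to reach the internet).
const publicSubnet = new aws.ec2.Subnet("ecs-public", {
    vpcId: vpc.id,
    cidrBlock: "10.0.1.0/24",
    mapPublicIpOnLaunch: true,
});

// The launch configuration for the cluster's instances can also force it.
const launchConfig = new aws.ec2.LaunchConfiguration("ecs-lc", {
    imageId: "ami-0123456789abcdef0", // placeholder ECS-optimized AMI
    instanceType: "t2.micro",
    associatePublicIpAddress: true,
});
```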
careful-van-85195 (12/11/2018, 3:04 PM):
faint-motherboard-95438 (12/11/2018, 4:57 PM):
```
error: Plan apply failed: ingresses.extensions is forbidden: User "REDACTED" cannot create ingresses.extensions in the namespace "default": Required "container.ingresses.create" permission.
```
(same problem for a `pulumi refresh`, related to the `container.ingresses.get` permission)
given that:
- the user shown here is the right one, with more permissions than needed assigned to it (`container.admin` and `editor` amongst them, which are more than enough)
- `gcloud auth` and `gcloud config` show me that the right account is selected
- the key exported by `GOOGLE_APPLICATION_CREDENTIALS` is also the right one for this user (I even compared the id with the one in the web console to be sure)
- the pulumi stack selected is the right one too
I definitely have something wrong here, but I can't spot it, and I didn't change anything related to the service account or auth.
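For reference, one way to rule out ambient-credential confusion is to make the Kubernetes provider's kubeconfig explicit and pass that provider to the resources, so there is no doubt which credentials Pulumi is using. A minimal sketch, with a placeholder kubeconfig path:

```typescript
import * as fs from "fs";
import * as k8s from "@pulumi/kubernetes";

// Sketch: make the Kubernetes provider's credentials explicit instead of
// relying on the ambient kubeconfig/context. The path is a placeholder.
const kubeconfig = fs.readFileSync("/path/to/kubeconfig.yaml", "utf8");

const gkeProvider = new k8s.Provider("gke", { kubeconfig: kubeconfig });

// Resources then opt into this provider explicitly.
const ingress = new k8s.extensions.v1beta1.Ingress("web", {
    spec: {
        backend: { serviceName: "web", servicePort: 80 },
    },
}, { provider: gkeProvider });
```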
brave-helicopter-9976 (12/11/2018, 7:26 PM):
…`pulumi new aws-javascript` and successfully previewed the changes. My original motivation for this was to experiment with calling the `getAmi` function to view some example result data, as well as its error behavior. My first attempt:
```
$ node
> const aws = require("@pulumi/aws");
{ Error: Missing required configuration variable 'aws:region'
    please set a value using the command `pulumi config set aws:region <value>`
    at Object.requireWithDefault (/Users/jtilles/Developer/Scratch/pulumi-experiment/node_modules/@pulumi/utilities.ts:45:16)
    at exports.region.utilities.requireWithDefault (/Users/jtilles/Developer/Scratch/pulumi-experiment/node_modules/@pulumi/config/vars.ts:44:78)
    at Config.require (/Users/jtilles/Developer/Scratch/pulumi-experiment/node_modules/@pulumi/pulumi/config.js:141:19) __pulumiRunError: true, key: 'aws:region' }
```
After some digging into Pulumi's source code, I eventually figured out that I needed to export `PULUMI_CONFIG='{"aws:config:region":"us-east-1"}'`. I progressed to a new error!
```
$ PULUMI_CONFIG='{"aws:config:region":"us-east-1"}' node
> const aws = require("@pulumi/aws");
undefined
> aws.getAmi({owners: ["amazon"], filters: [{name: "name", values: ["amzn2-ami-hvm-2.*-x86_64.gp2"]}], mostRecent: true})
Promise {
  <pending>,
  domain:
   Domain {
     domain: null,
     _events:
      [Object: null prototype] {
        removeListener: [Function: updateExceptionCapture],
        newListener: [Function: updateExceptionCapture],
        error: [Function: debugDomainError] },
     _eventsCount: 3,
     _maxListeners: undefined,
     members: [] } }
> (node:15240) UnhandledPromiseRejectionWarning: Error: Pulumi program not connected to the engine -- are you running with the `pulumi` CLI?
    at Object.getMonitor (/Users/jtilles/Developer/Scratch/pulumi-experiment/node_modules/@pulumi/pulumi/runtime/settings.js:87:19)
    at Object.<anonymous> (/Users/jtilles/Developer/Scratch/pulumi-experiment/node_modules/@pulumi/pulumi/runtime/invoke.js:51:40)
    at Generator.next (<anonymous>)
    at fulfilled (/Users/jtilles/Developer/Scratch/pulumi-experiment/node_modules/@pulumi/pulumi/runtime/invoke.js:17:58)
    at process.internalTickCallback (internal/process/next_tick.js:77:7)
(node:15240) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:15240) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
```
Since the second error didn't look particularly debuggable (from the looks of it, I'm guessing I would need to launch a gRPC server in order to use the SDK from the REPL), I decided it was time to ask for help.
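For reference, invokes such as `getAmi` are routed through the Pulumi engine, which is why a bare `node` REPL reports not being connected to it. A sketch of inspecting the same call from inside a program run with `pulumi preview`/`pulumi up`, after `pulumi config set aws:region us-east-1` (the supported route the first error message points at):

```typescript
import * as aws from "@pulumi/aws";

// Sketch: run via `pulumi preview` / `pulumi up` after
// `pulumi config set aws:region us-east-1`, rather than from a bare node REPL,
// since invokes like getAmi are routed through the Pulumi engine.
const ami = aws.getAmi({
    owners: ["amazon"],
    filters: [{ name: "name", values: ["amzn2-ami-hvm-2.*-x86_64.gp2"] }],
    mostRecent: true,
});

// Exporting the promise surfaces the resolved AMI id as a stack output.
export const amiId = ami.then(result => result.id);
```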
careful-van-85195 (12/11/2018, 7:41 PM):
full-dress-10026 (12/11/2018, 7:55 PM):
full-dress-10026 (12/11/2018, 9:41 PM):
full-dress-10026 (12/11/2018, 9:59 PM):
…`name` on an `aws.elasticloadbalancingv2.TargetGroup`, then update the `port` to a different value, I get this message:
```
Plan apply failed: Error creating LB Target Group: DuplicateTargetGroupName: A target group with the same name 'fibonacci-dev' exists, but with different settings
    status code: 400, request id: db1ef58c-fd8f-11e8-be77-fbf35d396053
```
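For reference, changing `port` forces the target group to be replaced, and with a fixed `name` the replacement's create step collides with the existing group, hence DuplicateTargetGroupName. A sketch using `namePrefix` (or simply omitting `name`) instead, with a placeholder VPC:

```typescript
import * as aws from "@pulumi/aws";

// Sketch: with a fixed `name`, a replacement (e.g. after changing `port`)
// creates the new target group before deleting the old one and collides on
// the name. Auto-naming or namePrefix avoids that. The VPC is a placeholder.
const vpc = new aws.ec2.Vpc("app-vpc", { cidrBlock: "10.0.0.0/16" });

const targetGroup = new aws.elasticloadbalancingv2.TargetGroup("fibonacci", {
    namePrefix: "fib-", // instead of name: "fibonacci-dev"
    port: 8080,
    protocol: "HTTP",
    vpcId: vpc.id,
});
```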
full-dress-10026 (12/11/2018, 10:02 PM):
…`aws.autoscaling.Group`.
full-dress-10026 (12/11/2018, 11:07 PM):
…`userData` of `aws.ec2.LaunchTemplate` do not trigger an update.
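One thing worth double-checking here (an assumption, not a confirmed cause): `aws.ec2.LaunchTemplate` expects `userData` to be base64-encoded, unlike a launch configuration, so the string Pulumi diffs is the encoded value. A minimal sketch with a placeholder AMI id:

```typescript
import * as aws from "@pulumi/aws";

// Sketch: aws.ec2.LaunchTemplate expects userData to be base64-encoded
// (unlike aws.ec2.LaunchConfiguration), so the string Pulumi diffs is the
// encoded value. The AMI id is a placeholder.
const script = `#!/bin/bash
echo "hello from user data"
`;

const template = new aws.ec2.LaunchTemplate("app-lt", {
    imageId: "ami-0123456789abcdef0",
    instanceType: "t3.micro",
    userData: Buffer.from(script).toString("base64"),
});
```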
full-dress-10026 (12/11/2018, 11:59 PM):
```
let lbSecurityGroup = new aws.ec2.SecurityGroup("fib-lb-sg", {
    namePrefix: "fib-lb-sg",
    ingress: [{
        protocol: "tcp",
        fromPort: 80,
        toPort: 80,
        cidrBlocks: ["0.0.0.0/0"]
    }],
    egress: [{
        protocol: "-1",
        fromPort: 0,
        toPort: 0,
        cidrBlocks: ["0.0.0.0/0"]
    }],
    vpcId: network.vpcId
});
```
After running `pulumi up`, I get this message:
```
updating urn:pulumi:fibonacci-dev::fibonacci::aws:ec2/securityGroup:SecurityGroup::fib-lb-sg: from_port (80) and to_port (80) must both be 0 to use the 'ALL' "-1" protocol!
```
Not sure what it is talking about.
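A hedged sketch, not a confirmed fix for the error above: defining the rules as standalone `aws.ec2.SecurityGroupRule` resources instead of inline `ingress`/`egress` blocks keeps each rule's diff independent, which can make a complaint like this easier to pin down. The VPC below is a placeholder.

```typescript
import * as aws from "@pulumi/aws";

// Hedged sketch: standalone rule resources instead of inline rule blocks.
// vpc is a placeholder for the network used in the original snippet.
const vpc = new aws.ec2.Vpc("fib-vpc", { cidrBlock: "10.0.0.0/16" });

const lbSecurityGroup = new aws.ec2.SecurityGroup("fib-lb-sg", {
    namePrefix: "fib-lb-sg",
    vpcId: vpc.id,
});

new aws.ec2.SecurityGroupRule("fib-lb-http-in", {
    type: "ingress",
    protocol: "tcp",
    fromPort: 80,
    toPort: 80,
    cidrBlocks: ["0.0.0.0/0"],
    securityGroupId: lbSecurityGroup.id,
});

new aws.ec2.SecurityGroupRule("fib-lb-all-out", {
    type: "egress",
    protocol: "-1", // "all protocols" requires fromPort/toPort of 0
    fromPort: 0,
    toPort: 0,
    cidrBlocks: ["0.0.0.0/0"],
    securityGroupId: lbSecurityGroup.id,
});
```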
high-morning-17948 (12/12/2018, 5:42 AM):
faint-motherboard-95438 (12/12/2018, 2:21 PM):
```
error: Plan apply failed: ingresses.extensions "[ingress name]" already exists
```
Would you guys have any idea why Pulumi does not properly manage an update of an Ingress on GCP? It seems it wants to recreate it and fails, even though it knows it's an update:
```
 +-  └─ kubernetes:extensions:Ingress  [ingress name]  **replacing failed**  [diff: ~provider]; 1 error
```
Did I misconfigure something? Everything else in my cluster is working and updating as expected.
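For reference (an assumption based on the `**replacing failed**` output, not a confirmed diagnosis): when a provider diff forces a replacement and the Ingress has a fixed `metadata.name`, the create-before-delete step collides with the live object and reports "already exists". Letting Pulumi auto-name the Ingress, or opting into delete-before-replace, avoids the collision. A minimal sketch:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Sketch: a fixed metadata.name makes the create-before-delete replacement
// collide with the live object ("already exists"). Either drop the explicit
// name so Pulumi auto-names the Ingress, or delete the old one first.
const ingress = new k8s.extensions.v1beta1.Ingress("web-ingress", {
    metadata: { name: "web-ingress" }, // fixed name: replacements will collide
    spec: {
        backend: { serviceName: "web", servicePort: 80 },
    },
}, { deleteBeforeReplace: true }); // replace by deleting the old object first
```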