icy-london-58403
02/18/2020, 8:27 AM
Everything works when I run the pulumi destroy command, but if I comment out one of my custom resources, Pulumi tries to connect to it with default provider values. I pass the provider a token and a URL, and I can tell by the errors that it is defaulting to localhost on port 80 with no token. So it seems the provider doesn't hold onto these values in the state, and it needs the code to know them during deletions.
Is this normal behaviour, or is there something I can do to enhance my setup?
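A minimal sketch of one thing that can help here: use an explicit (first-class) provider, so the URL and token become resource inputs recorded in the stack state rather than values read from ambient config. The provider package and resource names below are placeholders, not a real SDK.

import * as pulumi from "@pulumi/pulumi";
import * as myprovider from "@pulumi/myprovider"; // placeholder for the actual provider package

const cfg = new pulumi.Config();

// An explicit provider is itself a resource, so its url/token are stored in the stack
// state and can be reused when deleting resources that were removed from the code.
const provider = new myprovider.Provider("my-provider", {
    url: cfg.require("providerUrl"),
    token: cfg.requireSecret("providerToken"),
});

// Attach the explicit provider instead of relying on the ambient/default one.
const example = new myprovider.SomeResource("example", { /* resource args */ }, { provider });

handsome-actor-1155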
02/18/2020, 2:50 PM
Regarding cloud for true cross-cloud infrastructure definition: this seems really exciting, and I'm just curious to what level you're wanting to take it. A single cloud provider for all common cloud components like k8s, storage, compute, etc.? If so, that seems like it would be a game changer.
thankful-gpu-3329
02/18/2020, 6:55 PM
I'm trying to pulumi login gs://name-of-my-relevant-bucket but am running into issues. I created a new service account and have set GOOGLE_CREDENTIALS with the contents of the associated/generated key in my shell, but the CLI is still showing the same warning it was prior to creating the service account.
thankful-gpu-3329
02/18/2020, 6:55 PM
warning: Pulumi will not be able to print a statefile permalink using these credentials. Neither a GoogleAccessID or PrivateKey are available. Try using a GCP Service Account.
Logged into MBBlack.local as joeyfigaro (gs://grpc-dummy-stuff-stack)
thankful-gpu-3329
02/18/2020, 6:59 PM
I assumed GOOGLE_CREDENTIALS had the expected stuff in it. Turns out it didn't. 😛
thankful-gpu-3329
02/18/2020, 6:59 PM
export GOOGLE_CREDENTIALS=$(cat .keystuff/name-of-creds.json)
instead of export GOOGLE_CREDENTIALS=$(cat ./keystuff/name-of-creds.json)
cool-egg-852
02/18/2020, 10:40 PM
up non-interactively?
bright-orange-69401
02/19/2020, 8:29 AM
Resource configuration in Pulumi?
I'm trying to set up SSO between Okta and AWS using Pulumi, and that requires 3 steps with 2 Resources:
1. Create an okta.App, which generates metadata
2. Inject the metadata and create an aws.iam.IdentityProvider, which has an ARN
3. Inject the IdentityProvider ARN back into the okta.App Resource created in step 1
I'm trying to pack this logic into a Component, but I struggle with step 3: I can't seem to update a Resource as part of a Component. I also tried using _import, but there's a URN conflict because the imported resource (step 3) would actually be the same as the one from step 1...
Should I try creating a Dynamic Provider for that? Can't it be done natively?
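For step 3, a Dynamic Provider is one way to model "patch the existing Okta app" as its own resource inside the Component. A minimal sketch, where patchOktaApp is a hypothetical helper wrapping the Okta API call, not a real SDK function (diff/delete omitted for brevity):

import * as pulumi from "@pulumi/pulumi";

interface OktaAppPatchInputs {
    appId: pulumi.Input<string>;
    identityProviderArn: pulumi.Input<string>;
}

// Dynamic provider whose create/update push the IdentityProvider ARN into the existing app.
const oktaAppPatchProvider: pulumi.dynamic.ResourceProvider = {
    async create(inputs: any) {
        // await patchOktaApp(inputs.appId, inputs.identityProviderArn);  // hypothetical Okta API call
        return { id: `${inputs.appId}-idp-patch`, outs: inputs };
    },
    async update(_id: string, _olds: any, news: any) {
        // await patchOktaApp(news.appId, news.identityProviderArn);  // hypothetical Okta API call
        return { outs: news };
    },
};

class OktaAppPatch extends pulumi.dynamic.Resource {
    constructor(name: string, args: OktaAppPatchInputs, opts?: pulumi.CustomResourceOptions) {
        super(oktaAppPatchProvider, name, { ...args }, opts);
    }
}

// Inside the Component:
// new OktaAppPatch("sso-app-patch", { appId: app.id, identityProviderArn: idp.arn }, { parent: this });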
calm-quill-21760
02/19/2020, 7:38 PM
let vpcList = vpcConfig.map(entry => {
return new aws.ec2.Vpc(entry.name, {cidrBlock: entry.cidrBlock, tags: {Name: entry.name}});
});
let vpcNameToId: { [index: string]: any } = {};
for (let vpc of vpcList) {
// create a lookup
const vpcName = vpc.tags.apply(v => v?.Name ?? null);
vpcName.apply(theName => {
console.log("Applying " + theName + "=" + vpc.id.apply(v => `${v}`));
// vpcNameToId[theName] = vpcId;
});
}
Results in:
Applying vpc0=Calling [toString] on an [Output<T>] is not supported.
To get the value of an Output<T> as an Output<string> consider either:
1: o.apply(v => `prefix${v}suffix`)
2: pulumi.interpolate `prefix${v}suffix`
See https://pulumi.io/help/outputs for more details.
This function may throw in a future version of @pulumi/pulumi.
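The error comes from concatenating vpc.id (an Output) into a string. A sketch of one way to restructure the loop, reusing the same vpcConfig array: key the lookup by the plain config name, and resolve both values inside a single apply for logging.

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const vpcNameToId: { [name: string]: pulumi.Output<string> } = {};
for (const entry of vpcConfig) {
    const vpc = new aws.ec2.Vpc(entry.name, {
        cidrBlock: entry.cidrBlock,
        tags: { Name: entry.name },
    });
    // The config name is already a plain string, so it can be the map key directly;
    // the id stays an Output and can be passed to other resources as-is.
    vpcNameToId[entry.name] = vpc.id;

    // For logging, resolve both values together instead of nesting apply calls:
    pulumi.all([vpc.tags, vpc.id]).apply(([tags, id]) =>
        console.log(`Applying ${tags?.Name} = ${id}`));
}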
better-actor-92669
02/20/2020, 9:40 AM
I'm using the pulumi-gcp module to create a CloudSQL DB Instance (https://github.com/pulumi/pulumi-gcp/blob/master/sdk/python/pulumi_gcp/sql/database_instance.py). Since pulumi-postgresql connects to an instance similarly to pgsql, I define PGHOST, PGUSER, and PGPASSWORD during Pulumi runtime. Since the CloudSQL instance is created via the same execution, I define dependencies like:
opts=ResourceOptions(
    depends_on=[cloud_pgsql_main_1],
),
Nevertheless, it doesn't seem to work: it tries to connect to the instance immediately, even though the instance is obviously not ready yet, and pulumi up fails. Do you think it is possible that the two separate modules, pulumi-gcp and pulumi-postgresql, do not share dependencies appropriately at runtime?
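A sketch of the pattern that usually helps here (shown in TypeScript; the Python provider takes the same arguments): configure an explicit postgresql Provider from the CloudSQL instance's outputs rather than PGHOST/PGUSER/PGPASSWORD, so the dependency travels with the outputs. The credentials and instance settings below are placeholders.

import * as gcp from "@pulumi/gcp";
import * as postgresql from "@pulumi/postgresql";

const instance = new gcp.sql.DatabaseInstance("cloud-pgsql-main-1", {
    databaseVersion: "POSTGRES_11",
    region: "europe-west1",
    settings: { tier: "db-f1-micro" },
});

// Because host and password are Outputs of the instance, Pulumi won't try to talk to
// PostgreSQL until the instance exists, unlike environment variables, which the
// provider reads as soon as it is configured.
const pgProvider = new postgresql.Provider("pg", {
    host: instance.publicIpAddress,
    username: "postgres",
    password: "placeholder-password",
});

const appDb = new postgresql.Database("app-db", {}, { provider: pgProvider });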
bitter-dentist-28132
02/20/2020, 5:42 PM
I was using pulumi.runtime.listResourceOutputs(k8s.apps.v1.Deployment.isInstance) to collect information about the currently-deployed Deployments. That project was shelved, and when I came back to it recently I discovered that it no longer works. I see that the signature now asks for a type, so I gave it k8s.apps.v1.Deployment, but it still fails. Does anyone know why this might be?
stocky-student-96739
incalculable-portugal-13011
02/20/2020, 6:31 PM
let appVpc = aws.ec2.getVpc({id: "my-vpc-id"});
const webServerLoadBalancer = new awsx.lb.ApplicationLoadBalancer("web-server-lb-" + userEnv, {
securityGroups: [],
vpc: appVpc,
subnets: ["subnet-1", "subnet-2", "subnet-3"]
});
const webServerLoadBalancerListener = webServerLoadBalancer.createListener("ws-https-" + userEnv, {
port: 443,
protocol: "HTTPS",
certificateArn: "my-cert-arn"
});
const webServerLoadBalancerRedirectToHttpsListener = webServerLoadBalancer.createListener("ws-redirect-to-https", {
port: 80,
protocol: "HTTP",
defaultAction: {
type: "redirect",
redirect: {
protocol: "HTTPS",
port: "443",
statusCode: "HTTP_301"
}
}
});
const webServerCluster = new awsx.ecs.Cluster("web-server-" + userEnv, {
securityGroups: ["sg-1"],
vpc: appVpc
});
const webServerFargateService = new awsx.ecs.FargateService("web-server-" + userEnv, {
cluster: webServerCluster,
networkConfiguration: {
subnets: ["subnet-1", "subnet-2", "subnet-3"]
},
taskDefinitionArgs: {
containers: {
webServer: {
image: "my-org/web-server:" + userEnv,
portMappings: [
webServerLoadBalancerListener
],
healthCheck: {...healthCheckArgs}
}
}
}
});
The error I'm receiving is: error: aws:ecs/service:Service resource 'web-server-dev' has a problem: "network_configuration.0.subnets": required field is not set, which doesn't make sense to me. Per the docs, I'm setting the networkConfiguration property of the service, and I've tried both wrapping that property in an array and passing it as an object. No dice either way. Any thoughts?
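One thing worth checking, as a sketch with placeholder ids: aws.ec2.getVpc returns plain lookup data rather than an awsx.ec2.Vpc, so the awsx cluster and service can't derive subnets from it. awsx.ec2.Vpc.fromExistingIds wraps the existing VPC and subnet ids in a form they can resolve subnets from (userEnv is reused from the snippet above).

import * as awsx from "@pulumi/awsx";

// Build an awsx-aware view of the existing network (ids are placeholders).
const appVpc = awsx.ec2.Vpc.fromExistingIds("app-vpc", {
    vpcId: "my-vpc-id",
    publicSubnetIds: ["subnet-1", "subnet-2", "subnet-3"],
});

// Pass that Vpc to the load balancer, cluster, and Fargate service instead of the raw
// getVpc result, and let the service pick up the subnets from it.
const webServerCluster = new awsx.ecs.Cluster("web-server-" + userEnv, {
    vpc: appVpc,
    securityGroups: ["sg-1"],
});

able-zoo-58396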
02/20/2020, 8:18 PM
// create cluster
const cluster = new awsx.ecs.Cluster(`cluster`, {
name: `demo-cluster`
});
// create image
const img = awsx.ecs.Image.fromDockerBuild(`image`, {
context: './app',
});
// create task
const task = new awsx.ecs.FargateTaskDefinition(`task`, {
container: {
image: img,
memoryReservation: 2048
}
});
// create the cloudwatch event and lambda function using the "onSchedule" helper function
aws.cloudwatch.onSchedule(`task-schedule`, 'rate(5 minutes)',
async (req) => {
// run the task in our cluster
const result = await task.run({cluster});
return { statusCode: 200, body: "OK" };
}
);
Everything seems to build and deploy to AWS correctly. I see the Lambda function, CloudWatch event, logs, task definition, etc. It's all there and linked up. And when I look at my CloudWatch logs for that Lambda function, I see that it's attempting to run every 5 minutes.
However, I'm getting this error when it runs:
{
"errorType": "Runtime.ImportModuleError",
"errorMessage": "Error: Cannot find module '@pulumi/awsx/ecs/index.js'\nRequire stack:\n- /var/task/__index.js\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js",
"stack": [
"Runtime.ImportModuleError: Error: Cannot find module '@pulumi/awsx/ecs/index.js'",
"Require stack:",
"- /var/task/__index.js",
"- /var/runtime/UserFunction.js",
"- /var/runtime/index.js",
" at _loadUserApp (/var/runtime/UserFunction.js:100:13)",
" at Object.module.exports.load (/var/runtime/UserFunction.js:140:17)",
" at Object.<anonymous> (/var/runtime/index.js:43:30)",
" at Module._compile (internal/modules/cjs/loader.js:955:30)",
" at Object.Module._extensions..js (internal/modules/cjs/loader.js:991:10)",
" at Module.load (internal/modules/cjs/loader.js:811:32)",
" at Function.Module._load (internal/modules/cjs/loader.js:723:14)",
" at Function.Module.runMain (internal/modules/cjs/loader.js:1043:10)",
" at internal/main/run_main_module.js:17:11"
]
}
During the Pulumi "magic" of packaging and building the Lambda handler, it looks like it's telling Lambda to look for some Pulumi modules that aren't installed. Any ideas on why this is happening?
Thank you, thank you!!
able-zoo-58396
02/20/2020, 8:29 PM
Did some more digging: the problem is referencing task in the Lambda callback that I'm creating with onSchedule.
So, this DOES work:
const task = new awsx.ecs.FargateTaskDefinition(`task`, {
container: {
image: img,
memoryReservation: 2048
}
});
aws.cloudwatch.onSchedule(`task-schedule`, 'rate(5 minutes)',
async (req) => {
console.log('Is this thing on?')
return { statusCode: 200, body: "OK" };
}
);
But as soon as I reference the task in the callback, I get errors about missing modules. Even if I'm not trying to run the task:
aws.cloudwatch.onSchedule(`task-schedule`, 'rate(5 minutes)',
async (req) => {
console.log(task); // just try to log the object -- don't even try to run it
return { statusCode: 200, body: "OK" };
}
);
This is the error that Lambda throws when it tries to run the callback:
{
"errorType": "Runtime.ImportModuleError",
"errorMessage": "Error: Cannot find module '@pulumi/awsx/ecs/index.js'\nRequire stack:\n- /var/task/__index.js\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js",
...
}
So, it seems like it's an issue with how Pulumi is packaging the Lambda function, right?
I'll add that the container image doesn't contain any references to Pulumi or its dependencies, so the error probably isn't related to the image. I'm using the same image on other Fargate Services created by Pulumi, and it's running fine.
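That matches how closure serialization behaves: capturing the awsx FargateTaskDefinition object drags @pulumi/awsx into the serialized handler, and that package isn't available in the Lambda runtime. A sketch of one workaround: capture only plain ARN outputs and call ECS with the AWS SDK inside the handler. The subnet ids and assignPublicIp value are placeholders; cluster and task are the awsx resources defined above.

import * as aws from "@pulumi/aws";

// Capture only the ARNs (plain Output<string> values), not the awsx resource objects.
const clusterArn = cluster.cluster.arn;
const taskDefinitionArn = task.taskDefinition.arn;

aws.cloudwatch.onSchedule("task-schedule", "rate(5 minutes)", async () => {
    // aws-sdk v2 is available in the Lambda Node.js runtime, so requiring it here
    // doesn't pull anything extra into the deployment package.
    const AWS = require("aws-sdk");
    const ecs = new AWS.ECS();
    await ecs.runTask({
        cluster: clusterArn.get(),
        taskDefinition: taskDefinitionArn.get(),
        launchType: "FARGATE",
        networkConfiguration: {
            awsvpcConfiguration: {
                subnets: ["subnet-1"],        // placeholder
                assignPublicIp: "ENABLED",    // placeholder
            },
        },
    }).promise();
});

incalculable-portugal-13011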
02/20/2020, 10:46 PM
I see that I can give the image property something that implements ContainerImageProvider, but there's no way in awsx.ecr to actually pull an existing image, only to create one via buildAndPushImage. If I use aws.ecr.getImage, it returns a GetImageResult, which is incompatible as it doesn't implement ContainerImageProvider.
rhythmic-camera-25993
02/21/2020, 7:21 PM
The image string is of the form [REPO]/image[:TAG], where the repo and the tag are optional. If no repo is specified, the public Docker Hub is assumed. Here you need to provide the full ECR repo path.
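A sketch of what that can look like with an existing ECR repository, looking it up via aws.ecr.getRepository and interpolating the tag into a plain image string (the repository name is a placeholder; userEnv is reused from the earlier snippet):

import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

// Look up the existing repository rather than building/pushing a new image.
const repo = aws.ecr.getRepository({ name: "web-server" });

// repositoryUrl already includes the account/region ECR prefix; just append the tag.
const image = pulumi.output(repo).apply(r => `${r.repositoryUrl}:${userEnv}`);

// ...and in the task definition args:
// containers: { webServer: { image, portMappings: [webServerLoadBalancerListener], ... } }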