full-dress-10026
11/28/2018, 12:12 AM
The stack name must be unique within your account. As a best practice, prefix the stack name with a project name.
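For instance, a dev stack for a hypothetical project named billing could be created as:

pulumi stack init billing-dev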
busy-umbrella-36067
11/28/2018, 4:42 AM
orange-tailor-85423
11/28/2018, 4:36 PM
full-dress-10026
11/28/2018, 6:04 PM
I have pA and pB, which need to be deployed. Before pB can be deployed, I have some custom CLI stuff that needs to be run that depends on pA outputs. I understand that right now it's not possible to hook into the lifecycle, and that it will likely be added as a feature in the future. As a workaround for now, I was told I could run the CLI stuff in an apply that contains pA outputs. How would I ensure that pB does not start provisioning until the CLI stuff running in the apply completes without exception?
full-dress-10026
11/28/2018, 6:14 PM
full-dress-10026
11/28/2018, 6:40 PM
pulumi.StackReference?
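For context, a minimal sketch of that workaround, assuming pA and pB are separate stacks and pB reads pA's outputs through a pulumi.StackReference; the stack name, the endpoint output, and the CLI command below are placeholders, not taken from this thread:

import * as pulumi from "@pulumi/pulumi";
import { execSync } from "child_process";

// In pB's program: read pA's outputs via a stack reference.
const pa = new pulumi.StackReference("myorg/pA/dev");     // placeholder stack name
const paEndpoint = pa.getOutput("endpoint");              // placeholder output name

// Run the custom CLI step inside an apply. The callback runs once the value is
// known, and if it throws, the update fails before anything that depends on
// cliDone is provisioned.
const cliDone = paEndpoint.apply(endpoint => {
    execSync(`./my-custom-cli --endpoint ${endpoint}`);   // placeholder command
    return endpoint;
});

// Thread cliDone into pB's resources as an input so they are not created until
// the CLI step above has completed without throwing.

Note that, depending on the SDK version, apply callbacks can also run during preview when the value is already known, which is part of why this is only a workaround until a proper lifecycle hook exists.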
orange-tailor-85423
11/28/2018, 6:47 PM
blue-dentist-627
11/29/2018, 2:40 AM
adamant-restaurant-73893
11/29/2018, 7:33 PM
big-caravan-87850
11/29/2018, 8:00 PM
full-dress-10026
11/29/2018, 8:18 PM
early-musician-41645
11/29/2018, 8:33 PM
Resources:
    + 4 to create
    6 unchanged
Do you want to perform this update? yes
Updating (s3-object-indexer-dev):
error: [409] Conflict: Another update is currently in progress.
How do I get out of the in-progress update?
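For reference, assuming the pending update really is stuck (and not still running somewhere else), recent versions of the Pulumi CLI can cancel it and release the lock explicitly:

pulumi cancel s3-object-indexer-dev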
early-musician-41645
11/29/2018, 10:12 PM
I set up an onPut action on an S3 bucket to trigger a lambda using this code:
// A storage bucket
const bucket = new cloud.Bucket("eshamay-test-bucket");
const bucketName = bucket.bucket.id;
// Trigger a Lambda function when something is added
bucket.onPut("onNewObject", (bucketArgs: any) => {
    console.log(`*** New Item in Bucket`);
    console.log(bucketArgs);
});
However, I plan to trigger the lambda from multiple buckets and will create/update some DynamoDB items to index the contents of the various bucket objects across multiple regions.
Currently, the lambda above outputs something like this (taken directly from CloudWatch logs):
2018-11-29T12:52:08.159-08:00[onNewObject] { key: 'some/foo/path/myfile.ts', size: 384, eventTime: '2018-11-29T20:52:07.166Z' }
However, my indexing will need to create a couple of extra attributes to track the region, the bucket name, and perhaps some other info from the put, such as the Principal ID, IP address, etc.
I know that S3 events emit something like this:
https://docs.aws.amazon.com/lambda/latest/dg/eventsources.html#eventsources-s3-put
Is there a way to get access to that original event object?
Also, is there a way to create a single lambda to do the DynamoDB update, and then create several buckets across regions that trigger that single lambda onPut?
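As an aside, a minimal sketch of one way to get at the full S3 notification record, assuming dropping down from @pulumi/cloud to @pulumi/aws is acceptable; the bucket and handler names are made up, and S3 generally only notifies a Lambda in the bucket's own region, so a multi-region layout may still need one function (or at least one wiring) per region:

import * as aws from "@pulumi/aws";

// One shared callback function that receives the raw S3 notification event.
const indexer = new aws.lambda.CallbackFunction("object-indexer", {
    callback: async (event: aws.s3.BucketEvent) => {
        for (const record of event.Records || []) {
            // The raw record carries the fields needed for indexing.
            console.log({
                region: record.awsRegion,
                bucket: record.s3.bucket.name,
                key: record.s3.object.key,
                size: record.s3.object.size,
                principalId: record.userIdentity.principalId,
                sourceIp: record.requestParameters.sourceIPAddress,
            });
            // ... create/update the DynamoDB item here ...
        }
    },
});

// Attach the same function to more than one bucket.
const bucketA = new aws.s3.Bucket("artifacts-a");
const bucketB = new aws.s3.Bucket("artifacts-b");
bucketA.onObjectCreated("indexA", indexer);
bucketB.onObjectCreated("indexB", indexer);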
microscopic-florist-22719
early-musician-41645
11/30/2018, 12:24 AM
const bucket = aws.s3.Bucket.get('prod-us-west-1',
    'sdp-tsm-prod-uw1-artifacts',
    { region: 'us-west-1' });
But the configured provider is in us-west-2, hence:
Diagnostics:
aws:s3:Bucket (prod-us-west-1):
error: Preview failed: refreshing urn:pulumi:s3-object-indexer-dev::s3-object-indexer::aws:s3/bucket:Bucket::prod-us-west-1: error reading S3 Bucket (sdp-tsm-prod-uw1-artifacts): BucketRegionError: incorrect region, the bucket is not in 'us-west-2' region
status code: 301, request id: , host id:
I tried to create a new regional provider but couldn't figure out how to plug it in:
let uw1Provider = new aws.Provider("us-west-1-aws-provider", { region: 'us-west-1' });
stocky-spoon-28903
11/30/2018, 12:53 AM
Pass { provider: uw1Provider } to each resource that needs to use it.
microscopic-florist-22719
get
stocky-spoon-28903
11/30/2018, 12:53 AM
{ region: "us-west-1" } in your example
stocky-spoon-28903
11/30/2018, 12:53 AM
microscopic-florist-22719
new aws.s3.Bucket('prod-us-west-1', {}, { id: 'sdp-tsm-prod-uw1-artifacts', provider: uw1Provider })
stocky-spoon-28903
11/30/2018, 12:55 AM
There is also the getBucket data source invoke, which has the option to pass in a provider.
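Putting those suggestions together, a rough sketch of both forms, reusing the bucket name from earlier in the thread; in both cases the region comes from the explicit provider rather than a region property:

import * as aws from "@pulumi/aws";

// Explicit provider for the other region.
const uw1Provider = new aws.Provider("us-west-1-aws-provider", { region: "us-west-1" });

// Option 1: read the existing bucket as a resource; the provider goes in the
// resource options (the third argument is the resource state, left undefined here).
const bucket = aws.s3.Bucket.get(
    "prod-us-west-1",
    "sdp-tsm-prod-uw1-artifacts",
    undefined,
    { provider: uw1Provider });

// Option 2: the getBucket data source invoke, which also accepts a provider.
const bucketInfo = aws.s3.getBucket(
    { bucket: "sdp-tsm-prod-uw1-artifacts" },
    { provider: uw1Provider });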
stocky-spoon-28903
11/30/2018, 12:55 AM
microscopic-florist-22719
full-dress-10026
11/30/2018, 1:12 AM
full-dress-10026
11/30/2018, 3:07 AM
The DockerBuild.cacheFrom docs say this:
/**
* An optional CacheFrom object with information about the build stages to use for the Docker
* build cache. This parameter maps to the --cache-from argument to the Docker CLI. If this
* parameter is `true`, only the final image will be pulled and passed to --cache-from; if it is
* a CacheFrom object, the stages named therein will also be pulled and passed to --cache-from.
*/
What does this mean: "If this parameter is true, only the final image will be pulled and passed to --cache-from"?
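As a rough illustration of the difference, assuming a multi-stage Dockerfile with a stage named builder and a placeholder registry; the intent of the quoted docs seems to be that true only reuses layers of the previously pushed final image, while naming stages also pulls those intermediate images so their layers can seed the cache:

import * as docker from "@pulumi/docker";

const image = new docker.Image("app", {
    imageName: "registry.example.com/app",    // placeholder registry/repo
    build: {
        context: "./app",
        // `cacheFrom: true` would pull only the previously pushed final image
        // and pass it to `docker build --cache-from`. Naming stages instead
        // also pulls those intermediate stage images, so a multi-stage build
        // can reuse their layers too.
        cacheFrom: { stages: ["builder"] },   // "builder" is a made-up stage name
    },
});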
early-musician-41645
11/30/2018, 4:53 AM
I did a get on an S3 bucket via the suggested code, and some resources from my project were created and some failed.
To get to the previous state I thought I'd do a pulumi destroy, but I see that the bucket I got is included in the delete:
Previewing destroy (s3-object-indexer-dev):
     Type                                   Name                                               Plan
 -   pulumi:pulumi:Stack                    s3-object-indexer-s3-object-indexer-dev            delete
 -   ├─ aws:s3:BucketEventSubscription      sdp-tsm-s3-artifact-indexer-onNewObject            delete
 -   │  ├─ aws:lambda:Permission            sdp-tsm-s3-artifact-indexer-onNewObject            delete
 -   │  ├─ aws:iam:RolePolicyAttachment     sdp-tsm-s3-artifact-indexer-onNewObject-32be53a2   delete
 -   │  ├─ aws:lambda:Function              sdp-tsm-s3-artifact-indexer-onNewObject            delete
 -   │  └─ aws:iam:Role                     sdp-tsm-s3-artifact-indexer-onNewObject            delete
 -   ├─ aws:s3:Bucket                       prod-us-west-1                                     delete
 -   └─ pulumi:providers:aws                us-west-1-aws-provider                             delete
Is this expected behavior?
early-musician-41645
11/30/2018, 4:53 AM
Is the .get added in as a tracked resource?
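One way to see how the got bucket is tracked, assuming the checkpoint looks the way read-in resources usually do, is to export the stack state; resources that came from a .get are marked "external": true there, and Pulumi removes them from its state on destroy rather than deleting the underlying bucket:

pulumi stack export | grep -B 2 '"external": true'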
early-musician-41645
11/30/2018, 4:54 AM
tall-monitor-77779
11/30/2018, 2:14 PM
faint-motherboard-95438
11/30/2018, 3:52 PM
error: Plan apply failed: project: required field is not set
I found out that I can set it in the stack config (gcp:project along with gcp:zone and gcp:region), but that feels wrong and error-prone to duplicate it here since I already have all of that set up in my gcloud config. Is there any way it can automatically detect the local active gcloud config?
But anyway, even if I fill these values in the stack to test it, I got an error:
error: Plan apply failed: googleapi: Error 403: Required "container.clusters.create" permission(s) for "projects/[...]". See <https://cloud.google.com/kubernetes-engine/docs/troubleshooting#gke_service_account_deleted> for more info., forbidden
while I have all the needed permissions to create a cluster and have had no issue so far creating anything else with Pulumi.
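On the duplication point: one possible workaround, assuming the gcloud CLI is configured locally, is to copy the active gcloud settings into the stack config once rather than typing them by hand:

pulumi config set gcp:project $(gcloud config get-value project)
pulumi config set gcp:region $(gcloud config get-value compute/region)
pulumi config set gcp:zone $(gcloud config get-value compute/zone)

This keeps the stack config in line with whatever gcloud already knows, though it has to be re-run if the local gcloud config changes.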