# general
a
I've got a resource that depends on an S3 BucketObject, but when the bucket object's content changes, the resource that depends on it doesn't update. Is there a way of fixing that?
e
Does the other resource depend on the actual content of the bucket object, or just its ID or similar? Is the content being changed by Pulumi, or by an external system?
a
localRunCommand, err := local.Run(ctx, &local.RunArgs{
        Command: "find . -name '*.vhd'",
})

bucketObject, err := s3.NewBucketObject(ctx, "test", &s3.BucketObjectArgs{
        Bucket: bucket.ID(), // reference to the s3.Bucket object
        Source: pulumi.NewFileAsset(*localRunCommand.Stdout),
})

snapshotImport, err := ebs.NewSnapshotImport(ctx, "snapshot", &ebs.SnapshotImportArgs{
        DiskContainer: &ebs.SnapshotImportDiskContainerArgs{
                Format: pulumi.String("VHD"),
                UserBucket: &ebs.SnapshotImportDiskContainerUserBucketArgs{
                        S3Bucket: bucket.Bucket,
                        S3Key:    bucketObject.Key,
                },
        },
        RoleName: policyAttachment.Role,
}, pulumi.Parent(bucketObject), pulumi.DependsOn([]pulumi.Resource{bucketObject}))
this is how it's tied together now
e
So “dependsOn” only tracks resource IDs, not resource properties (like content). And changing a bucket object's content doesn't re-create the bucket object itself (so the ID doesn't change). I think you could add replaceOnChanges to the bucket object so that it would trigger a full delete/create when the content changes, and that would then propagate a replace to the SnapshotImport. The command resource has an explicit triggers property for triggering a replace like this based on values; I wonder if it's worth making that an engine feature, so that “dependsOn” (or a new option) could take values, not just resource IDs…
a
thanks, now after adding
bucketObject, err := s3.NewBucketObject(ctx, "test", &s3.BucketObjectArgs{
        Bucket: bucket.ID(),
        Source: pulumi.NewFileAsset(*localRunCommand.Stdout),
}, pulumi.ReplaceOnChanges([]string{"source"}))  // <-- this line
it seems on pulumi up the bucketObject replaces itself. BUT snapshotImport doesn't update at all, so I canceled the pulumi up
aws:s3:BucketObject  test  replace     [diff: ~source]
e
hmm I thought changing parent would be enough to trigger a replace
a
I guess I could use replaceOnChanges there too, but it feels very hacky at that point
e
Might need to use --target-replace to trigger this,
or also add replaceOnChanges to the SnapshotImport for “diskContainer.userBucket.s3Key”
a
the bucketObject.Key stays the same btw, even if the source changes
the value of it at least
even on replace
e
ah, that might be what's tripping the system up: it can't see that it's an ID change at that level
try --target-replace, and maybe raise an issue about this problem. We can see if there are better ways to support this.
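(For reference, a sketch of that --target-replace flow; the stack, project, and resource names in the URN below are placeholders, not from this thread:)

```shell
# List resource URNs in the stack, then force a replace of just the
# snapshot import (copy the real URN from the first command's output).
pulumi stack --show-urns
pulumi up --target-replace 'urn:pulumi:dev::proj::aws:ebs/snapshotImport:SnapshotImport::snapshot'
```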
a
I used NewBucketObjectv2, which I can change the key name dynamically with, and that causes updates down the line fine
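(One way to make that key change automatic, sketched here rather than taken from the thread: derive the object key from a hash of the content, so the key itself changes whenever the content does, and anything referencing bucketObject.Key sees a diff. The helper name and key format below are made up for illustration:)

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// contentKey derives an S3 object key that embeds a short hash of the
// object's content. Because the key changes whenever the content does,
// any downstream resource referencing the key sees a diff and updates.
func contentKey(name string, content []byte) string {
	sum := sha256.Sum256(content)
	return fmt.Sprintf("%s-%x.vhd", name, sum[:8])
}

func main() {
	// Different content produces a different key; identical content
	// always produces the same key.
	fmt.Println(contentKey("disk", []byte("image v1")))
	fmt.Println(contentKey("disk", []byte("image v2")))
}
```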
btw, uploading big objects to S3 is really slow, and sometimes Pulumi just stops and hangs while creating the BucketObject. Is there a nice solution to that?
e
That's news to me; might be worth checking our AWS repo on GitHub and seeing if there's already an issue for that, and if not, raise one. The engine ought to be able to handle large uploads without issue.
a
issue is probably on my end, uploading a 1.4 GB file over a 500 Mbps uplink, but it takes ages, which is weird