average-cricket-30620
08/29/2022, 1:09 PM
steep-lamp-20408
08/29/2022, 3:44 PM
Regarding aws.cloudfront.Distribution, its arguments, and the documentation about it:
https://www.pulumi.com/registry/packages/aws/api-docs/cloudfront/distribution/#distributionorigin
1. How can I specify an origin (aws.cloudfront.DistributionOriginArgs) for the aws.cloudfront.Distribution so that it corresponds to “origin access” = “access control settings” (the recommended AWS setting, circled in red on my screenshot), and/or “origin access” = “legacy access identities”?
2. I have trouble understanding the origin_id argument of aws.cloudfront.DistributionOriginArgs on the same documentation page. The doc says “The unique identifier of the member origin”, but that is very obscure to me. Is it the ARN of the S3 bucket? The ID of the S3 bucket? Something else?
strong-helmet-83704
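For context, a sketch of how these two options map onto the provider's resources in recent pulumi-aws versions; the resource names and the `bucket` variable are illustrative assumptions, and the other required Distribution arguments are omitted:

```python
import pulumi_aws as aws

# "Origin access: access control settings" (the recommended option) maps to
# an OriginAccessControl resource referenced from the origin:
oac = aws.cloudfront.OriginAccessControl(
    "my-oac",  # hypothetical name
    origin_access_control_origin_type="s3",
    signing_behavior="always",
    signing_protocol="sigv4",
)

origin = aws.cloudfront.DistributionOriginArgs(
    domain_name=bucket.bucket_regional_domain_name,  # assumes an S3 bucket resource
    origin_id="my-s3-origin",          # any unique string you choose
    origin_access_control_id=oac.id,
)
# Cache behaviors then refer back to the origin via that same string,
# e.g. target_origin_id="my-s3-origin"; origin_id is not an ARN or bucket ID.
```

For “legacy access identities”, the equivalent wiring (to the best of my reading of the provider docs) is an aws.cloudfront.OriginAccessIdentity resource plus s3_origin_config=aws.cloudfront.DistributionOriginS3OriginConfigArgs(origin_access_identity=oai.cloudfront_access_identity_path) on the origin instead of origin_access_control_id.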
08/29/2022, 6:50 PM
error: inputs to import do not match the existing resource
but there is no trace of this resource in my current stack/state. How is Pulumi computing this? Is it referencing checkpoint history to find this historical resource?
swift-fireman-31153
08/30/2022, 1:10 AM
gentle-zoo-32137
08/30/2022, 3:26 PM
victorious-dusk-75271
08/31/2022, 2:00 PM
swift-fireman-31153
08/31/2022, 9:15 PM
const apiDeployment = new aws.apigateway.Deployment("openAPI Deployment", {
    restApi: openApiId,
    triggers: {
        redeployment: pulumi.all([resource.id, methodGet.id, integration.id])
            .apply(([resourceId, methodGetId, integrationId]) => JSON.stringify([
                resourceId,
                methodGetId,
                integrationId,
            ]))
            .apply(toJSON => crypto.createHash('sha1').update(toJSON).digest('hex')),
    },
});
const deploymentStage = new aws.apigateway.Stage("stage", {
    deployment: openApiDeploymentId,
    restApi: openApiId,
    stageName: env,
});
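The redeployment trigger above reduces to a SHA-1 fingerprint of the JSON-serialized resource IDs; the same computation in plain Python, outside Pulumi, with made-up IDs:

```python
import hashlib
import json

def redeployment_trigger(*resource_ids: str) -> str:
    """Fingerprint the resource IDs: if any ID changes, the hash changes,
    which signals API Gateway to create a fresh deployment."""
    payload = json.dumps(list(resource_ids))
    return hashlib.sha1(payload.encode()).hexdigest()

# Made-up IDs, standing in for resource.id, methodGet.id, integration.id.
trigger = redeployment_trigger("res-123", "get-456", "int-789")
print(trigger)  # a 40-character hex digest
```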
swift-fireman-31153
08/31/2022, 9:15 PM
Diagnostics:
  pulumi:pulumi:Stack (open-api-services-dev):
    error: update failed
  aws:apigateway:Stage (stage):
    error: 1 error occurred:
        * error creating API Gateway Stage (dev): ConflictException: Stage already exists
swift-fireman-31153
08/31/2022, 9:16 PM
stage_name on Deployment is not advised, as it causes an interruption in service
little-soccer-5693
08/31/2022, 9:39 PM
steep-lamp-20408
09/01/2022, 9:20 AM
I use aws.lambda_.Function to create an AWS Lambda with Python (the Lambda runs under FastAPI+Mangum), but I’m a bit confused about how to package the Python dependencies for the Lambda.
So far I’ve been doing the following:
import pulumi
import pulumi_aws as aws

my_lambda_lambda_func = aws.lambda_.Function(
    "my-lambda-name",
    name="my-lambda-name",
    role=iam_role_for_lambda.arn,
    runtime="python3.9",
    handler="main.handler",
    package_type="zip",
    code=pulumi.AssetArchive(
        {".": pulumi.FileArchive("./lambdas/my_lambda")}
    ),
    environment=aws.lambda_.FunctionEnvironmentArgs(
        variables={
            "VAR": "my-env-var",
        },
    ),
)
The lambda itself is organized this way:
/lambdas/my_lambda/
    main.py
    pyproject.toml
    requirements.txt
How can I package the Python dependencies? I would prefer using a zip instead of a Docker container, as it is part of a big Pulumi stack.
If I export the dependencies to a /lib folder with something like pip install -t lib -r requirements.txt, will it be taken into account? Should they be exported at the root of the lambda instead? Can we pass the libs/dependencies folder path to the Pulumi lambda_.Function object?
little-soccer-5693
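For what it's worth, the Lambda Python runtime resolves imports from the root of the deployment zip, so dependencies need to sit next to main.py in the archive, not in a /lib subfolder. A standalone sketch (throwaway paths, independent of Pulumi) showing the layout such a package needs:

```python
import pathlib
import tempfile
import zipfile

def build_lambda_zip(func_dir, lib_dir, out):
    """Pack handler code and pip-installed dependencies so that BOTH end up
    at the root of the archive, where the Lambda Python runtime looks."""
    with zipfile.ZipFile(out, "w") as zf:
        for base in (pathlib.Path(func_dir), pathlib.Path(lib_dir)):
            for path in sorted(base.rglob("*")):
                if path.is_file():
                    zf.write(path, path.relative_to(base))
    with zipfile.ZipFile(out) as zf:
        return zf.namelist()

# Demo with throwaway directories standing in for ./lambdas/my_lambda and ./lib.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "func").mkdir()
(tmp / "func" / "main.py").write_text("def handler(event, context): ...\n")
(tmp / "lib" / "somepkg").mkdir(parents=True)
(tmp / "lib" / "somepkg" / "__init__.py").write_text("")

names = build_lambda_zip(tmp / "func", tmp / "lib", tmp / "package.zip")
print(sorted(names))  # main.py and somepkg/... side by side at the root
```

In the Pulumi program above, the simplest equivalent is arguably to run pip install -t ./lambdas/my_lambda -r requirements.txt so the existing FileArchive already contains the dependencies at its root; no extra argument on lambda_.Function is needed.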
09/01/2022, 10:16 PM
Diagnostics:
  aws:ecr:Repository (<reponame>):
    error: deleting urn:pulumi:dev::<reponame>::aws:ecr/repository:Repository::<reponame>: 1 error occurred:
        * ECR Repository (<reponame>-b14d0f6) not empty, consider using force_delete: RepositoryNotEmptyException: The repository with name '<reponame>-b14d0f6' in registry with id '<accountId>' cannot be deleted because it still contains images
However, I am setting the force-delete flag:
repoArgs := &ecr.RepositoryArgs{
    ForceDelete: pulumi.Bool(true),
}
repo, err := ecr.NewRepository(ctx, repoName, repoArgs)
if err != nil {
    return err
}
Is this a bug, or am I setting the wrong attribute?
boundless-farmer-38967
09/02/2022, 11:15 AM
// Lambda role
const lambdaHandlerRole = new paws.iam.Role(`${projectToken}-data-topic-lambda-role`, {
    assumeRolePolicy: {
        Version: "2012-10-17",
        Statement: [{
            Action: "sts:AssumeRole",
            Principal: {
                Service: "lambda.amazonaws.com",
            },
            Effect: "Allow",
            Sid: "",
        }],
    },
});
new paws.iam.RolePolicyAttachment(`${projectToken}-role-attach`, {
    role: lambdaHandlerRole,
    policyArn: paws.iam.ManagedPolicies.AWSLambdaExecute,
});
// SNS topic
const topic = new paws.sns.Topic(`${projectToken}-data-topic`);
// Lambda - code loaded from a sub-dir
const badgerFunc = new paws.lambda.Function(`${projectToken}-data-sender-badger`, {
    code: new pulumi.asset.AssetArchive({
        ".": new pulumi.asset.FileArchive("./lambda/badger"),
    }),
    runtime: "nodejs16.x",
    handler: "index.handler",
    role: lambdaHandlerRole.arn,
});
// Subscribe lambda to SNS
new paws.sns.TopicSubscription(`${projectToken}-badger`, {
    topic: topic.arn,
    protocol: "lambda",
    endpoint: badgerFunc.arn,
});
What I already checked:
1. Run lambda to ensure it's properly set up
2. Manually subscribe the deployed lambda and confirm it triggers on new message
3. Subscribed my email to the same topic to ensure it indeed publishes messages
It has to be something in the setup above, but there's no document anywhere with a complete example for subscribing a lambda to a topic.
Thanks!
sparse-intern-71089
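A commonly missed piece in a Lambda/SNS subscription like the one above is an explicit permission letting SNS invoke the function; without it the subscription exists but Lambda rejects the deliveries, so the function never fires. A hedged sketch of that resource using Pulumi's Python SDK (names are hypothetical, mirroring the TypeScript above, and this is not a confirmed diagnosis):

```python
import pulumi_aws as aws

# Allow the SNS topic to invoke the subscribed function via
# the function's resource policy.
allow_sns = aws.lambda_.Permission(
    "allow-sns-invoke",              # hypothetical resource name
    action="lambda:InvokeFunction",
    function=badger_func.name,       # the function subscribed to the topic
    principal="sns.amazonaws.com",
    source_arn=topic.arn,
)
```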
09/02/2022, 12:40 PM
victorious-dusk-75271
09/02/2022, 10:02 PM
-- kubernetes:apps/v1:Deployment allrites-frontend deleting original error: unknown
-- kubernetes:apps/v1:Deployment allrites-frontend **deleting failed** error: unknown
@ Updating....
aws:route53:Record ssl-cert-validation-dns-record
pulumi:pulumi:Stack nuxt-application-primary-staging running error: update failed
pulumi:pulumi:Stack nuxt-application-primary-staging **failed** 1 error
Diagnostics:
  kubernetes:apps/v1:Deployment (allrites-frontend):
    error: unknown
  pulumi:pulumi:Stack (nuxt-application-primary-staging):
    error: update failed
Any idea what these "unknown" errors mean?
victorious-dusk-75271
09/02/2022, 10:03 PM
victorious-dusk-75271
09/03/2022, 1:46 PM
I'm using @pulumi/eks, and even though Pulumi is creating resources such as the EC2 LaunchConfiguration, they are not showing up in the cluster's node group:
new eks.NodeGroup(`${name}-ng-1`, {
    cluster: cluster,
    labels: { preemptible: 'true' },
    instanceType: "t2.medium",
    desiredCapacity: 2,
    minSize: 2,
    maxSize: 5,
    instanceProfile: instanceProfile,
    nodeSubnetIds: args.privateSubnetIds,
    nodeRootVolumeSize: 30,
    autoScalingGroupTags: this.cluster.core.cluster.name.apply(clusterName => ({
        "k8s.io/cluster-autoscaler/enabled": "true",
        [`k8s.io/cluster-autoscaler/${clusterName}`]: "true",
    })),
}, { provider: opts?.provider, parent: cluster })
I don't know how to debug this issue
victorious-dusk-75271
09/04/2022, 12:11 AM
freezing-artist-36980
09/04/2022, 12:48 PM
Destroying (staging):
     Type                       Name                 Status           Info
     pulumi:pulumi:Stack        myapp-infra-staging  failed           1 error
 -   └─ aws:rds:SubnetGroup     rds-subnet-group     deleting failed  1 error

Diagnostics:
  aws:rds:SubnetGroup (rds-subnet-group):
    error: deleting urn:pulumi:staging::myapp-infra::aws:rds/subnetGroup:SubnetGroup::rds-subnet-group: 1 error occurred:
        * deleting RDS Subnet Group (rds-subnet-group-84e29cf): InvalidDBSubnetGroupStateFault: Cannot delete the subnet group 'rds-subnet-group-84e29cf' because at least one database instance: myapp-staging is still using it.
          status code: 400, request id: 2d691633-f9eb-4bf9-969c-f4db4ac4ee89
  pulumi:pulumi:Stack (myapp-infra-staging):
    error: update failed
victorious-dusk-75271
09/04/2022, 6:13 PM
curved-appointment-51749
09/05/2022, 11:08 AM
victorious-dusk-75271
09/05/2022, 5:44 PM
victorious-dusk-75271
09/05/2022, 6:47 PMhttps://puu.sh/JjR0X/d79d8e5cef.png▾
victorious-dusk-75271
09/05/2022, 6:48 PM
cool-glass-63014
09/06/2022, 9:27 AM
victorious-dusk-75271
09/06/2022, 11:56 AM
creamy-pharmacist-70032
09/06/2022, 4:31 PM
The eip documentation (https://www.pulumi.com/registry/packages/aws/api-docs/ec2/eip/) does not give any hints about that being supported, any ideas?
strong-helmet-83704
09/06/2022, 5:17 PM
try:
    vgw = aws.ec2.get_vpn_gateway(
        filters=[aws.ec2.GetVpnGatewayFilterArgs(
            name="tag:Name",
            values=[f"Vgw"],
        )], opts=pulumi.ResourceOptions(provider=provider_options)
    )
    vgw = aws.ec2.VpnGateway(f"Vgw",
        tags={"Name": f"Vgw"},
        opts=pulumi.ResourceOptions(
            provider=provider_options,
            retain_on_delete=True,
            import_=vgw.id
        )
    )
except:
    vgw = aws.ec2.VpnGateway(f"Vgw",
        tags={"Name": f"Vgw"},
        opts=pulumi.ResourceOptions(
            provider=provider_options,
            retain_on_delete=True
        )
    )
Is this the best way to achieve this? It works, but it doesn't seem particularly elegant for such a common/simple task.
victorious-dusk-75271
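One stylistic hazard in the snippet above is the bare except:, which also swallows unrelated failures. The get-or-create shape can be narrowed as in this plain-Python sketch (the in-memory registry is a hypothetical stand-in for the cloud lookup, not a Pulumi API):

```python
def get_or_create(lookup, create):
    """Try a lookup first; fall back to creating the resource.

    Catching only the lookup's failure mode (modeled here as LookupError)
    keeps a bare `except:` from silently masking unrelated bugs.
    """
    try:
        return lookup()
    except LookupError:
        return create()

# Hypothetical in-memory "registry" standing in for the cloud provider.
registry = {}

def lookup():
    if "vgw" not in registry:
        raise LookupError("no matching VPN gateway")
    return registry["vgw"]

def create():
    registry["vgw"] = "vgw-new"
    return registry["vgw"]

print(get_or_create(lookup, create))  # -> vgw-new (created on first call)
```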
09/07/2022, 6:23 PM
Resources:
    ~ 6 to update
    +-48 to replace
    54 changes. 356 unchanged
Do you want to perform this update? details
pulumi:pulumi:Stack: (same)
    [urn=urn:pulumi:production::allrites-infrastructure::pulumi:pulumi:Stack::allrites-infrastructure-production]
    ~ eks:index:VpcCni: (update)
        [id=cb5e864952980aec]
        [urn=urn:pulumi:production::allrites-infrastructure::custom:resource:eks$eks:index:Cluster$eks:index:VpcCni::us-eks-eks-cluster-vpc-cni]
      + kubeconfig: output<string>
    ++pulumi:providers:kubernetes: (create-replacement)
        [id=f5843109-d992-4709-afac-e4be7342d1ea]
        [urn=urn:pulumi:production::allrites-infrastructure::pulumi:providers:kubernetes::us-eks-k8s-provider]
      - kubeconfig: {
          - apiVersion : "v1"
          - clusters   : [
          -     [0]: {
                ...................
                }
      + kubeconfig: output<string>
These constant changes to kubeconfig are very painful
victorious-dusk-75271
09/07/2022, 6:23 PM
@pulumi/eks does this on every up command 😞