dry-engine-17210
01/22/2021, 10:11 PM
bored-car-38257
02/12/2021, 5:42 AM
bored-car-38257
02/12/2021, 7:01 AM
stocky-window-81967
02/17/2021, 2:03 AM
fierce-area-75437
02/22/2021, 3:25 PM
gcp:project: The Google Cloud project to deploy into:
Should that project name match my Google Cloud project name, or the GCP project ID, or something else entirely?

red-area-47037
02/23/2021, 9:34 AM
zones/zone/machineTypes/custom-CPUS-MEMORY
For example: zones/us-central1-f/machineTypes/custom-4-5120 For a full list of restrictions, read the Specifications for custom machine types.
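A hedged sketch of that naming scheme as a small helper, with the two documented constraints for N1 custom types (vCPU count of 1 or an even number; memory a multiple of 256 MiB) checked up front:

```typescript
// Build the machineType URL for a GCE custom machine type, e.g.
// zones/us-central1-f/machineTypes/custom-4-5120 (memory is in MiB).
function customMachineType(zone: string, vcpus: number, memoryMib: number): string {
  if (vcpus !== 1 && vcpus % 2 !== 0) {
    throw new Error("vCPU count must be 1 or an even number");
  }
  if (memoryMib % 256 !== 0) {
    throw new Error("memory must be a multiple of 256 MiB");
  }
  return `zones/${zone}/machineTypes/custom-${vcpus}-${memoryMib}`;
}
```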
limited-rainbow-51650
03/01/2021, 8:52 AM
The machine running pulumi up is not in the same VPC as the DB setup, so we want to integrate cloud_sql_proxy into our code setup. Starting the proxy via a NodeJS child_process works, but we are searching for the correct place in our code to stop the proxy after the mysql.Grant resources have been created/updated:
const ddl = new mysql.Grant(
  config.dbDDLUsername,
  {
    user: DDLUser.name,
    database: database.name,
    privileges: ["CREATE", "ALTER", "DROP"],
    host: "%",
  },
  { provider: mysqlProvider }
);

const dml = new mysql.Grant(
  config.dbDMLUsername,
  {
    user: DMLUser.name,
    database: database.name,
    privileges: ["UPDATE", "INSERT", "SELECT", "INDEX", "DELETE"],
    host: "%",
  },
  { provider: mysqlProvider }
);

pulumi.all([ddl.id, dml.id]).apply(async () => {
  console.log(`>>>>> Killing cloud_sql_proxy... <<<<<`);
  sqlProxyProcess.kill();
});
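For what it's worth, a common cause of this kind of hang is that the child's open stdio pipes keep the Node event loop alive. One hedged way around it is to start the proxy detached with its stdio ignored and signal it explicitly; a minimal sketch (the command and flags are illustrative, not from this thread):

```typescript
import { spawn, ChildProcess } from "child_process";

// Start the proxy detached with stdio ignored, so the child never keeps the
// Node event loop (and hence the Pulumi program) alive on its own.
function startProxy(cmd: string, args: string[]): ChildProcess {
  const proc = spawn(cmd, args, { detached: true, stdio: "ignore" });
  proc.unref(); // allow the parent process to exit even if the child still runs
  return proc;
}

// Signal the proxy to shut down; returns true if the signal was delivered.
function stopProxy(proc: ChildProcess): boolean {
  return proc.kill("SIGTERM");
}
```

Note also that `.apply` only runs when the outputs actually resolve, so starting and stopping the proxy inside the program is fragile; a wrapper script that starts the proxy, runs `pulumi up` or `pulumi destroy`, and then stops it sidesteps both the hang and the destroy case mentioned below.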
Pulumi is hanging now and is not killing the child process. Any ideas?

limited-rainbow-51650
03/01/2021, 9:19 AM
On pulumi destroy, our proxy isn't even started, so the deletion of the mysql.Grant resources doesn't happen.

handsome-accountant-55124
03/03/2021, 8:04 AM
pushEndpoint: myCloudRunService.status.url
But now there is a 'statuses' property instead. I've tried something like the below, but that doesn't work 😔
pushEndpoint: myCloudRunService.statuses.get().pop()!.url
or
pushEndpoint: myCloudRunService.statuses.apply(
statuses => {
let hostname = statuses.pop()!.url;
return hostname;
})
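Two hedged observations on the attempts above: `.get()` raises an error during a deployment, and `pop()` both mutates the array and returns the *last* element rather than the first. A sketch of just the extraction (the status shape is assumed, not taken from the SDK docs):

```typescript
// Assumed shape of one entry in the Cloud Run service's `statuses` output.
interface RunStatus {
  url?: string;
}

// Pick the first status URL, failing loudly if none is present yet.
function firstStatusUrl(statuses: RunStatus[]): string {
  const url = statuses[0]?.url;
  if (!url) throw new Error("Cloud Run service has no status URL yet");
  return url;
}
```

This would then be used as `pushEndpoint: myCloudRunService.statuses.apply(firstStatusUrl)`; depending on the SDK version, the lifted form `myCloudRunService.statuses[0].url` may also work.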
Super grateful for any help on this one.

adorable-action-51248
03/05/2021, 3:59 PM
const serviceDirectoryNamespace = new gcp.servicedirectory.Namespace('ns', {
  namespaceId: "nsid",
  location: 'europe-west3',
  project,
});
const dnsManagedZone = new gcp.dns.ManagedZone('zone', {
  dnsName: 'fancy.local.',
  project,
  visibility: "private",
  serviceDirectoryConfig: {
    namespace: {
      namespaceUrl: serviceDirectoryNamespace.selfLink,
    },
  },
});
I get this error message:
gcp:dns/managedZone:ManagedZone resource 'zone' has a problem: "service_directory_config.0.namespace.0.namespace_url": required field is not set
If I change serviceDirectoryConfig and wrap the contents in arrays, I get error messages like this: [...] has a problem: service_directory_config.0.namespace.0: expected object, got slice
The code I have looks pretty much like the code here: https://github.com/pulumi/pulumi-gcp/blob/master/sdk/nodejs/dns/managedZone.ts#L134
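One hedged guess: if `selfLink` is not actually an output of gcp.servicedirectory.Namespace, then `namespaceUrl` ends up undefined, which would produce exactly the "required field is not set" error. The field appears to want the namespace's fully qualified resource name, which can be built from the same arguments (names mirror the snippet above):

```typescript
// Build the fully qualified Service Directory namespace resource name, e.g.
// projects/my-proj/locations/europe-west3/namespaces/nsid
function namespaceResourceName(project: string, location: string, namespaceId: string): string {
  return `projects/${project}/locations/${location}/namespaces/${namespaceId}`;
}
```

In the program itself, `serviceDirectoryNamespace.name` (if the provider exposes it) or `pulumi.interpolate` over these three values would be the equivalent.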
Does anybody have an idea why this is not working?

wet-soccer-72485
03/08/2021, 8:06 PM
limited-planet-95090
03/08/2021, 11:52 PM
adorable-action-51248
03/16/2021, 3:28 PM
Is there a way to create a BackendConfig (apiVersion: cloud.google.com/v1; also see https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#direct_health) with Pulumi? Would k8s.yaml.ConfigFile work?

limited-rainbow-51650
03/16/2021, 4:45 PM
Although project is optional, we get a stack trace that project is not defined when running.

limited-rainbow-51650
03/16/2021, 4:47 PM
We have an Output<string> coming from a StackReference. How can we pass the real string as the value to getGlobalAddress? (aka Output to string)

incalculable-animal-125
03/16/2021, 8:59 PM
gcloud container clusters create CLUSTER_NAME \
--workload-pool=PROJECT_ID.svc.id.goog
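For reference, the pool name in that flag is always derived from the project ID; a trivial sketch of the derivation (in pulumi-gcp the corresponding Cluster argument is, as far as I can tell, workloadIdentityConfig, but check the schema of the provider version you are on):

```typescript
// The Workload Identity pool for a project is always PROJECT_ID.svc.id.goog.
function workloadPool(projectId: string): string {
  return `${projectId}.svc.id.goog`;
}
```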
incalculable-animal-125
03/16/2021, 9:03 PM
incalculable-animal-125
03/16/2021, 9:25 PM
modern-napkin-96707
03/17/2021, 3:05 PM
python my-dataflow-script.py --template_location 'gs://my-template-bucket/my-dataflow-template'
or I can hardcode the template_location
in the script itself and just run python my-dataflow-script.py
which then packages the beam application as a template to be run in dataflow.
I’ve tried calling the dataflow script from another python script using exec(open('my-dataflow-script.py').read())
which works, but trying that in pulumi’s __main__.py
fails with:
TypeError: cannot pickle 'TaskStepMethWrapper' object
I guess apache_beam
tries to pickle the whole pulumi program or do something else which probably doesn’t make sense.
Any experience on dataflow + pulumi and getting this working?

plain-potato-84679
03/19/2021, 7:52 AM
docker pull hasura/graphql-engine
docker tag docker.io/hasura/graphql-engine:latest gcr.io/gcp-project/hasura
docker push gcr.io/gcp-project/hasura
• How to insert basic data into a Postgres (Cloud SQL) database that was set up via Pulumi:
CREATE OR REPLACE function...
Really appreciate your help!

adorable-action-51248
03/22/2021, 2:20 PM
Does gcp.compute.Address accept purpose: SHARED_LOADBALANCER_VIP? I am using it like this:
new gcp.compute.Address("myip", {
project,
addressType: 'INTERNAL',
subnetwork: subnet.selfLink,
labels,
region:'europe-west1',
purpose: 'SHARED_LOADBALANCER_VIP'
});
but it fails with the error: has a problem: expected purpose to be one of [GCE_ENDPOINT], got SHARED_LOADBALANCER_VIP
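The validation list in that error only contains GCE_ENDPOINT, which suggests the installed provider predates SHARED_LOADBALANCER_VIP support. Assuming a newer release accepts the value, upgrading the provider package is the likely fix:

```
npm install @pulumi/gcp@latest
```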
bored-car-38257
03/29/2021, 3:36 AM
pulumi. For example, if I want to make the service account storage.Admin using serviceaccount.NewIAMMember, roles are given as below:
• roles/storage.admin
• projects/<projectName>/roles/storage.admin
Both threw error 400: does not exist in the resource's hierarchy., badRequest
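A hedged reading of that 400: serviceaccount.NewIAMMember grants roles *on the service account resource itself*, and storage roles don't exist in a service account's hierarchy. Granting a service account roles/storage.admin across the project is projects.NewIAMMember territory, with the member string shaped like this:

```typescript
// IAM member string for a service account, as used with project-level IAM bindings.
function serviceAccountMember(email: string): string {
  return `serviceAccount:${email}`;
}
```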
boundless-artist-3489
04/02/2021, 7:54 AM
config:
  gcp-py-network-component:subnet_cidr_blocks: '172.2.0.0/16'
Would you have an idea?
Thanks in advance for your answer.

bored-car-38257
04/13/2021, 2:39 PM
gcp.projects.IAM* & gcp.serviceAccount.IAM*: how and when should we use these two?

flaky-evening-60547
04/16/2021, 9:34 PM
prehistoric-nail-50687
04/20/2021, 7:48 AM
proud-pizza-80589
04/20/2021, 8:09 PM
const engineVersion = gcp.container.getEngineVersions().then((v) => v.latestMasterVersion);
and then
minMasterVersion: engineVersion,
nodeVersion: engineVersion,
but the order of upgrading seems wrong as it starts to complain now about
* googleapi: Error 400: Node version "1.19.8-gke.1600" must not have a greater minor version than master version "1.18.16-gke.502"., badRequest
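The complaint is about minor-version skew: node pools may not run a newer minor version than the control plane, so the master has to be upgraded first (or both pinned to the same resolved version). A hedged sketch of the check, using the version strings from the error:

```typescript
// Extract the "major.minor" part of a GKE version string like "1.18.16-gke.502".
function minorTuple(v: string): [number, number] {
  const [maj, min] = v.split(".").map(Number);
  return [maj, min];
}

// True when the node version's minor is ahead of the master's, which GKE rejects.
function nodeNewerThanMaster(nodeV: string, masterV: string): boolean {
  const [nMaj, nMin] = minorTuple(nodeV);
  const [mMaj, mMin] = minorTuple(masterV);
  return nMaj > mMaj || (nMaj === mMaj && nMin > mMin);
}
```

Resolving getEngineVersions() once and feeding the same value to both minMasterVersion and nodeVersion only avoids the skew if the running master is already at that version; otherwise bump minMasterVersion in its own pulumi up first.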
How should I manage that?

bored-car-38257
04/22/2021, 4:01 AM
Is it possible to create auto-pilot GKE clusters using Pulumi?

bored-car-38257
04/27/2021, 3:23 PM
GKE auto-pilot: I created an autopilot cluster manually, then imported it with the pulumi import command, and got the suggested code below.
package main

import (
    "github.com/pulumi/pulumi-gcp/sdk/v5/go/gcp/container"
    "github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
    pulumi.Run(func(ctx *pulumi.Context) error {
        _, err := container.NewCluster(ctx, "name", &container.ClusterArgs{
            EnableAutopilot:           pulumi.Bool(true),
            EnableBinaryAuthorization: pulumi.Bool(false),
            EnableKubernetesAlpha:     pulumi.Bool(false),
            EnableL4IlbSubsetting:     pulumi.Bool(false),
            EnableLegacyAbac:          pulumi.Bool(false),
            EnableTpu:                 pulumi.Bool(false),
            Name:                      pulumi.String("cluster-name"),
            Network:                   pulumi.String("default"),
            VerticalPodAutoscaling: &container.ClusterVerticalPodAutoscalingArgs{
                Enabled: pulumi.Bool(true),
            },
        }, pulumi.Protect(true))
        if err != nil {
            return err
        }
        return nil
    })
}
But when I tried using the above code as a sample to create another auto-pilot cluster, I got the below error:
error: gcp:container/cluster:Cluster resource 'name' has a problem: ConflictsWith: "enable_binary_authorization": conflicts with enable_autopilot. Examine values at 'Cluster.EnableBinaryAuthorization'.
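A hedged reading of that error: when EnableAutopilot is true, the conflicting toggles must be omitted entirely rather than set to false (the import emitted them all explicitly). A non-compilable fragment of the argument struct, trimmed accordingly:

```go
_, err := container.NewCluster(ctx, "name", &container.ClusterArgs{
    EnableAutopilot: pulumi.Bool(true),
    // Omit EnableBinaryAuthorization, EnableKubernetesAlpha,
    // EnableL4IlbSubsetting, EnableLegacyAbac and EnableTpu entirely:
    // with Autopilot on, even an explicit false appears to conflict.
    Name:    pulumi.String("cluster-name"),
    Network: pulumi.String("default"),
}, pulumi.Protect(true))
```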
boundless-intern-43214
04/29/2021, 10:00 AM
const cluster = new gcp.container.Cluster(
"cluster",
{
[...]
maintenancePolicy: {
recurringWindow: {
startTime: "07:00",
endTime: "15:00",
recurrence: "FREQ=WEEKLY;BYDAY=MO,TU,WE,TH",
},
},
},
);
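One hedged suspicion about this config, in case the API rejects it: recurring maintenance windows take RFC 3339 timestamps, while bare clock times like "07:00" are only valid for a daily window. The same policy with full timestamps (the dates are arbitrary anchor values, assumed here):

```typescript
maintenancePolicy: {
  recurringWindow: {
    // RFC 3339 timestamps; the date part anchors the window, the time part recurs.
    startTime: "2021-01-01T07:00:00Z",
    endTime: "2021-01-01T15:00:00Z",
    recurrence: "FREQ=WEEKLY;BYDAY=MO,TU,WE,TH",
  },
},
```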