quick-wolf-8403
05/12/2022, 12:54 AM
pulumi up if the docker image has changed. Do I need to change the value of the image string to trigger this? Or will it change if the tag is pointing to a new image? Or do I need to extract the SHA and pass that in?

future-window-78560
05/15/2022, 4:10 AM

kind-island-70054
05/16/2022, 2:35 PM
new gcp.firebaserules.Ruleset(
"firestore-rules",
{
project: gcp.config.project,
source: {
files: [
{
content: fs
.readFileSync(path.resolve(__dirname, "../../firestore.rules"))
.toString(),
name: "firestore.rules",
},
],
},
},
{ dependsOn: services }
);
I have enabled the firebaserules service this way:
new gcp.projects.Service("firebaserules", {
    service: "firebaserules.googleapis.com",
});
But I receive a SERVICE_DISABLED error when I run pulumi up:
[
{
    "@type": "type.googleapis.com/google.rpc.ErrorInfo",
    "domain": "googleapis.com",
"metadata": {
"consumer": "projects/764086053860",
      "service": "firebaserules.googleapis.com"
},
"reason": "SERVICE_DISABLED"
}
]
That project number is not mine weirdly…
It also gives me this error message but I don’t think that it’s related to my problem, is it?
Error creating Ruleset: googleapi: Error 403: Your application has authenticated using end user credentials from the Google Cloud SDK or Google Cloud Shell which are not supported by the firebaserules.googleapis.com. We recommend configuring the billing/quota_project setting in gcloud or using a service account through the auth/impersonate_service_account setting. For more information about service accounts and how to use them in your application, see https://cloud.google.com/docs/authentication/. If you are getting this error with curl or similar tools, you may need to specify 'X-Goog-User-Project' HTTP header for quota and billing purposes. For more information regarding 'X-Goog-User-Project' header, please check https://cloud.google.com/apis/docs/system-parameters.
Is there an additional service to enable that I don’t know about maybe? Has anybody encountered a similar error?

high-church-15413
05/16/2022, 5:10 PM

future-window-78560
05/16/2022, 5:43 PM

wet-soccer-72485
05/18/2022, 8:23 PM
Should UptimeCheckConfig be replaced on each Pulumi preview and update, regardless of whether there are changes?

future-window-78560
05/19/2022, 10:29 AM

clever-king-43153
05/19/2022, 10:04 PM

ambitious-school-26690
05/25/2022, 9:34 AM
gcp.secretmanager.getSecretVersion will fail when the service secretmanager.googleapis.com is not enabled yet. I can manage dependencies for resources but not for data sources. How do I handle conditionally fetching the Secret only when the API is enabled?

modern-thailand-30846
05/26/2022, 10:11 PM

kind-keyboard-17263
05/27/2022, 2:30 PM
vpc = gcp.compute.Network(
"default",
name="default",
project=project,
auto_create_subnetworks=False,
routing_mode="GLOBAL")
ipv4_address = gcp.compute.GlobalAddress(
"ipv4-address",
address="192.168.3.1",
description="IP address range to be used for private connection",
network=vpc.id,
project=project,
address_type="INTERNAL",
purpose="PRIVATE_SERVICE_CONNECT", # Correct ?
)
private_vpc_peering = gcp.servicenetworking.Connection(
"private-vpc-peering",
network="default",
    service="servicenetworking.googleapis.com",
reserved_peering_ranges=[ipv4_address.name]
)
When I execute this, I see this very cryptic error:
* Failed to find Service Networking Connection, err: Failed to retrieve network field value, err: project: required field is not set
I don't really understand the meaning of the error 😅! Thanks for the help.

thousands-jelly-11747
05/28/2022, 12:06 AM

thousands-jelly-11747
05/28/2022, 12:07 AM

thousands-jelly-11747
05/28/2022, 12:08 AM

thousands-jelly-11747
05/28/2022, 12:10 AM
pulumi plugin install resource gcp v3.25.0
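Regarding kind-keyboard-17263's servicenetworking error above: one plausible reading of "project: required field is not set" is that the Connection receives the bare network name "default", so the provider cannot tell which project to look the network up in. A minimal sketch of that change, assuming this diagnosis (untested; vpc and ipv4_address are the resources from the question):

```python
import pulumi_gcp as gcp

# Pass the Network resource's id (a fully qualified self-link that carries the
# project) instead of the bare string "default", so the provider can resolve
# the project when it looks up the peering connection.
private_vpc_peering = gcp.servicenetworking.Connection(
    "private-vpc-peering",
    network=vpc.id,  # was: network="default"
    service="servicenetworking.googleapis.com",
    reserved_peering_ranges=[ipv4_address.name],
)
```

Separately, for reserved peering ranges GCP generally expects the GlobalAddress to use purpose="VPC_PEERING" with an address plus prefix_length, rather than purpose="PRIVATE_SERVICE_CONNECT", though that is a different issue from the quoted error.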
orange-crowd-9665
06/01/2022, 9:34 AM

quick-wolf-8403
06/02/2022, 3:45 PM
gcloud, but...)

broad-parrot-2692
06/04/2022, 4:13 AM

prehistoric-activity-61023
06/05/2022, 9:15 PM
After pulumi refresh, I got a lot of complaints. The new disk created from the snapshot, even though it had the same name, differed from the original one because of the snapshot and image field values. My question is: is it a legitimate scenario in GCP where I should use ignore_changes on the Disk resource?

broad-parrot-2692
06/05/2022, 11:55 PM

breezy-lifeguard-15721
06/07/2022, 12:30 AM
has terminated with state "JOB_STATE_UPDATED". Dataflow will create a new jobId; it seems like the Pulumi state keeps the old job.
From what I see, the provider has fixed this, but I am still getting the above result.
https://github.com/pulumi/terraform-provider-google-beta/blob/18e8f0589864f98ea7bc[…]015f6935eb64/google-beta/resource_dataflow_flex_template_job.go

kind-keyboard-17263
06/07/2022, 8:21 AM

quick-wolf-8403
06/08/2022, 5:13 PM
pulumi update keeps getting hung up on pending operations. Do I need to remove the existing services and let Pulumi bring them up, starting clean?

ancient-rose-25146
06/13/2022, 7:45 PM
error sending request: googleapi: Error 400: Must provide an update.
Here is the change:
Initial:
const cluster = new gcpNative.container.v1.Cluster(clusterNameUS, {
name: clusterNameUS + `-${environment}`,
project: project,
location: "us-west2-b",
releaseChannel: {
channel: "REGULAR",
},
initialClusterVersion: "1.21.9-gke.1002",
workloadIdentityConfig: {
workloadPool: workloadPool,
},
networkConfig: {},
ipAllocationPolicy: {
useIpAliases: true,
},
nodePools: [
{
config: nodeConfig,
initialNodeCount: 3,
name: `${environment}-us`,
autoscaling: {
enabled: true,
maxNodeCount: 5,
minNodeCount: 3,
},
},
{
config: gpuNodeConfig,
initialNodeCount: 1,
name: `${environment}-us-gpu`,
autoscaling: {
enabled: true,
maxNodeCount: 5,
minNodeCount: 1,
},
},
],
});
updated:
const cluster = new gcpNative.container.v1.Cluster(clusterNameUS, {
name: clusterNameUS + `-${environment}`,
project: project,
location: "us-west2-b",
releaseChannel: {
channel: "REGULAR",
},
initialClusterVersion: "1.21.9-gke.1002",
workloadIdentityConfig: {
workloadPool: workloadPool,
},
networkConfig: {},
ipAllocationPolicy: {
useIpAliases: true,
},
nodePools: [
{
config: nodeConfig,
initialNodeCount: 3,
name: `${environment}-us`,
autoscaling: {
enabled: true,
maxNodeCount: 5,
minNodeCount: 3,
},
},
{
config: gpuNodeConfig,
initialNodeCount: 1,
name: `${environment}-us-gpu`,
autoscaling: {
enabled: true,
maxNodeCount: 5,
minNodeCount: 1,
},
},
{
config: a100NodeConfig,
initialNodeCount: 1,
name: `${environment}-us-a100`,
autoscaling: {
enabled: true,
maxNodeCount: 5,
minNodeCount: 1,
},
},
],
});
The only difference is the addition of the a100 pool.

cold-carpenter-61763
06/14/2022, 8:27 PM
--event-filters-path-pattern in the gcp documentation). I don't understand how to specify that in Pulumi, though. The docs here mention the operator might be match-path-pattern, but I don't know what to put for attribute and value. E.g. here's my non-working config:
matchingCriterias: [
{
attribute: "type",
value: "google.cloud.storage.object.v1.finalized",
},
{
attribute: "bucket",
value: bucketName
},
{
attribute: "resourceName",
value: "/**/metadata.yaml",
operator: "match-path-pattern"
}
],
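On the path-pattern question above: an untested guess, modeled on the gcloud --event-filters-path-pattern examples rather than anything verified in Pulumi, is that attribute stays "resourceName" with operator "match-path-pattern", and value is the full Cloud Storage resource-name pattern rather than a bare object path. bucketName, the region, and the destination are placeholders for values from the original program:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

// Placeholder: the bucket name comes from the original program.
declare const bucketName: pulumi.Output<string>;

const trigger = new gcp.eventarc.Trigger("metadata-trigger", {
    location: "us-central1", // assumed region
    matchingCriterias: [
        { attribute: "type", value: "google.cloud.storage.object.v1.finalized" },
        { attribute: "bucket", value: bucketName },
        {
            attribute: "resourceName",
            // Assumption: the pattern is matched against the full resource
            // name, e.g. /projects/_/buckets/<bucket>/objects/<path>.
            value: pulumi.interpolate`/projects/_/buckets/${bucketName}/objects/**/metadata.yaml`,
            operator: "match-path-pattern",
        },
    ],
    destination: {
        // elided: cloudRunService / workflow, as in the original program
    },
});
```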
melodic-greece-12878
06/15/2022, 8:17 AM

ancient-rose-25146
06/16/2022, 7:06 PM

blue-leather-96987
06/18/2022, 10:18 PM
bigquery.DatasetAccessArray does not implement bigquery.DatasetAccessTypeArrayInput (missing method ToDatasetAccessTypeArrayOutput)
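On the Go error just above: bigquery.DatasetAccessArray is the input type for the standalone DatasetAccess resource, while the Dataset resource's accesses field expects bigquery.DatasetAccessTypeArray with DatasetAccessTypeArgs elements. A sketch of the likely fix, assuming the array is being passed to bigquery.NewDataset (dataset id, role, and email are illustrative):

```go
package main

import (
	"github.com/pulumi/pulumi-gcp/sdk/v6/go/gcp/bigquery"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Use DatasetAccessTypeArray/DatasetAccessTypeArgs (the nested
		// "access" block type on Dataset), not DatasetAccessArray (the
		// standalone DatasetAccess resource's input type).
		_, err := bigquery.NewDataset(ctx, "example", &bigquery.DatasetArgs{
			DatasetId: pulumi.String("example_dataset"),
			Accesses: bigquery.DatasetAccessTypeArray{
				&bigquery.DatasetAccessTypeArgs{
					Role:        pulumi.String("OWNER"),
					UserByEmail: pulumi.String("user@example.com"),
				},
			},
		})
		return err
	})
}
```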
high-church-15413
06/21/2022, 8:41 PM
pulumi import google-native:domains/v1:Registration projects/project-id/location/global/domains/name.com domainname.com
high-church-15413
06/21/2022, 8:42 PM