colossal-quill-8119
12/29/2023, 6:51 PM
gcp:secretmanager:SecretVersion (otp-secret-version):
error: deleting urn:pulumi:dev::trip-service::gcp:secretmanager/secretVersion:SecretVersion::otp-secret-version: 1 error occurred:
* Error when reading or editing SecretVersion: googleapi: Error 401: Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
Details:
[
{
"@type": "type.googleapis.com/google.rpc.ErrorInfo",
"domain": "googleapis.com",
"metadata": {
"email": "pulumi@ride-app-dev-2.iam.gserviceaccount.com",
"method": "google.cloud.secretmanager.v1.SecretManagerService.DestroySecretVersion",
"service": "secretmanager.googleapis.com"
},
"reason": "ACCOUNT_STATE_INVALID"
}
]
I’m using Pulumi ESC and OIDC.
brief-processor-79219
01/09/2024, 10:49 AM
tall-gigabyte-41781
01/15/2024, 9:24 AM
update to a gcp:certificateauthority:CaPool resource, but the update is not going away after applying it. The diff preview does not show any specific changes. Any suggestions on how I could investigate why Pulumi still wants to do an update?
stocky-finland-45016
01/15/2024, 10:02 PM
straight-cat-87033
01/17/2024, 11:04 PM
serviceAccount became serviceaccount, but trying to create an alias to point to the old type has no effect.
We are specifying the account name explicitly in this case (it is important to us that it does not have a suffix) so we cannot allow it to delete and recreate without downtime.
Has there been any resolution on this, or should we continue to wait for the upgrade? I don’t have a super easy way to create a minimal reproduction, but if this is a brand new issue I can attempt it.
aloof-leather-66709
01/31/2024, 7:35 AM
adorable-activity-71456
02/08/2024, 6:57 PM
const member = new DatasetIamMember(`${datasetId}-dataset-member`, {
datasetId: datasetId,
project: project,
member: `serviceAccount:${pubSubSaEmail}`,
role: "roles/bigquery.dataEditor",
})
This creates the member fine in pulumi (it’s in the resources), but it is not showing up in GCP in the dataset’s share role/principals.
I manually added it (via the GCP console) and successfully imported it as another resource just to see if the code I had was off, but it is the same. Any ideas?
aloof-leather-66267
02/09/2024, 4:26 PM
pulumi-gcp provider? The releases just list commits, but not what the impact is.
limited-lighter-76074
02/10/2024, 6:59 PM
network = gcp.compute.Network(
"network", project=project_id, auto_create_subnetworks=False
)
# Create a subnet within the VPC in europe-west1 region
subnet = gcp.compute.Subnetwork(
"vpc-private-subnet",
project=project_id,
region=main_settings.GCP_DEFAULT_REGION,
network=network.self_link,
private_ip_google_access=True,
stack_type="IPV4_ONLY",
ip_cidr_range="10.0.0.0/24",
)
vpc_subnet_connector = gcp.vpcaccess.Connector(
"vpc-conn",
# subnet=gcp.vpcaccess.ConnectorSubnetArgs(
# name=subnet.name
# ),
network=network.id,
ip_cidr_range="10.0.0.0/28",
machine_type="e2-micro",
min_instances=2,
max_instances=3,
region=subnet.region
)
This complains that it can't create the connector because:
Invalid IP CIDR range was provided. It conflicts with an existing subnetwork. Please delete the connector manually.
I don't understand, is creating a connector trying to create a new subnetwork?
I am trying to create a subnetwork with private_google_access so my cloudrun services can communicate with one another through internal traffic in the VPC.
I know that an ALB is probably a better option in terms of discovery, but I don't want any load balancing here.
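For context: the connector does provision its own dedicated subnet from ip_cidr_range (unless you pass the subnet argument instead), and that range must not overlap any existing subnetwork in the VPC. Here 10.0.0.0/28 sits inside the subnet's 10.0.0.0/24, which is exactly the reported conflict. A standalone check with Python's ipaddress module:

```python
import ipaddress

# The subnet created above and the range requested for the connector:
existing_subnet = ipaddress.ip_network("10.0.0.0/24")  # vpc-private-subnet
connector_range = ipaddress.ip_network("10.0.0.0/28")  # connector ip_cidr_range

print(existing_subnet.overlaps(connector_range))  # → True: this is the conflict

# A /28 outside the existing subnetworks avoids the error:
other_range = ipaddress.ip_network("10.8.0.0/28")
print(other_range.overlaps(existing_subnet))      # → False
```

Alternatively, the commented-out `subnet=gcp.vpcaccess.ConnectorSubnetArgs(name=subnet.name)` form works if you point it at a dedicated /28 subnetwork used only by the connector, and drop `network`/`ip_cidr_range`.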
What am I missing?
refined-pilot-45584
02/10/2024, 7:15 PM
little-energy-39499
02/20/2024, 2:58 PM
eager-noon-15510
02/22/2024, 9:56 AM
impersonateServiceAccount option on the GCP provider: I'm trying to create a GKE cluster and build a kubeconfig to use later with the kubernetes provider, and apparently the masterAuth has the original user creds, not the service account set in impersonateServiceAccount. What's the right way to deal with this in Pulumi?
curved-dream-12503
02/22/2024, 5:19 PM
gcp-developers@domain.tld
then I would expect the import command to be something like
pulumi import gcp:cloudidentity/group:Group my-resource-name "gcp-developers@domain.tld"
or
pulumi import gcp:cloudidentity/group:Group my-resource-name "gcp-developers"
Unfortunately this does not seem to work. Any suggestions on how this can be achieved?
curved-dream-12503
02/23/2024, 10:48 AM
groups/<ID>
The name can be found with
$ gcloud identity groups describe gcp-developers@domain.tld
additionalGroupKeys:
- id: ...
createTime: ...
description: ...
displayName: gcp-developers
groupKey:
id: gcp-developers@domain.tld
labels:
cloudidentity.googleapis.com/groups.discussion_forum: ''
name: groups/<SOME ID> <--------- This is what you need
parent: ...
updateTime: ...
So you need the value in the name: field. Maybe the docs can be updated to make this clearer.
powerful-fish-66594
02/26/2024, 8:31 AM
powerful-fish-66594
02/26/2024, 8:48 AM
const uptimeCheckConfigId = `my_uptime_check_config`;
const uptimeCheckConfig = new gcp.monitoring.UptimeCheckConfig(
uptimeCheckConfigId,
{
timeout: '20s',
period: '60s',
httpCheck: {
path: '/health/status',
port: 80,
},
monitoredResource: {
labels: {
host: ....myUrl,
},
type: 'uptime_url',
},
displayName: `Uptime check`,
},
{ provider },
);
const myAlertPolicy = new gcp.monitoring.AlertPolicy(
`my_uptime_alert_policy`,
{
notificationChannels: ...myChannelIds...,
combiner: 'OR',
conditions: [
{
displayName: `Uptime check FAILED`,
conditionThreshold: {
aggregations: [
{
alignmentPeriod: '420s',
crossSeriesReducer: 'REDUCE_COUNT_FALSE',
groupByFields: [
'resource.label.project_id',
'resource.label.host',
],
perSeriesAligner: 'ALIGN_NEXT_OLDER',
},
],
comparison: 'COMPARISON_GT',
duration: '60s',
filter: `resource.type = "uptime_url" AND metric.type = "monitoring.googleapis.com/uptime_check/check_passed" AND resource.labels.check_id = "${uptimeCheckConfigId}"`,
thresholdValue: 1,
trigger: {
count: 1,
},
},
},
],
displayName: `Alert Policy check`,
},
{ provider },
);
However, I get
Error creating AlertPolicy: googleapi: Error 400: The supplied filter does not specify a valid combination of metric and monitored resource descriptors. The query will not return any time series.
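One plausible cause, sketched below under stated assumptions: the filter interpolates `uptimeCheckConfigId`, which is the local variable holding the Pulumi resource name, not the server-assigned check ID (the `uptimeCheckId` output of the UptimeCheckConfig resource), and for the `uptime_url` resource type `check_id` is a *metric* label rather than a resource label. A pure helper that builds what the corrected filter would look like (the check ID value here is hypothetical):

```python
# Build an alert-policy filter for an uptime check. The check_id must be the
# server-assigned uptime-check ID, passed in from the resource's output, and
# it is referenced as a metric label for the uptime_url resource type.
def uptime_alert_filter(check_id: str) -> str:
    return (
        'resource.type = "uptime_url" '
        'AND metric.type = "monitoring.googleapis.com/uptime_check/check_passed" '
        f'AND metric.labels.check_id = "{check_id}"'
    )

print(uptime_alert_filter("my-check-id-abc123"))  # hypothetical ID
```

In the actual program the ID is an Output, so it has to flow through an apply/interpolation (e.g. `pulumi.interpolate` in TypeScript) rather than a plain template string.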
I have a working example in a manually (= via GUI) created GCP project. I pretty much copied the alert policy from there. Only difference: it says *metric*.labels.check_id there instead of *resource*.labels.check_id. However, if I do the same in Pulumi, then the Uptime Check and the Alert Policy do not seem to be connected at all. It says "No policy connected". I'm pretty confused by this. Does anyone have a clue?
refined-pilot-45584
02/26/2024, 10:08 AM
eager-wall-56838
02/26/2024, 5:03 PM
colossal-quill-8119
03/03/2024, 11:12 AM
pulumi:providers:gcp (default_7_11_2):
error: pulumi:providers:gcp resource 'default_7_11_2' has a problem: could not validate provider configuration: Invalid Attribute Combination. Attribute "credentials" cannot be specified when "access_token" is specified. Check `pulumi config get google-beta:accessToken`.
error: pulumi:providers:gcp resource 'default_7_11_2' has a problem: could not validate provider configuration: Invalid Attribute Combination. Attribute "access_token" cannot be specified when "credentials" is specified. Check `pulumi config get google-beta:credentials`.
best-zebra-72130
03/05/2024, 8:57 AM
pulumi import gcp:serviceaccount/account:Account default projects/{{project_id}}/serviceAccounts/{{email}}
Getting this error:
Diagnostics:
pulumi:pulumi:Stack (<project-name>):
error: preview failed
gcp:serviceaccount:Account (<service-account-name>):
error: Preview failed: Resource type 'gcp:serviceaccount/account:Account' not found.
damp-magazine-59707
03/07/2024, 4:57 PM
cluster.kubeConfigRaw, which does not appear in the docs. If you ask the AI to rewrite the program in Python (which we use), it references a kube_config output that is also not documented and, afaict, doesn't actually exist:
AttributeError: 'Cluster' object has no attribute 'kube_config'
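There is indeed no kube_config output on gcp.container.Cluster; the common pattern is to assemble a kubeconfig yourself from the cluster's endpoint and masterAuth.clusterCaCertificate outputs and hand it to the kubernetes provider. A hedged sketch (the Pulumi wiring in the trailing comment assumes typical resource names and is not from this thread); the builder itself is a pure function:

```python
# Assemble a GKE kubeconfig from cluster outputs. Authentication is delegated
# to the gke-gcloud-auth-plugin exec plugin, which ships with the gcloud SDK.
def build_kubeconfig(name: str, endpoint: str, ca_cert_b64: str) -> str:
    return f"""apiVersion: v1
kind: Config
clusters:
- name: {name}
  cluster:
    server: https://{endpoint}
    certificate-authority-data: {ca_cert_b64}
contexts:
- name: {name}
  context: {{cluster: {name}, user: {name}}}
current-context: {name}
users:
- name: {name}
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
"""

# In a Pulumi program (assumed names), roughly:
# kubeconfig = pulumi.Output.all(
#     cluster.name, cluster.endpoint,
#     cluster.master_auth.cluster_ca_certificate,
# ).apply(lambda args: build_kubeconfig(*args))
# k8s_provider = pulumi_kubernetes.Provider("gke", kubeconfig=kubeconfig)

print(build_kubeconfig("gke", "1.2.3.4", "QUJD"))
```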
so it seems the AI is hallucinating. Is this possible? Can I take outputs from the gcp.container.Cluster object and directly instantiate a k8s provider?
fresh-orange-37907
03/08/2024, 8:20 AM
thousands-knife-3009
03/11/2024, 3:49 AM
gcp_oslogin_api = pulumi_gcp.projects.Service(
f"oslogin_service_api",
service="oslogin.googleapis.com",
project=gcp_project.project_id,
disable_dependent_services=True,
disable_on_destroy=True,
opts=pulumi.ResourceOptions(
parent=gcp_project,
depends_on=[gcp_resource_manager_api]
)
)
my_name_my_co_ssh_key = pulumi_gcp.oslogin.SshPublicKey(
'my_name_public_key',
key='my-key',
user='my email',
project=gcp_project.project_id,
opts=pulumi.ResourceOptions(
parent=gcp_project,
depends_on=[
gcp_project,
gcp_oslogin_api
],
),
)
Here's the error I'm getting:
gcp:oslogin:SshPublicKey (my_name_public_key):
error: 1 error occurred:
* Error creating SSHPublicKey: googleapi: Error 403: Cloud OS Login API has not been used in project project_id_pulumi before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/oslogin.googleapis.com/overview?project=project_id_pulumi then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
Details:
[
{
"@type": "type.googleapis.com/google.rpc.Help",
"links": [
{
"description": "Google developers console API activation",
"url": "https://console.developers.google.com/apis/api/oslogin.googleapis.com/overview?project=project_id_pulumi"
}
]
},
{
"@type": "type.googleapis.com/google.rpc.ErrorInfo",
"domain": "googleapis.com",
"metadata": {
"consumer": "projects/project_id_pulumi",
"service": "oslogin.googleapis.com"
},
"reason": "SERVICE_DISABLED"
}
]
I have confirmed that oslogin is enabled in the project referenced by gcp_project.project_id
Clicking the link for https://console.developers.google.com/apis/api/oslogin.googleapis.com/overview?project=project_id_pulumi
takes me to a GCP page to enable the API in the project that is used to house the GCP storage bucket and the GCP service account that has privs to create and manage gcp_project.project_id
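That matches the `"consumer": "projects/project_id_pulumi"` detail in the ErrorInfo: the OS Login call is attributed to the quota project of the credentials, not to the project passed on the resource, unless the provider is told otherwise. A hedged sketch of the stack config that routes quota/enablement checks to the target project (the placeholder is hypothetical; keys come from the pulumi-gcp provider's user-project-override support):

```yaml
# Pulumi.<stack>.yaml — attribute API calls to the target project
config:
  gcp:userProjectOverride: "true"
  gcp:billingProject: <target-project-id>  # project where oslogin API is enabled
```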
thousands-knife-3009
03/11/2024, 7:24 PM
brash-hairdresser-60389
03/11/2024, 10:04 PM
...
const name = args.tag
? `sa-${args.name}-${args.tag.substring(0, 5)}`
: `sa-${args.name}`;
this.resourceSAst = new gcp.serviceaccount.Account(name, {
accountId: name,
displayName: `${args.name} ${args.tag} Service Account`,
});
this.resourceSAkey = new gcp.serviceaccount.Key(`sa-key-${args.name}`, {
serviceAccountId: this.resourceSAst.name,
publicKeyType: 'TYPE_X509_PEM_FILE',
});
this.privateKey = this.resourceSAkey.privateKey.apply<string>(
(val) =>
HERE --> JSON.parse(Buffer.from(val, 'base64').toString('utf8')).private_key,
);
...
The value coming out of the apply is undefined, with no errors and nothing else to go on.
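The transform itself checks out when isolated: serviceaccount.Key's privateKey is the base64-encoded JSON credentials file, so decoding and pulling out private_key is the right shape (note the output is also a secret, so outside an actual update it renders as [secret]/undefined rather than the value). A standalone sketch of the same logic, exercised with fake data:

```python
import base64
import json

# Decode a serviceaccount.Key privateKey value: base64 -> JSON credentials
# file -> the embedded private_key field.
def extract_private_key(private_key_b64: str) -> str:
    creds = json.loads(base64.b64decode(private_key_b64).decode("utf-8"))
    return creds["private_key"]

# Fake credentials payload just to exercise the logic:
fake = base64.b64encode(json.dumps({"private_key": "PEM..."}).encode()).decode()
print(extract_private_key(fake))  # → PEM...
```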
Any idea? Any help would be more than appreciated.
thousands-knife-3009
03/12/2024, 7:30 PM
early-elephant-96200
03/13/2024, 7:38 PM
def monitoring(projectInfo, monitoringProjectID: str):
gcp.monitoring.MonitoredProject(
resource_name="Monitored Project",
name=projectInfo.project_id.apply(lambda id: f"projects/{monitoringProjectID}/monitoredProjects/{id}"),
metrics_scope=f"projects/{monitoringProjectID}",
)
stocky-finland-45016
03/13/2024, 10:37 PM
pulumi preview on an existing GCP stack that has a gcp:cloudrun/domainMapping:DomainMapping resource.
Diagnostics:
gcp:cloudrun:DomainMapping (proxy-web-service-domain-mapping):
error: unmarshaling urn:pulumi:prod::thecodinglove-proxy-web::gcp:cloudrun/domainMapping:DomainMapping::proxy-web-service-domain-mapping's instance state: internal: Pulumi property 'terraformLabels' mapped non-uniquely to Terraform attribute 'terraform_labels' (duplicates Pulumi key 'pulumiLabels')
Any ideas what's going on? I'll 🧵 details.
thousands-knife-3009
03/15/2024, 7:26 PM
delightful-monkey-90700
03/15/2024, 10:21 PM
deleteBeforeReplace option). During one Pulumi run, the VMs were deleted and then another resource failed to be deployed (Cloud Build, build failed -- sometimes common), and during all subsequent runs Pulumi complains that it cannot replace those Virtual Machines because they do not exist (because it deleted them):
error: deleting urn:pulumi:staging::x::gcp:compute/network:Network$gcp:compute/subnetwork:Subnetwork$gcp:compute/instance:Instance::x: 1 error occurred:
* Error deleting instance: googleapi: Error 404: The resource 'projects/x-c5913e9' was not found, notFound
error: deleting urn:pulumi:staging::keeta-consumer-production-cloud::gcp:compute/network:Network$gcp:compute/subnetwork:Subnetwork$gcp:compute/instance:Instance::y: 1 error occurred:
* Error deleting instance: googleapi: Error 404: The resource 'y-2506283' was not found, notFound
error: deleting urn:pulumi:staging::keeta-consumer-production-cloud::gcp:compute/network:Network$gcp:serviceAccount/account:Account$gcp:compute/instance:Instance::z: 1 error occurred:
* Error deleting instance: googleapi: Error 404: The resource 'z-c86f123' was not found, notFound
It seems like a pulumi refresh should help, but it does not:
~ gcp:compute:Instance z refreshing (0s)
...
The VM definitely does not exist within Google Cloud's console either
How can I move forward without directly manipulating the stack state?