google-cloud
  • a

    ancient-rose-25146

    05/11/2022, 12:50 PM
    Hi, I have a use case that requires different service accounts for different environments, and I need to apply permissions to those service accounts. The issue is that when I apply a binding, it overwrites the bindings for the other environment. Is there a way to only add members without overwriting the current bindings? For example, this is how I am currently doing it:
    const externalDnsGCPServiceAccount = new gcpNative.iam.v1.ServiceAccount(
      "external-dns-gcp-sa",
      {
        accountId: `external-dns-${environment}`,
      }
    );
    
    new gcp.projects.IAMBinding("external-dns-dns-admin-rb", {
      project: project,
      role: "roles/dns.admin",
      members: [
        externalDnsGCPServiceAccount.email.apply((s) => `serviceAccount:${s}`),
      ],
    });
    p
    • 2
    • 8
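One additive option (a sketch, not from the thread): `gcp.projects.IAMMember` attaches a single member to a role without touching the role's other members, whereas `IAMBinding` is authoritative for that role and replaces whatever is there. Names mirror the snippet above; `environment` and `project` are placeholders.

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import * as gcpNative from "@pulumi/google-native";

const environment = "dev";       // placeholder
const project = "my-project";    // placeholder

const externalDnsGCPServiceAccount = new gcpNative.iam.v1.ServiceAccount(
  "external-dns-gcp-sa",
  { accountId: `external-dns-${environment}` }
);

// IAMMember adds exactly one member to the role; members granted elsewhere
// (e.g. by another environment's stack) are left untouched.
new gcp.projects.IAMMember("external-dns-dns-admin-member", {
  project: project,
  role: "roles/dns.admin",
  member: externalDnsGCPServiceAccount.email.apply(
    (s) => `serviceAccount:${s}`
  ),
});
```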
  • g

    gifted-cat-49297

    05/11/2022, 2:55 PM
    Does anyone have an end-to-end example of setting up API Gateway? (API + Gateway + Config, etc.)
    • 1
    • 1
  • q

    quick-wolf-8403

    05/12/2022, 12:54 AM
    Hi folks! I have set up a Cloud Run service following the examples here: https://www.pulumi.com/registry/packages/gcp/api-docs/cloudrun/service/. I would like it to update (redeploy) on a
    pulumi up
    if the docker image has changed. Do I need to change the value of the
    image
    string to trigger this? Or will it change if the tag is pointing to a new image? Or do I need to extract the SHA and pass that in?
    g
    • 2
    • 5
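On the Cloud Run question above: the provider only sees a change when the `image` string itself changes, so a tag silently pointing at a new image will not trigger a new revision. One common approach (a sketch; the project, region, and config key are placeholders) is to resolve the digest in CI and reference the image by digest:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

const config = new pulumi.Config();
// Resolved outside Pulumi, e.g. via `docker inspect` or the registry API in CI.
const imageDigest = config.require("imageDigest"); // e.g. "sha256:abc..."

const service = new gcp.cloudrun.Service("app", {
  location: "us-central1",
  template: {
    spec: {
      containers: [{
        // The digest changes whenever the image content changes, so the
        // string diff triggers a new Cloud Run revision on `pulumi up`.
        image: pulumi.interpolate`gcr.io/my-project/app@${imageDigest}`,
      }],
    },
  },
});
```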
  • f

    future-window-78560

    05/15/2022, 4:10 AM
    Hey team! How can we create a GCP project through Pulumi with the same PROJECT_ID on different GCP accounts? This is really important for me, since the resources I am creating through Pulumi IaC are used in the CI/CD pipeline, so I need a fixed PROJECT_ID to avoid manual changes and keep CI/CD deployments smooth.
  • k

    kind-island-70054

    05/16/2022, 2:35 PM
    Hello ! I’m trying to deploy the firestore rules file I have through Pulumi. I use
    new gcp.firebaserules.Ruleset(
        "firestore-rules",
        {
          project: gcp.config.project,
          source: {
            files: [
              {
                content: fs
                  .readFileSync(path.resolve(__dirname, "../../firestore.rules"))
                  .toString(),
                name: "firestore.rules",
              },
            ],
          },
        },
        { dependsOn: services }
      );
    I have enabled the firebaserules service this way:
    new gcp.projects.Service("firebaserules", {
      service: "firebaserules.googleapis.com",
    });
    But I receive a SERVICE_DISABLED error when I run pulumi up:
    [
          {
            "@type": "type.googleapis.com/google.rpc.ErrorInfo",
            "domain": "googleapis.com",
            "metadata": {
              "consumer": "projects/764086053860",
              "service": "firebaserules.googleapis.com"
            },
            "reason": "SERVICE_DISABLED"
          }
        ]
    Weirdly, that project number is not mine… It also gives me this error message, but I don't think it's related to my problem, is it?
    Error creating Ruleset: googleapi: Error 403: Your application has authenticated using end user credentials from the Google Cloud SDK or Google Cloud Shell which are not supported by the firebaserules.googleapis.com. We recommend configuring the billing/quota_project setting in gcloud or using a service account through the auth/impersonate_service_account setting. For more information about service accounts and how to use them in your application, see https://cloud.google.com/docs/authentication/. If you are getting this error with curl or similar tools, you may need to specify 'X-Goog-User-Project' HTTP header for quota and billing purposes. For more information regarding 'X-Goog-User-Project' header, please check https://cloud.google.com/apis/docs/system-parameters.
    Is there an additional service to enable that I don’t know about maybe? Has anybody encountered a similar error?
    w
    • 2
    • 3
  • h

    high-church-15413

    05/16/2022, 5:10 PM
    Hello! How do we enable the dataplane in GKE Autopilot? I am using the GCP classic provider but did not see a config option for that.
    n
    • 2
    • 2
  • f

    future-window-78560

    05/16/2022, 5:43 PM
    Hi there! How can we download a GCP service account key on creation through Pulumi?
    g
    • 2
    • 1
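On downloading a service account key: a sketch assuming a classic-provider service account. `gcp.serviceaccount.Key` exposes the key material as the `privateKey` output (base64-encoded JSON key file), which can be exported or written to disk; account names are placeholders.

```typescript
import * as gcp from "@pulumi/gcp";

const sa = new gcp.serviceaccount.Account("ci-sa", {
  accountId: "ci-sa", // placeholder account id
});

const key = new gcp.serviceaccount.Key("ci-sa-key", {
  serviceAccountId: sa.name,
});

// privateKey is the base64-encoded JSON key file; the provider marks it secret.
export const keyJson = key.privateKey.apply((k) =>
  Buffer.from(k, "base64").toString("utf8")
);
```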
  • w

    wet-soccer-72485

    05/18/2022, 8:23 PM
    Has anyone seen
    UptimeCheckConfig
    be replaced on each Pulumi preview and update, regardless of whether there are changes?
    p
    • 2
    • 4
  • f

    future-window-78560

    05/19/2022, 10:29 AM
    Team, is there any guide on how to choose between Native mode and Datastore mode in GCP using Pulumi? I am creating Datastore resources through Pulumi, and this has to be fully automated. Currently the Datastore index is only created after I manually enable the mode and create the Datastore database from the GCP console. How can I avoid this manual step?
  • c

    clever-king-43153

    05/19/2022, 10:04 PM
    Hi everyone! Looking to find out how to provision a GKE cluster with spot instances. I've looked through the docs but can't seem to find it in the API. I've noticed it is documented in the AWS SDK, so I just want to know if it's possible for GCP as well. Thanks!
    h
    • 2
    • 2
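On GKE spot instances: in the classic provider this is a flag on the node config (a sketch; it assumes a provider version that exposes `spot` — older versions only have `preemptible` — and the cluster name, zone, and machine type are placeholders):

```typescript
import * as gcp from "@pulumi/gcp";

const spotPool = new gcp.container.NodePool("spot-pool", {
  cluster: "my-cluster",        // placeholder cluster name
  location: "us-central1-a",
  nodeCount: 2,
  nodeConfig: {
    machineType: "e2-standard-4",
    spot: true,                 // request spot VMs for this pool
  },
});
```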
  • a

    ambitious-school-26690

    05/25/2022, 9:34 AM
    Hey, I’m having trouble dealing with dependencies when fetching resources. For example, getting a secret with
    gcp.secretmanager.getSecretVersion
    will fail when the service
    secretmanager.googleapis.com
    is not enabled yet. I can manage dependencies for resources but not for data sources. How do I handle conditionally fetching the secret only once the API is enabled?
    q
    • 2
    • 1
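One workaround sketch for the data-source dependency above: chain the invoke off an output of the `gcp.projects.Service` resource, so the lookup only runs after the API has been enabled (the secret name is a placeholder):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

const secretsApi = new gcp.projects.Service("secretmanager", {
  service: "secretmanager.googleapis.com",
});

// apply() defers the invoke until the Service resource has been created,
// giving the data source an implicit dependency on the enabled API.
const version = secretsApi.service.apply(() =>
  gcp.secretmanager.getSecretVersion({ secret: "my-secret" })
);

export const secretValue = pulumi.secret(
  version.apply((v) => v.secretData)
);
```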
  • m

    modern-thailand-30846

    05/26/2022, 10:11 PM
    Hi everyone! I’m working on creating a GKE cluster on GCP, trying to be as granular as possible. However, I was only able to specify node count, machine type and master version. Is there a way to be more granular, such as specify things like max pods per pool, node pool CIDR range or cluster being zonal or regional? Thanks!!
    h
    • 2
    • 2
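For finer-grained GKE control, the classic provider exposes most of these knobs directly: a zonal vs. regional cluster is chosen via `location`, pod/service ranges via `ipAllocationPolicy`, and per-pool pod density via `maxPodsPerNode`. A sketch, with zone, sizes, and CIDR masks as placeholders:

```typescript
import * as gcp from "@pulumi/gcp";

const cluster = new gcp.container.Cluster("gke", {
  location: "us-central1-a",      // a zone => zonal; "us-central1" => regional
  removeDefaultNodePool: true,    // manage pools as separate resources
  initialNodeCount: 1,
  ipAllocationPolicy: {
    clusterIpv4CidrBlock: "/16",  // pod range
    servicesIpv4CidrBlock: "/22", // service range
  },
});

const pool = new gcp.container.NodePool("primary", {
  cluster: cluster.name,
  location: cluster.location,
  nodeCount: 3,
  maxPodsPerNode: 64,             // per-pool pod density
  nodeConfig: { machineType: "e2-standard-4" },
});
```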
  • k

    kind-keyboard-17263

    05/27/2022, 2:30 PM
    Hi ! I have to create a private connection (to then deploy a cloudsql instance).
    vpc = gcp.compute.Network(
        "default",
        name="default",
        project=project,
        auto_create_subnetworks=False,
        routing_mode="GLOBAL")
    
    ipv4_address = gcp.compute.GlobalAddress(
      "ipv4-address",
      address="192.168.3.1",
      description="IP address range to be used for private connection",
      network=vpc.id,
      project=project,
      address_type="INTERNAL",
      purpose="PRIVATE_SERVICE_CONNECT",  # Correct ?
    )
    
    private_vpc_peering = gcp.servicenetworking.Connection(
      "private-vpc-peering",
      network="default",
      service="servicenetworking.googleapis.com",
      reserved_peering_ranges=[ipv4_address.name]
    )
    When I execute this, I see this very cryptic error:
    * Failed to find Service Networking Connection, err: Failed to retrieve network field value, err: project: required field is not set
    I don't really understand the meaning of the error 😅 ! Thanks for help
    o
    • 2
    • 14
  • t

    thousands-jelly-11747

    05/28/2022, 12:06 AM
    Hi guys, sorry for the question, but I changed teams and I'm having problems with Pulumi. Can I update this?
  • t

    thousands-jelly-11747

    05/28/2022, 12:07 AM
    pulumi:providers:gcp default_3_25_0
  • t

    thousands-jelly-11747

    05/28/2022, 12:08 AM
    I would like to update that version of the provider but I don't know how. I updated requirements.txt, but it gives me this error:
  • t

    thousands-jelly-11747

    05/28/2022, 12:10 AM
    error: could not load plugin for gcp provider 'urn:pulumi:sandbox::[app-name]::pulumi:providers:gcp::default_3_25_0': no resource plugin 'pulumi-resource-gcp' found in the workspace at version v3.25.0 or on your $PATH, install the plugin using
    pulumi plugin install resource gcp v3.25.0
    g
    • 2
    • 1
  • o

    orange-crowd-9665

    06/01/2022, 9:34 AM
    Hello! On GCP, I'm creating Pub/Sub schemas (with gRPC), and then Pub/Sub topics that use those schemas (with the ResourceOptions depends_on parameter). A problem arises when I update my .proto files: I don't know the inner workings of Pulumi, but every time I update a schema, the Pub/Sub topics that rely on it end up in a "Deleted Schema" state. What are the best practices for updating Pub/Sub schemas?
    g
    • 2
    • 1
  • q

    quick-wolf-8403

    06/02/2022, 3:45 PM
    Here's my question: in Pulumi, can I get a list of all services running in a project? My idea is that I want the "main" service running, plus semi-short-lived services created automatically when someone opens a PR. I have it creating a new service by simply appending the PR number (passed in from GitHub) to the service name. That works for one PR. But I'd like to use Pulumi to keep track of all currently open PRs, and tear down services when their PRs close. (Easy enough to do with
    gcloud
    , but...)
    a
    g
    • 3
    • 7
  • b

    broad-parrot-2692

    06/04/2022, 4:13 AM
    Curious if anyone is using GCP's workload identity federation to authenticate their Pulumi service account in CI.
  • p

    prehistoric-activity-61023

    06/05/2022, 9:15 PM
    I had to restore the disk from the snapshot and after
    pulumi refresh
    , I got a lot of complaints. The new disk created from the snapshot, even though it had the same name, differs from the original one because of
    snapshot
    and
    image
    fields values. My question is: is it a legit scenario in GCP where I should use
    ignore_changes
    on Disk resource?
    • 1
    • 1
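If the restored disk is otherwise the one you want, `ignoreChanges` on those two fields is a reasonable escape hatch (a sketch; the disk arguments are placeholders):

```typescript
import * as gcp from "@pulumi/gcp";

const disk = new gcp.compute.Disk("data-disk", {
  zone: "us-central1-a", // placeholder
  size: 100,             // placeholder, GB
}, {
  // Stop diffing fields that legitimately changed during the snapshot restore.
  ignoreChanges: ["snapshot", "image"],
});
```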
  • b

    broad-parrot-2692

    06/05/2022, 11:55 PM
    Man I am much happier working in pulumi than terraform directly for RBAC related stuff
    💜 2
    😄 2
  • b

    breezy-lifeguard-15721

    06/07/2022, 12:30 AM
    Hey all, trying to use the flex template resource: https://www.pulumi.com/registry/packages/gcp/api-docs/dataflow/flextemplatejob/. Pulumi creates the resource just fine. It also updates the job fine the first time, but the second time it gives an error:
    has terminated with state "JOB_STATE_UPDATED"
    . Dataflow will create a new jobId; it seems like Pulumi state keeps the old job. From what I can see the provider has fixed this, but I'm still getting the above result. https://github.com/pulumi/terraform-provider-google-beta/blob/18e8f0589864f98ea7bc[…]015f6935eb64/google-beta/resource_dataflow_flex_template_job.go
    g
    • 2
    • 5
  • k

    kind-keyboard-17263

    06/07/2022, 8:21 AM
    Hi folks! I have a shell script that triggers a Pulumi run, provisioning some input data (like the stack name and a few other things). The Pulumi program itself creates the environment and spins up a GKE cluster on GCP. Now, some data in the Pulumi context needs to be dynamic, like the `ip_cidr_range` of the subnet I need (which is currently statically provided). Is there a programmatic way to select a valid address range, or should I write a script that tries ranges until one is successfully created?
    g
    • 2
    • 2
  • q

    quick-wolf-8403

    06/08/2022, 5:13 PM
    Hello! I'm moving our Cloud Run resources to pulumi from being deployed with Cloud Build.
    pulumi update
    keeps getting hung up on pending operations. Do I need to remove the existing services and let Pulumi bring them up, starting clean?
    a
    • 2
    • 6
  • a

    ancient-rose-25146

    06/13/2022, 7:45 PM
    Hi, I am trying to add another node pool to an existing GKE cluster but when I run pulumi, it gives me the error
    error sending request: googleapi: Error 400: Must provide an update.
    Here is the change: Initial:
    const cluster = new gcpNative.container.v1.Cluster(clusterNameUS, {
      name: clusterNameUS + `-${environment}`,
      project: project,
      location: "us-west2-b",
      releaseChannel: {
        channel: "REGULAR",
      },
      initialClusterVersion: "1.21.9-gke.1002",
      workloadIdentityConfig: {
        workloadPool: workloadPool,
      },
      networkConfig: {},
      ipAllocationPolicy: {
        useIpAliases: true,
      },
      nodePools: [
        {
          config: nodeConfig,
          initialNodeCount: 3,
          name: `${environment}-us`,
          autoscaling: {
            enabled: true,
            maxNodeCount: 5,
            minNodeCount: 3,
          },
        },
        {
          config: gpuNodeConfig,
          initialNodeCount: 1,
          name: `${environment}-us-gpu`,
          autoscaling: {
            enabled: true,
            maxNodeCount: 5,
            minNodeCount: 1,
          },
        },
      ],
    });
    updated:
    const cluster = new gcpNative.container.v1.Cluster(clusterNameUS, {
      name: clusterNameUS + `-${environment}`,
      project: project,
      location: "us-west2-b",
      releaseChannel: {
        channel: "REGULAR",
      },
      initialClusterVersion: "1.21.9-gke.1002",
      workloadIdentityConfig: {
        workloadPool: workloadPool,
      },
      networkConfig: {},
      ipAllocationPolicy: {
        useIpAliases: true,
      },
      nodePools: [
        {
          config: nodeConfig,
          initialNodeCount: 3,
          name: `${environment}-us`,
          autoscaling: {
            enabled: true,
            maxNodeCount: 5,
            minNodeCount: 3,
          },
        },
        {
          config: gpuNodeConfig,
          initialNodeCount: 1,
          name: `${environment}-us-gpu`,
          autoscaling: {
            enabled: true,
            maxNodeCount: 5,
            minNodeCount: 1,
          },
        },
        {
          config: a100NodeConfig,
          initialNodeCount: 1,
          name: `${environment}-us-a100`,
          autoscaling: {
            enabled: true,
            maxNodeCount: 5,
            minNodeCount: 1,
          },
        },
      ],
    });
    The only difference is the addition of the a100 pool.
    g
    • 2
    • 7
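One way to sidestep in-place cluster updates (a sketch, swapping to the classic provider's standalone node pool resource, whose `nodeConfig` field names differ from the native provider's `config`): the GKE API only allows adding or removing pools one at a time, which a separate `gcp.container.NodePool` resource maps to naturally, so adding the a100 pool becomes a plain resource creation. Cluster name and machine type below are placeholders.

```typescript
import * as gcp from "@pulumi/gcp";

// Adding a pool is a resource creation rather than an update to the Cluster.
const a100Pool = new gcp.container.NodePool("us-a100", {
  cluster: "my-cluster",          // placeholder; reference your cluster's name
  location: "us-west2-b",
  initialNodeCount: 1,
  autoscaling: { minNodeCount: 1, maxNodeCount: 5 },
  nodeConfig: {
    machineType: "a2-highgpu-1g", // placeholder machine type
    guestAccelerators: [{ type: "nvidia-tesla-a100", count: 1 }],
  },
});
```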
  • c

    cold-carpenter-61763

    06/14/2022, 8:27 PM
    Hi! I'm trying to create an eventarc trigger based on a filename match in google cloud storage. (
    --event-filters-path-pattern
    in the GCP documentation). I don't understand how to specify that in Pulumi, though. The docs mention the operator might be
    match-path-pattern
    , but I don't know what to put for
    attribute
    and
    value
    . E.g. here's my not-working config:
    matchingCriterias: [
                    {
                        attribute: "type",
                        value: "google.cloud.storage.object.v1.finalized",
                    },
                    {
                        attribute: "bucket",
                        value: bucketName
                    },
                    {
                        attribute: "resourceName",
                        value: "/**/metadata.yaml",
                        operator: "match-path-pattern"
    
                    }
                ],
    g
    • 2
    • 9
  • m

    melodic-greece-12878

    06/15/2022, 8:17 AM
    Hi all. Is there a (flag to enable a) smart deployment pattern in Pulumi that allows changing a subnet/ranges without manually destroying the instances on that subnet? If I now decide to change a subnet, Pulumi tries to create a new subnet, but the creation is blocked because the old one is "in use" (e.g. I might have instances on that subnet, and during provisioning I get GCP API errors, which makes sense). Thinking out loud, the manual way would be to create the new subnet, migrate all existing VMs to use it, and then remove the old one afterwards. It doesn't work out of the box (not necessarily expecting that :)), just wondering if there's a way to handle this in Pulumi? With ResourceOptions you can trigger a replacement based on local properties and adjust behaviour, e.g. delete_before_replace, but I don't see how to do that in conjunction with other resources.
  • a

    ancient-rose-25146

    06/16/2022, 7:06 PM
    Following up on my last issue with being unable to update an existing cluster's node pools from within Pulumi: is it possible to update something from the console/CLI and then import that change into Pulumi? As it stands, the only way to continue using Pulumi is to completely delete the current cluster and recreate it with the new node pool configuration, which is not something I can do. Any help would be appreciated.
    b
    n
    • 3
    • 9
  • b

    blue-leather-96987

    06/18/2022, 10:18 PM
    Has anyone faced the following issue when dealing with BigQuery?
    bigquery.DatasetAccessArray does not implement bigquery.DatasetAccessTypeArrayInput (missing method ToDatasetAccessTypeArrayOutput)
    • 1
    • 1
    It looks like it should be a bigquery.DatasetAccessTypeArray type, but pulumi import was generating bigquery.DatasetAccessArray instead, and that's what's in the docs as well. Has anyone seen this before?