Powered by Linen
google-cloud
  • helpful-airport-41202 (08/28/2021, 4:33 AM)
    I mean I found this: https://www.pulumi.com/docs/reference/pkg/google-native/cloudresourcemanager/v3/folder/
  • helpful-airport-41202 (08/28/2021, 4:33 AM)
    But I don't think that is referring to the folders within a Google Cloud Storage bucket.
  • helpful-airport-41202 (08/28/2021, 4:36 AM)
    Hmm, this SO answer says Google Cloud does not have actual "folders", and that the "folders" shown in the Cloud Storage UI are just visual groupings based on object naming patterns: https://stackoverflow.com/a/38417397/2441655
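    If a visible "folder" is still wanted (e.g. so it shows up in the console before any real objects exist), it can be faked the same way the UI does, with a zero-length object whose name ends in a slash. A minimal sketch; the bucket and folder names are assumptions:

    ```typescript
    import * as gcp from "@pulumi/gcp";

    // GCS has no real folders; an (almost) empty object whose name ends in "/"
    // is what the Cloud Storage UI renders as one.
    const folderMarker = new gcp.storage.BucketObject("reports-folder", {
      bucket: "my-bucket",   // assumed existing bucket name
      name: "reports/",      // trailing slash => shown as a folder in the console
      content: " ",          // content or source is required; near-empty placeholder
    });
    ```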
  • lemon-wire-69305 (08/30/2021, 12:16 AM)
    Hi All, I'm deploying code to AppEngine as follows:
    const appEngineWww = new gcp.appengine.StandardAppVersion("appengine-www", {
      versionId: pulumi.interpolate`v${wwwCurrentVersion}`,
      service: "www",
      deleteServiceOnDestroy: false,
      runtime: "go115",
      deployment: {
        zip: {
          sourceUrl: pulumi.interpolate`https://storage.googleapis.com/${bucketDeploys.name}/www/latest.zip`,
        },
      },
      envVariables: {
        DEPLOY_TIME: deployTime
      },
      automaticScaling: {
        maxConcurrentRequests: 10,
        minIdleInstances: 0,
        maxIdleInstances: 3,
        minPendingLatency: "1s",
        maxPendingLatency: "5s",
        standardSchedulerSettings: {
          targetCpuUtilization: 0.5,
          targetThroughputUtilization: 0.75,
          minInstances: 0,
          maxInstances: 10,
        },
      },
    });
    This works; however, I have to manually migrate traffic to the new version. Is there a way to have Pulumi migrate traffic to the new AppEngine version automatically?
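    A hedged sketch of one possible answer: the provider exposes gcp.appengine.EngineSplitTraffic (wrapping the underlying google_app_engine_service_split_traffic resource), which can shift traffic to the new version. The allocation format below is an assumption, and appEngineWww refers to the StandardAppVersion defined in the snippet above:

    ```typescript
    import * as gcp from "@pulumi/gcp";

    // `appEngineWww` is the StandardAppVersion from the snippet above.
    // `migrateTraffic` requests a gradual migration rather than an instant cut-over.
    const split = new gcp.appengine.EngineSplitTraffic("www-split", {
      service: appEngineWww.service,
      migrateTraffic: true,
      split: {
        shiftEnabled: true,
        // keys are version ids, values are traffic fractions as strings
        allocations: appEngineWww.versionId.apply(v => ({ [v]: "1" })),
      },
    });
    ```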
  • modern-napkin-96707 (08/31/2021, 12:51 PM)
    Hi all, is anyone deploying Airflow DAGs into Cloud Composer using Pulumi? It's easy enough to push the local DAG file to the Composer environment's DAG bucket, but Airflow tasks are often dependent on resource outputs, and I don't know how to build the DAG file at pulumi up time so as to embed the resulting output values into the DAG before uploading. I could create a dynamic provider to achieve this, but I was wondering if there's an easier way.
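    One pattern that avoids a dynamic provider: render the DAG inside an apply over the outputs it needs, then upload the rendered text as the BucketObject content. A sketch; the template file, topic, dataset, and bucket names are all made up:

    ```typescript
    import * as fs from "fs";
    import * as gcp from "@pulumi/gcp";
    import * as pulumi from "@pulumi/pulumi";

    // Hypothetical upstream resources whose outputs the DAG needs.
    const topic = new gcp.pubsub.Topic("events");
    const dataset = new gcp.bigquery.Dataset("analytics", { datasetId: "analytics" });

    // Render the DAG at `pulumi up` time: read a local template and substitute
    // the resolved output values into it.
    const dagSource = pulumi.all([topic.name, dataset.datasetId]).apply(([t, d]) =>
      fs.readFileSync("dags/my_dag.py.tmpl", "utf-8")
        .replace("{{TOPIC}}", t)
        .replace("{{DATASET}}", d));

    // Upload the rendered DAG into the Composer environment's DAG bucket.
    new gcp.storage.BucketObject("my-dag", {
      bucket: "my-composer-dag-bucket", // assumed; can be looked up from the environment
      name: "dags/my_dag.py",
      content: dagSource,
    });
    ```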
  • plain-potato-84679 (09/01/2021, 2:01 PM)
    Hi! I managed to set up a GCP Cloud Run instance with the help of the Pulumi Python SDK. What I couldn't find out is the equivalent of this in Pulumi: I run the following in the Google Cloud Shell and then use the image in my Pulumi script. How can I implement the whole process in my script?
    docker pull hasura/graphql-engine
    docker tag docker.io/hasura/graphql-engine:latest gcr.io/my-gcp-project/hasura
    docker push gcr.io/my-gcp-project/hasura
    gcloud container images list-tags gcr.io/my-gcp-project/hasura
    Thanks for your help!
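    One way to fold those shell steps into the program, assuming the @pulumi/command provider is available, is to run the same docker commands during pulumi up. A sketch (TypeScript; the same resource exists in the Python SDK as pulumi_command); image names mirror the shell steps above:

    ```typescript
    import * as command from "@pulumi/command";

    // Pull the upstream image, retag it for GCR, and push it, as one create step.
    const mirrorImage = new command.local.Command("mirror-hasura", {
      create: [
        "docker pull hasura/graphql-engine:latest",
        "docker tag docker.io/hasura/graphql-engine:latest gcr.io/my-gcp-project/hasura",
        "docker push gcr.io/my-gcp-project/hasura",
      ].join(" && "),
    });
    ```

    Downstream resources (e.g. the Cloud Run service) can then take `{ dependsOn: [mirrorImage] }` so the push completes before the image is referenced.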
  • average-ability-11166 (09/01/2021, 9:53 PM)
    Hi all. Is there a way to create credentials for an oauth provider via the Pulumi API? It's under APIs & Services > Credentials in GCP console.
  • brash-cricket-30050 (09/09/2021, 11:25 AM)
    Hi, I'm trying to use an image hosted on GCP Container Registry with Pulumi on a local k8s cluster. However,
    pulumi up
    fails to pull the hosted image due to an authorization issue:
    [ErrImagePull] unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
    Pulling the hosted image from the Docker command line works fine though, so I think I've done the things hinted at in the doc link in the error message. I suspect the solution lies not so much in Pulumi as in my local k8s cluster (I'm using the cluster provided by Docker Desktop, btw). Any suggestions appreciated; I've been running in circles for a while without figuring it out.
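    A local cluster has no GCP metadata server, so the kubelet needs an explicit image-pull secret even when the Docker CLI is already authenticated. A sketch under the assumption that a service-account key with registry read access is available; the helper below builds the .dockerconfigjson payload, and the key file name is made up:

    ```typescript
    // Build a kubernetes.io/dockerconfigjson payload for GCR from a
    // service-account key, using the documented "_json_key" username scheme.
    function gcrDockerConfig(serviceAccountKeyJson: string): string {
      return JSON.stringify({
        auths: {
          "gcr.io": {
            username: "_json_key",
            password: serviceAccountKeyJson,
            auth: Buffer.from(`_json_key:${serviceAccountKeyJson}`).toString("base64"),
          },
        },
      });
    }

    // Then, with @pulumi/kubernetes:
    //   const secret = new k8s.core.v1.Secret("gcr-pull-secret", {
    //     type: "kubernetes.io/dockerconfigjson",
    //     stringData: { ".dockerconfigjson": gcrDockerConfig(fs.readFileSync("gcr-key.json", "utf-8")) },
    //   });
    // and reference it from the pod spec:
    //   imagePullSecrets: [{ name: secret.metadata.name }]
    ```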
  • helpful-van-82564 (09/09/2021, 2:58 PM)
    Hi, I'm trying to create a custom role whose end result is the ability to administer storage and service accounts, however it falls over:
    permissions=[
    "iam.serviceAccountAdmin",
    "storage.objectAdmin",
    ],
    with:
    googleapi: Error 400: Permission storage.objectAdmin is not valid., badRequest
    What am I doing wrong?
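    The 400 is most likely because storage.objectAdmin and iam.serviceAccountAdmin are role names, not permissions; a custom role takes the individual permissions those roles contain (e.g. storage.objects.get). A sketch with gcp.projects.IAMCustomRole; the permission list below is illustrative, not exhaustive:

    ```typescript
    import * as gcp from "@pulumi/gcp";

    // Custom roles are built from individual permissions, not from role names.
    const adminRole = new gcp.projects.IAMCustomRole("storage-sa-admin", {
      roleId: "storageSaAdmin",
      title: "Storage and Service Account Admin",
      permissions: [
        "storage.objects.create",
        "storage.objects.delete",
        "storage.objects.get",
        "storage.objects.list",
        "storage.objects.update",
        "iam.serviceAccounts.create",
        "iam.serviceAccounts.delete",
        "iam.serviceAccounts.get",
        "iam.serviceAccounts.list",
        "iam.serviceAccounts.update",
      ],
    });
    ```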
  • green-dentist-53234 (09/16/2021, 9:29 AM)
    Hi, I would like to ask for advice on what I am doing wrong. I would like to create BigQuery tables in Google Cloud via Pulumi. The creation works fine, but even though I don't change anything, Pulumi detects a schema change and tries to update the table, which always ends with an error:
    googleapi: Error 409: Already Exists: Table PROJECT:pulumi_test.pulumi_test, duplicate
    Can you think of anything I could be doing wrong? I'm creating the table this way:
    const dataset = new gcp.bigquery.Dataset('pulumi_test', {
    	datasetId: 'pulumi_test',
    	friendlyName: 'pulumi_test',
    	location: 'EU',
    	defaultTableExpirationMs: 3600000,
    	labels: {
    		env: 'default',
    	},
    });
    
    const table = new gcp.bigquery.Table(tableName, {
    	datasetId: dataset.datasetId,
    	tableId: 'pulumi_test',
    	deletionProtection: false,
    	timePartitioning: {
    		type: 'DAY',
    	},
    	labels: {
    		env: 'default',
    	},
    	schema: '[{"name":"test","type":"STRING","mode":"NULLABLE","description":""}]',
    });
    Thank you very much for any advice.
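    If the perpetual diff comes from the API normalizing the schema string (for example dropping the empty "description"), one workaround is the ignoreChanges resource option, which tells Pulumi to stop diffing that property. A sketch with a trimmed-down table; argument values follow the question:

    ```typescript
    import * as gcp from "@pulumi/gcp";

    const dataset = new gcp.bigquery.Dataset("pulumi_test", {
      datasetId: "pulumi_test",
      location: "EU",
    });

    const table = new gcp.bigquery.Table("pulumi_test", {
      datasetId: dataset.datasetId,
      tableId: "pulumi_test",
      deletionProtection: false,
      schema: JSON.stringify([{ name: "test", type: "STRING", mode: "NULLABLE" }]),
    }, {
      ignoreChanges: ["schema"], // suppress the spurious schema diff on every update
    });
    ```

    The trade-off is that genuine schema changes then need the option removed (or the resource replaced) to take effect.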
  • astonishing-gpu-28317 (09/17/2021, 7:55 PM)
    anyone else noticed that cloud run updates are very very slow?
  • hundreds-airport-37168 (09/26/2021, 8:07 PM)
    Hi, is there any way to get the Artifact Registry URL? There's one for Container Registry, but Google now advises users to switch to Artifact Registry instead.
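    Unlike gcr.io, Artifact Registry has no single registry hostname; a Docker repository's URL follows the pattern LOCATION-docker.pkg.dev/PROJECT/REPOSITORY, so it can be assembled from the repository's own outputs. A sketch; the helper and names are illustrative:

    ```typescript
    // Build the Docker repository URL for an Artifact Registry repository.
    function artifactRegistryRepoUrl(location: string, project: string, repositoryId: string): string {
      return `${location}-docker.pkg.dev/${project}/${repositoryId}`;
    }

    // With a gcp.artifactregistry.Repository resource, the same string can be
    // built from its outputs, e.g.:
    //   pulumi.interpolate`${repo.location}-docker.pkg.dev/${repo.project}/${repo.repositoryId}`
    ```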
  • ambitious-engine-26999 (09/27/2021, 7:59 AM)
    Hi, I am trying to create a regional cluster that spans only 2 zones, something that can be done with: gcloud container clusters create manualtest --machine-type n1-standard-4 --num-nodes 1 --enable-autoscaling --max-nodes 4 --min-nodes 0 --region europe-west3 --node-locations europe-west3-a,europe-west3-b. When doing so with Pulumi (TypeScript), it spreads over all 3 zones, whether using the default node pool or, with removeDefaultNodePool, a separate one.
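    The --node-locations flag maps to the nodeLocations input on the cluster (and on node pools). A sketch of the gcloud command above; zone and machine-type values follow the question:

    ```typescript
    import * as gcp from "@pulumi/gcp";

    // Regional control plane, but nodes pinned to two zones via nodeLocations.
    const cluster = new gcp.container.Cluster("manualtest", {
      location: "europe-west3",
      nodeLocations: ["europe-west3-a", "europe-west3-b"],
      initialNodeCount: 1,
      removeDefaultNodePool: true,
    });

    const pool = new gcp.container.NodePool("default-pool", {
      cluster: cluster.name,
      location: cluster.location,
      nodeLocations: ["europe-west3-a", "europe-west3-b"],
      nodeConfig: { machineType: "n1-standard-4" },
      autoscaling: { minNodeCount: 0, maxNodeCount: 4 },
    });
    ```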
  • dazzling-florist-77127 (09/27/2021, 5:31 PM)
    The examples seem to be out of date; I'm getting loads of errors from the names provided in them:
    Diagnostics:
      gcp:compute:HttpHealthCheck (defaultHttpHealthCheck):
        error: 1 error occurred:
        	* Error creating HttpHealthCheck: googleapi: Error 400: Invalid value for field 'resource.name': 'defaultHttpHealthCheck-b494f35'. Must be a match of regex '(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?)', invalid
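    The rejection here is the GCE API refusing the camel-cased, auto-generated name ("defaultHttpHealthCheck-b494f35"); GCE resource names must match the regex in the error. A sketch of one fix, using a lowercase hyphenated Pulumi name:

    ```typescript
    import * as gcp from "@pulumi/gcp";

    // GCE names must match (?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?), so use a
    // lowercase, hyphenated resource name (or set the `name` input explicitly).
    const healthCheck = new gcp.compute.HttpHealthCheck("default-http-health-check", {
      requestPath: "/",
    });
    ```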
  • helpful-tent-95136 (09/29/2021, 8:59 AM)
    Hi, I was wondering if anyone has had an issue with
    gcp.compute.InstanceFromMachineImage
    when using it across projects? For example:
    const myInstance = new gcp.compute.InstanceFromMachineImage(
      `my-instance`,
      {
        zone: 'australia-southeast1-a',
        project: 'vm-project',
        sourceMachineImage: 'projects/image-project/global/machineImages/my-image',
        networkInterfaces: [{ network: vpcId, subnetwork: mySubnet }],
      }
    );
    when I do this, it tries to look up
    sourceMachineImage
    from
    'projects/vm-project/global/machineImages/my-image'
    which of course doesn't exist 😕 Any ideas?
    m
    • 2
    • 6
  • ambitious-engine-26999 (09/30/2021, 2:18 PM)
    Hi, anyone have same problem with creating cluster with zones restriction ? https://pulumi-community.slack.com/archives/CRFUR2DGB/p1632729572047600
  • bitter-apple-60976 (10/02/2021, 2:46 PM)
    Hi everyone, I am trying to use an HTTPS load balancer with API Gateway to front multiple Cloud Run services. To do that, I need to create a NetworkEndpoint for the API Gateway, but I can't seem to find a way to do that with Pulumi. Is it not supported yet, or am I missing something? Thanks
  • prehistoric-activity-61023 (10/06/2021, 5:51 PM)
    Is there a way to automate deployments from the GCP marketplace? I'd need to set up an Elasticsearch cluster and, AFAIK, GCP does not offer any "native" managed solution for that (like Memorystore for Redis or Cloud SQL for PostgreSQL). However, there is something available in the marketplace (link). Another option (just found it): Elastic has official ECK support, which looks like it's basically a k8s operator (link). Automating that should be pretty easy. The question is: has anybody used it and can recommend it? 🙂
  • prehistoric-activity-61023 (10/10/2021, 3:53 PM)
    ^ nobody? 😞
  • dry-sugar-63293 (10/11/2021, 7:59 AM)
    Any help would be appreciated! thanks.
  • dry-sugar-63293 (10/11/2021, 3:09 PM)
    The k8s.networking.v1.Ingress resource has an Output called status that is supposed to contain a loadBalancer.ingress[0].ip node, but the loadBalancer node is always empty ({}), even when the GCP console shows the proper structure below:
    status:
      loadBalancer:
        ingress:
        - ip: XX.XX.XX.XX
    any clues?
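    A hedged workaround sketch: the status captured in state reflects when the provider last observed the object, so a later pulumi refresh may pick up the now-populated loadBalancer field; alternatively, the live object can be re-read with the static .get(). Resource names below are assumptions, and `ingress` stands for the existing Ingress resource:

    ```typescript
    import * as k8s from "@pulumi/kubernetes";
    import * as pulumi from "@pulumi/pulumi";

    // Re-read the live Ingress object; by the time a later update runs, GKE may
    // have filled in status.loadBalancer.
    const live = k8s.networking.v1.Ingress.get(
      "my-ingress-live",
      pulumi.interpolate`${ingress.metadata.namespace}/${ingress.metadata.name}`,
      { dependsOn: [ingress] },
    );

    // Pull the IP out defensively, since status may still be unpopulated.
    export const ingressIp = live.status.apply(s => s?.loadBalancer?.ingress?.[0]?.ip);
    ```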
  • mysterious-zebra-57022 (10/11/2021, 8:01 PM)
    👋 Hi everyone, we are having issues creating and importing cloud sql users
  • mysterious-zebra-57022 (10/11/2021, 8:01 PM)
    ➜  summa-develop git:(main) ✗ pulumi import gcp:sql/user:User ma-user "summa-develop/summa-5d40e47/muhammad.ali@summaft.com" --debug
    Previewing import (summa-develop):
         Type                 Name                         Plan       Info
         pulumi:pulumi:Stack  summa-develop-summa-develop             1 error; 13 debugs
     =   └─ gcp:sql:User      ma-user                      import     1 error
     
    Diagnostics:
      gcp:sql:User (ma-user):
        error: Preview failed: resource 'summa-develop/summa-5d40e47/muhammad.ali@summaft.com' does not exist
     
      pulumi:pulumi:Stack (summa-develop-summa-develop):
        debug: Authenticating using DefaultClient...
    debug:   -- Scopes: [https://www.googleapis.com/auth/compute https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/cloud-identity https://www.googleapis.com/auth/ndev.clouddns.readwrite https://www.googleapis.com/auth/devstorage.full_control https://www.googleapis.com/auth/userinfo.email]
    debug: Authenticating using DefaultClient...
    debug:   -- Scopes: [https://www.googleapis.com/auth/compute https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/cloud-identity https://www.googleapis.com/auth/ndev.clouddns.readwrite https://www.googleapis.com/auth/devstorage.full_control https://www.googleapis.com/auth/userinfo.email]
    debug: Waiting for state to become: [success]
    debug: Terraform is using this identity: roarke.gaskill@summaft.com
    debug: Waiting for state to become: [success]
    debug: Instantiating Google SqlAdmin client for path https://sqladmin.googleapis.com/
    debug: Retry Transport: starting RoundTrip retry loop
    debug: Retry Transport: request attempt 0
    debug: Retry Transport: Stopping retries, last request was successful
    debug: Retry Transport: Returning after 1 attempts
    debug: Removing SQL User "muhammad.ali@summaft.com" because it's gone
        error: preview failed
  • mysterious-zebra-57022 (10/11/2021, 8:02 PM)
    that user exists already but pulumi doesn’t recognize it and pulumi up marks it for creation
  • dry-sugar-63293 (10/12/2021, 11:03 AM)
    Hi, has anyone used Pulumi for managing redirect URIs in "APIs and services" in the Google Cloud console (Credentials blade)? Specifically, I need help with managing redirect URIs for my OAuth 2.0 client IDs; any pointers would be very helpful.
  • faint-area-23556 (10/14/2021, 8:46 PM)
    anyone know if it's possible to import a Cloud Storage bucket using the google-native provider?
  • hallowed-cat-56281 (10/22/2021, 6:09 AM)
    Hey there, 👋 I am facing an interesting problem. We have been deploying our GCP infra through Pulumi for a year and it has worked well overall, but we have started noticing more and more random failures such as "no route found". While this doesn't solve that problem, it highlighted that I should probably consider breaking up the monolith to shorten the feedback loop and keep things separated and easier to navigate and deploy. We currently have one project (acme) and two stacks (staging and prod, on two different GCP accounts). This project is getting relatively big (250 resources), and I was considering creating multiple projects (each with two stacks), e.g. acme-storage, acme-compute. Here are my questions: 1. Is this a good approach, or am I missing anything? 2. I tried creating one small project (e.g. acme-storage), but I struggle when trying to export the existing state of the bigger project to import parts of it into the new project, and I end up with resources that need to be "replaced" when doing a pulumi preview. Is it just a matter of not cleaning up the exported stack file enough/properly before import? Should I even be doing this? It seems dirty.
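    On question 2, one relevant mechanism: rather than exporting and importing state files, outputs can cross project boundaries with stack references, which keeps each small project's state separate. A minimal sketch; org, project, stack, and output names are assumptions:

    ```typescript
    import * as pulumi from "@pulumi/pulumi";

    // In acme-compute, read outputs exported by the acme-storage stack.
    const storage = new pulumi.StackReference("my-org/acme-storage/staging");
    const bucketName = storage.getOutput("bucketName"); // hypothetical exported output
    ```

    For moving the resources themselves without replacement, pulumi import into the new project (and pulumi state delete from the old one) tends to be safer than hand-editing exported state files.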
  • worried-helmet-23171 (10/22/2021, 3:32 PM)
    question - is there a programmatic way to provide Pulumi GCP credentials, as opposed to setting the env var (export GOOGLE_CREDENTIALS etc.)? Additionally, is it possible to use a service account token with Pulumi?
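    One programmatic route is an explicit gcp.Provider, whose credentials input takes the key contents (or a path to the key file) instead of relying on the GOOGLE_CREDENTIALS environment variable. A sketch; the file name and project are assumptions:

    ```typescript
    import * as fs from "fs";
    import * as gcp from "@pulumi/gcp";

    // Read the service-account key and hand it to an explicit provider instance.
    const creds = fs.readFileSync("service-account-key.json", "utf-8");

    const provider = new gcp.Provider("gcp-explicit", {
      project: "my-gcp-project",
      credentials: creds, // key file contents; a file path also works
    });

    // Pass the provider per resource:
    const bucket = new gcp.storage.Bucket("example", { location: "EU" }, { provider });
    ```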
  • bland-oxygen-26190 (10/24/2021, 7:40 PM)
    Does anyone have any experience with using Pulumi in the context of defining and implementing GCP Landing Zones?
  • dazzling-family-13566 (10/26/2021, 4:46 PM)
    Sorry for being very vague, but it's all these error messages give us… I know it's likely from the underlying provider, but at least a message that describes the underlying provider's response would be much more helpful than this, e.g. HTTP 400 Bad Request, especially for processes that run in parallel. Store the response in each parallel process and return it, please. Don't just do an assertion and leave it…
    expected non-nil error with nil state during Create of urn:pulumi:***…$…$docker:index/remoteImage:RemoteImage::…