google-cloud
  • b

    best-summer-38252

    02/04/2023, 7:23 PM
    gcp.projects.Service doesn't block on the service actually being enabled, so the dependent resources fail with "If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry." This happens even when you use dependsOn.
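A common workaround for this propagation lag, since the classic provider returns as soon as the enable call is accepted, is to gate dependents on an explicit poll. A minimal sketch in Python, assuming the pulumi-command provider is installed and gcloud is on the PATH; the service and queue names are illustrative:

    import pulumi
    import pulumi_gcp as gcp
    from pulumi_command import local

    # Enable the API; the provider does not wait for propagation.
    cloudtasks_api = gcp.projects.Service("cloudtasks-api",
        service="cloudtasks.googleapis.com")

    # Poll until gcloud reports the API enabled.
    wait_for_api = local.Command("wait-for-cloudtasks",
        create="until gcloud services list --enabled | grep -q cloudtasks.googleapis.com; do sleep 10; done",
        opts=pulumi.ResourceOptions(depends_on=[cloudtasks_api]))

    # Dependents hang off the poll, not the Service resource itself.
    queue = gcp.cloudtasks.Queue("queue",
        location="us-central1",
        opts=pulumi.ResourceOptions(depends_on=[wait_for_api]))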
  • r

    refined-pilot-45584

    02/04/2023, 7:31 PM
    Additional note: my suggestion is to utilise stacks. Per Google Cloud best practices, the enablement of services should be tightly bound to the establishment of a project. Consider creating a Pulumi stack that sets up your base GCP projects and enables the service APIs within it; think of it as a stack that operates as a GCP project factory. It creates a project in a given org or folder, enables service APIs, and then configures the appropriate IAM, project-dedicated service accounts, and any project-specific org policies. Going forward, use the service account created in this stack as the runner for the follow-on Pulumi stacks that build out project-specific content such as VMs, networks, and GKE or Firestore. This gives stack A time to enable APIs before stack B runs, and stack B can even have logic to confirm the APIs are enabled before commencing. It also follows Google's best practices for project management and security by separating concerns.
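To make the two-stack split concrete, stack B can pull stack A's outputs through a StackReference. A minimal sketch, assuming the factory stack exports outputs under these illustrative names:

    import pulumi
    import pulumi_gcp as gcp

    # Reference the project-factory stack ("org/project/stack" form).
    factory = pulumi.StackReference("my-org/project-factory/prod")

    project_id = factory.get_output("projectId")

    # Build project-specific content inside the prepared project,
    # after the factory stack has already enabled the service APIs.
    topic = gcp.pubsub.Topic("events", project=project_id)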
  • b

    best-summer-38252

    02/04/2023, 7:56 PM
    Thanks Tim. It seems like my app's stack knows what APIs it needs enabled, and going back and forth with a separate project stack would couple the two and make the separation artificial.
  • b

    best-summer-38252

    02/04/2023, 8:07 PM
    Best practices are not universal for all contexts. I have not read security guidance from GCP specifying this type of separation, but I know AWS's "Well-Architected Framework" is garbage as the ideal and even worse as a universal that all must mimic. For my team's context, I think projects should be disposable, so the CD for the app can tear down and provision its containing project. But that's a different discussion. If my projects.Service and my cloudtasks.Queue are on the same DAG with a dependent relationship (regardless of some other person's best practices), then shouldn't Pulumi make sure the Service is provisioned before continuing the walk to the cloudtasks.Queue resource?
  • s

    stocky-restaurant-98004

    02/06/2023, 4:33 PM
    👋
  • s

    straight-arm-50771

    02/08/2023, 7:52 PM
    Looking for some best-practice guidance on Cloud Functions. Presently on v1, spec below. I'm looking for a method to keep the CFs up to date with HEAD on the identified branch. This doesn't appear to be possible natively via Pulumi/Terraform (per this ticket). It doesn't seem ideal, either, to add another CI component for a resource that's managed in IaC. We've had issues deploying CF v2 previously; I can revisit. Is this better handled with v2?
    lora-iot-provision:
      type: gcp:cloudfunctions:Function
      properties:
        description: "Provisions a wireless device in AWS IoT."
        runtime: go119
        availableMemoryMb: 256
        environmentVariables:
          APP_ENV: ${iotEnv}
          AWS_REGION: us-east-1
        secretEnvironmentVariables:
          - key: AWS_ACCESS_KEY_ID
            secret: cf_iotAwsAccessKey_${environment}
            version: latest
            projectId: my-gcp-proj
          - key: AWS_SECRET_ACCESS_KEY
            secret: cf_iotAwsSecretKey_${environment}
            version: latest
            projectId: my-gcp-proj
        entryPoint: ProvisionDevicePubSub
        eventTrigger:
          eventType: providers/cloud.pubsub/eventTypes/topic.publish
          resource: projects/${pulumi.stack}/topics/lorawan-device-provisioned
        project: ${pulumi.stack}
        region: us-east1
        serviceAccountEmail: ${cloud-functions-sa.email}
        sourceRepository: 
          url: "<https://source.developers.google.com/projects/my-gcp-proj/repos/github_my-mirrored-repo/moveable-aliases/main/paths/lora-iot-provision>"
  • b

    brainy-caravan-45245

    02/09/2023, 5:15 PM
    has renamed the channel from "gcp" to "google-cloud"
  • s

    strong-belgium-33104

    02/10/2023, 2:38 AM
    Hi there, I have a question please. I am trying to automate project creation using Pulumi. I keep getting 403 access denied. However, I am the owner of the GCP account and authenticated with gcloud auth application-default login. Is there anything specific I need to do to allow Pulumi to provision the projects?
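For context, a 403 here usually means the authenticated identity lacks permission at the organization or folder level; owning individual projects is not enough. Creating projects requires roles/resourcemanager.projectCreator on the parent (and roles/billing.user to attach billing). A minimal sketch, with illustrative IDs:

    import pulumi_gcp as gcp

    project = gcp.organizations.Project("dev-project",
        project_id="my-dev-project-123456",
        org_id="123456789012",
        billing_account="01ABCD-234567-89EFGH")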
  • k

    kind-island-70054

    02/10/2023, 2:33 PM
    Hi, I'm trying to add a domain to a managed SSL certificate. I'm OK with replacing it because it's just a dev project with no traffic. When I proceed with the update, though, it fails because a target HTTPS proxy uses it. I would expect Pulumi to remove the target HTTPS proxy first and then the SSL certificate, but it does not.
    Previewing update (infra.dev):
         Type                                       Name                                      Plan        Info
         pulumi:pulumi:Stack                        infrastructure-infra.dev
     +   ├─ gcp:compute:RegionNetworkEndpointGroup  global-lb-notification-rest-europe-west1  create
     +-  ├─ gcp:compute:ManagedSslCertificate       global-lb                                 replace     [diff: ~managed]
     +   ├─ gcp:compute:RegionNetworkEndpointGroup  global-lb-notification-rest-europe-west4  create
     +   ├─ gcp:compute:BackendService              global-lb-notification-rest               create
     ~   └─ gcp:compute:URLMap                      global-lb                                 update      [diff: ~hostRules,pathMatchers]
    
    
    Resources:
        + 3 to create
        ~ 1 to update
        +-1 to replace
        5 changes. 34 unchanged
    
    Do you want to perform this update? yes
    Updating (infra.dev):
         Type                                       Name                                      Status                   Info
         pulumi:pulumi:Stack                        infrastructure-infra.dev                  **failed**               1 error
     +   ├─ gcp:compute:RegionNetworkEndpointGroup  global-lb-notification-rest-europe-west4  created (11s)
     +   ├─ gcp:compute:RegionNetworkEndpointGroup  global-lb-notification-rest-europe-west1  created (11s)
     +-  └─ gcp:compute:ManagedSslCertificate       global-lb                                 **replacing failed**     1 error
    
    
    Diagnostics:
      pulumi:pulumi:Stack (infrastructure-infra.dev):
        error: update failed
    
      gcp:compute:ManagedSslCertificate (global-lb):
        error: deleting urn:pulumi:infra.dev::infrastructure::gcp:compute/managedSslCertificate:ManagedSslCertificate::global-lb: 1 error occurred:
        	* Error when reading or editing ManagedSslCertificate: googleapi: Error 400: The ssl_certificate resource 'projects/dev-julien-****/global/sslCertificates/global-lb' is already being used by 'projects/dev-julien-****/global/targetHttpsProxies/global-lb-35b3f02', resourceInUseByAnotherResource
    
    Outputs:
    Here is my pulumi code:
    const sslCertificate = new gcp.compute.ManagedSslCertificate(key, {
      name: key,
      managed: {
        domains: domains.map(({ domain }) => domain),
      },
    });

    const targetHttpsProxy = new gcp.compute.TargetHttpsProxy(key, {
      urlMap: urlMap.id,
      sslCertificates: [sslCertificate.name],
    });
    Is there a way to tell pulumi that it needs to remove dependencies too?
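One workaround, rather than forcing Pulumi to delete the proxy first, is to drop the explicit name so the replacement certificate is autonamed: Pulumi can then create the new certificate, repoint the proxy, and delete the old one last. A hedged sketch in Python (the thread's code is TypeScript); the URL map reference is illustrative:

    import pulumi_gcp as gcp

    # No explicit name: autonaming allows create-before-delete on replace.
    ssl_certificate = gcp.compute.ManagedSslCertificate("global-lb",
        managed=gcp.compute.ManagedSslCertificateManagedArgs(
            domains=["example.com", "www.example.com"]))

    target_https_proxy = gcp.compute.TargetHttpsProxy("global-lb",
        url_map="projects/my-project/global/urlMaps/global-lb",
        ssl_certificates=[ssl_certificate.id])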
  • d

    delightful-monkey-90700

    02/14/2023, 4:45 AM
    I'm creating a Cloud Build job from a local directory. GCP Cloud Build expects the source code to be provided as a .tar.gz in a bucket. There doesn't seem to be an obvious way to upload a directory as a .tar.gz file (just using FileAsset uploads it as a zip file, apparently, and FileArchive provides no mechanism for specifying the kind of archive to produce).
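Since FileArchive always emits a zip, one workaround is to build the .tar.gz yourself while the program runs and upload it as a plain FileAsset. A minimal sketch, assuming the sources live in ./app:

    import tarfile
    import pulumi
    import pulumi_gcp as gcp

    # Create the tarball at program run time.
    archive_path = "source.tar.gz"
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add("./app", arcname=".")

    bucket = gcp.storage.Bucket("build-source", location="US")
    obj = gcp.storage.BucketObject("source-tarball",
        bucket=bucket.name,
        source=pulumi.FileAsset(archive_path))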
  • m

    melodic-room-61098

    02/14/2023, 10:49 AM
    Hello #google-cloud. I've created a simple GCP Cloud Run service based on https://github.com/pulumi/examples/tree/master/gcp-ts-docker-gcr-cloudrun. I'm using a single index.ts and ran into an issue: the container image is being built and uploaded in parallel with the service update. This means the service can only see the older image and won't update, so I have to run pulumi up twice, which is not ideal. Is there a way to declare this dependency in code? This is roughly what I have now: https://gist.github.com/thekarel/f8701649097eaf450d96bdf889db6d7c I've thought of using pulumi up --parallel 1 (might slow things down) or putting the image and service in different folders (makes up more cumbersome). Any thoughts?
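The usual fix is to pass the build's output into the service instead of a literal tag; Pulumi then orders the service update after the image push. A hedged sketch with the Docker provider (property names vary slightly between docker provider versions):

    import pulumi_docker as docker
    import pulumi_gcp as gcp

    image = docker.Image("app",
        image_name="gcr.io/my-project/app",
        build=docker.DockerBuildArgs(context="./app"))

    service = gcp.cloudrun.Service("app",
        location="us-central1",
        template=gcp.cloudrun.ServiceTemplateArgs(
            spec=gcp.cloudrun.ServiceTemplateSpecArgs(
                containers=[gcp.cloudrun.ServiceTemplateSpecContainerArgs(
                    # An Output, not a string: this creates the dependency edge.
                    image=image.image_name,
                )])))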
  • r

    refined-pilot-45584

    02/14/2023, 10:56 PM
    Hey all, question: is service/API enablement on a GCP project possible from the Google Native provider? I am trying to find the Native functionality corresponding to this Classic resource: https://www.pulumi.com/registry/packages/gcp/api-docs/projects/service/
  • d

    delightful-monkey-90700

    02/15/2023, 6:48 PM
    The GCP Native cloudbuild.v1.Build() doesn't seem to be finished yet. Is there anything equivalent for GCP Classic?
  • j

    jolly-journalist-76169

    02/16/2023, 9:53 AM
    Hope someone can help me with my problem! I would like to keep the state of Pulumi in GCS (Google Cloud Storage). What am I doing now? 1. I use the googleapis npm module, where I log in via OAuth2 to GCP and get the accessToken; I have it stored in a variable. 2. I would like to use my accessToken in Pulumi to hold the state in GCS. I have read about setting the accessToken (https://www.pulumi.com/registry/packages/gcp/installation-configuration/#configuration-reference; pulumi config set gcp:accessToken), but all attempts end up forcing me to log into Pulumi Service, and I would like to avoid that. Actually, the matter would be solved if someone could show me how to use the pulumi login gs:// command and pass my accessToken to it. I hope there is someone here with the knowledge!
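For the mechanics: pulumi login gs://<bucket> only selects the state backend; credentials come from the environment (Application Default Credentials), not from pulumi config set gcp:accessToken, which configures the GCP provider rather than the backend. A hedged sketch of driving this from code with the Automation API, assuming a service-account key file, since raw OAuth access-token support in the GCS backend is not guaranteed:

    import pulumi.automation as auto

    def pulumi_program():
        pass  # resources would be declared here

    stack = auto.create_or_select_stack(
        stack_name="dev",
        project_name="my-project",
        program=pulumi_program,
        opts=auto.LocalWorkspaceOptions(
            # Equivalent to running: pulumi login gs://my-state-bucket
            project_settings=auto.ProjectSettings(
                name="my-project",
                runtime="python",
                backend=auto.ProjectBackend(url="gs://my-state-bucket")),
            env_vars={
                "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/sa-key.json",
                "PULUMI_CONFIG_PASSPHRASE": "",
            }))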
  • g

    gentle-intern-40981

    02/21/2023, 10:07 PM
    Hi all, I started off by creating an aws-python project, which worked fine, but after installing pulumi_gcp, adding the required gcp:project config, and adding a gcp resource, pulumi preview fails with:
        error: could not validate provider configuration: 1 error occurred:
                * Invalid or unknown key
    Debug output shows:
        debug: exception when preparing or executing rpc: Traceback (most recent call last):
          File "/home/dissonance/Code/ouroboros/infrastructure/venv/lib/python3.10/site-packages/pulumi/runtime/resource.py", line 916, in do_rpc_call
            return monitor.RegisterResource(req)
          File "/home/dissonance/Code/ouroboros/infrastructure/venv/lib/python3.10/site-packages/grpc/_channel.py", line 946, in __call__
            return _end_unary_response_blocking(state, call, False, None)
          File "/home/dissonance/Code/ouroboros/infrastructure/venv/lib/python3.10/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
            raise _InactiveRpcError(state)
        grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
            status = StatusCode.UNAVAILABLE
            details = "error reading from server: read tcp 127.0.0.1:52012->127.0.0.1:33739: use of closed network connection"
            debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:40435 {created_time:"2023-02-21T16:43:34.450732655-05:00", grpc_status:14, grpc_message:"error reading from server: read tcp 127.0.0.1:52012->127.0.0.1:33739: use of closed network connection"}"
    If I create a new project using the gcp-python template, this problem does not exist.
  • g

    great-sunset-355

    02/22/2023, 10:55 AM
    Can anyone please confirm that this is the correct way to ignore changes to the image of this resource? https://www.pulumi.com/registry/packages/gcp/api-docs/cloudrunv2/job/#jobtemplatetemplatecontainer
        ignoreChanges: [
          "template.template.containers[*].image",
        ],
    Because I also tried ignoreChanges: ["*"] and it did not work at all. After some experiments, I noticed this works: "template.template.containers[0].image"
  • d

    delightful-monkey-90700

    02/23/2023, 9:38 PM
    The google-native plugin for cloudbuild.v1.Build() fails because the provider is base64-encoding a UUID string:
        error: waiting for completion / read state googleapi: Error 404: Requested entity was not found. (URL=https://cloudbuild.googleapis.com/v1/projects/production/locations/us-west2/builds/NTBhNjY5MjMtZTJmYy00YTE0LWI5ZjQtNmEwZWVkNGIwMWIw): polling operation status: googleapi: Error 404: Requested entity was not found.
        > echo 'NTBhNjY5MjMtZTJmYy00YTE0LWI5ZjQtNmEwZWVkNGIwMWIw' | base64 -d
        50a66923-e2fc-4a14-b9f4-6a0eed4b01b0
    The real URL should have been https://cloudbuild.googleapis.com/v1/projects/production/locations/us-west2/builds/50a66923-e2fc-4a14-b9f4-6a0eed4b01b0
  • b

    better-pencil-34948

    02/24/2023, 5:34 PM
    I am looking to understand Stack References, in particular ones where the Stack Reference is in a different Google Storage bucket than the stack being used. Scenario:
    • Stack Account: configures a private artifact registry with charts, Docker images, etc., along with a secret key output. Hosted in, say, gs://bucket/account/.
    • Stack B: brings up a Kubernetes cluster, networking, etc. Hosted in, say, gs://otherbucket/b/.
    • Problem: in Stack B's code, I want to reference Stack A to bring down the secret key and allow pulling from Stack A's output. Result (Go SDK):
        pulumi:pulumi:StackReference (gs://bucket/account):
        error: Preview failed: unknown stack "account"
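A note on why this fails: with self-managed backends, a StackReference resolves against the backend you are currently logged into, so a stack living in a different bucket is not reachable; both stacks would need to share one bucket. A hedged Python sketch, assuming both stacks are in gs://bucket and the output name is illustrative (newer CLIs use the fully qualified organization/<project>/<stack> form):

    import pulumi

    account = pulumi.StackReference("organization/account-project/account")
    registry_key = account.get_output("registryKey")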
  • r

    rich-motorcycle-3089

    02/27/2023, 7:10 PM
    Has anyone been able to delete health checks from a Cloud Run service? I have a startup probe and a liveness probe defined in my Cloud Run service, driven off some config file values. When I clear out those values, I want to remove the corresponding health checks. Filling the appropriate values with nil seems to cause Pulumi to ignore the health checks altogether (keeping the existing values for the next revision). Giving it empty objects causes Pulumi to overwrite the values, but with default implementations of the health check. I'm using the Go SDK.
  • b

    best-summer-38252

    02/28/2023, 7:18 PM
    I'm trying to add an IAM policy to a service account as per https://www.pulumi.com/registry/packages/gcp/api-docs/serviceaccount/iammember/. The policy has a list of bindings, and every role I have tried results in an Error 400 (other than "roles/iam.serviceAccountUser"):
        Error 400: Role roles/workflow.invoker is not supported for this resource., badRequest
    Surely a service account can have a role as per the Pulumi example. The example shows the format of the role being just the role name, roles/iam.serviceAccountUser, which seems consistent with the type info: "The role that should be applied. Only one gcp.organizations.IAMBinding can be used per role. Note that custom roles must be of the format organizations/{{org_id}}/roles/{{role_id}}." Given I am not using custom roles, is roles/workflow.invoker the correct format?
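The likely cause of the 400: gcp.serviceaccount.IAMMember grants roles on the service account as a resource (that is, who may use or impersonate it), which is why only roles like roles/iam.serviceAccountUser are accepted. To let a service account invoke workflows, grant the role at the project level with the account as the member. A minimal sketch (project ID illustrative; note the canonical role ID is roles/workflows.invoker, plural):

    import pulumi_gcp as gcp

    sa = gcp.serviceaccount.Account("runner",
        account_id="runner",
        display_name="Workflow runner")

    invoker = gcp.projects.IAMMember("runner-workflows-invoker",
        project="my-gcp-project",
        role="roles/workflows.invoker",
        member=sa.email.apply(lambda e: f"serviceAccount:{e}"))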
  • c

    clean-winter-59829

    03/01/2023, 5:56 AM
    how do you get the current GCP project of the stack in pulumi?
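Two common ways in Python, for reference:

    import pulumi_gcp as gcp

    # From stack config (gcp:project), if set:
    project_from_config = gcp.config.project

    # Or from the credentials the provider actually resolved:
    project_from_creds = gcp.organizations.get_client_config().project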
  • s

    stocky-restaurant-98004

    03/01/2023, 6:29 PM
    I'm gonna write a basic clickops -> Pulumi importer for Google Cloud, similar to the one I wrote for AWS in this blog post. Anyone got a basic Google Cloud architecture, e.g. in TF that I could use as source material? Thinking something like a VPC and some VMs, or maybe a GKE cluster. (Not sure how common VPC/VMs are on Google Cloud.)
  • p

    purple-electrician-80135

    03/02/2023, 12:54 AM
    Hello GCP channel ... I am getting a strange error deploying an autopilot cluster and I'm unsure what I'm doing wrong.
    python inline source runtime error: 'Cluster' object is not callable
    From this code:
    def create_gks_autopilot_cluster(project_id, name, region, network_id, subnet_id):
    
        gke_min_version = "1.25.6-gke.200"
        default = gcp.serviceaccount.Account("default",
                                             account_id="service-account-id",
                                             display_name="Service Account")
    
        # Define the GKE Autopilot cluster
        gke_cluster = gcp.container.Cluster(name,
                                            enable_autopilot=True,
                                            ip_allocation_policy=gcp.container.ClusterIpAllocationPolicyArgs(
                                                cluster_secondary_range_name="pods",
                                                services_secondary_range_name="services",
                                            ),
                                            location=region,
                                            min_master_version=gke_min_version,
                                            network=network_id,
                                            release_channel={"channel": "STABLE"},
                                            subnetwork=subnet_id,
                                            project=project_id,)
        return gke_cluster
    Is there anything obvious I should be doing differently? This is running in a Jupyter notebook, which has made configs unavailable (probably due to being unable to find the .yaml file), but otherwise it seems to work.
  • m

    many-knife-65312

    03/03/2023, 11:41 PM
    👋
  • m

    many-knife-65312

    03/03/2023, 11:42 PM
    I'm trying to use the .get() function to check for existing GCP resources, but I'm struggling with the unique provider ID. Does anyone have docs or tips for using .get()?
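For reference, .get() adopts an existing resource read-only by its provider ID, and the ID format differs per resource; the Import section of each resource's registry page documents it. Two illustrative Python examples:

    import pulumi_gcp as gcp

    # Storage bucket: the ID is the bucket name.
    bucket = gcp.storage.Bucket.get("existing-bucket", "my-existing-bucket")

    # Compute network: the ID is the full resource path.
    network = gcp.compute.Network.get(
        "existing-net", "projects/my-project/global/networks/default")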
  • g

    gorgeous-architect-28903

    03/09/2023, 12:14 PM
    Anyone seen this when creating a GKE node pool? "Cannot specify both name and name_prefix for a node_pool". I'm definitely not setting a name. It happens even if I set Name to nil explicitly.
  • l

    limited-wolf-14679

    03/09/2023, 11:51 PM
    Hi guys, I am new to Pulumi and trying to deploy Kubeflow on GCP. I am using Pulumi Python with GCP and have deployed the pulumi kubernetes-gcp-python template; now I would like to deploy Kubeflow, but I am stuck. Any help? I have tried to run the following code with no success:
    # Imports assumed by this snippet
    import pulumi
    import pulumi_gcp as gcp
    from pulumi_kubernetes.apps.v1 import Deployment, DeploymentSpecArgs
    from pulumi_kubernetes.core.v1 import (
        ContainerArgs,
        EnvVarArgs,
        PodSpecArgs,
        PodTemplateSpecArgs,
        Service,
    )
    from pulumi_kubernetes.meta.v1 import LabelSelectorArgs, ObjectMetaArgs

    # new kubeflow
    kubeflow = gcp.container.Registry("kubeflow")
    
    deployment = Deployment(
        "kubeflow-deployment",
        spec=DeploymentSpecArgs(
            replicas=1,
            selector=LabelSelectorArgs(
                match_labels={
                    "app": "kubeflow",
                },
            ),
            template=PodTemplateSpecArgs(
                metadata=ObjectMetaArgs(
                    labels={
                        "app": "kubeflow",
                    },
                ),
                spec=PodSpecArgs(
                    containers=[
                        ContainerArgs(
                            name="kubeflow",
                            image="kubeflow",
                            env=[
                                EnvVarArgs(
                                    name="NAMESPACE",
                                    value="kubeflow",
                                ),
                            ],
                            command=["/bin/bash"],
                            args=[
                                "-c",
                                "/opt/deploy.sh",
                            ]
                            
                        )
                    ]
    
                )
            )
        ),
        metadata=ObjectMetaArgs(
            labels={
                "app": "kubeflow",
            }
        )
    )
    
    pulumi.export("name", deployment.metadata["name"])
    
    # Allocate an IP to the Deployment.
    app_name = "kubeflow"
    app_labels = { "app": app_name }
    frontend = Service(
        app_name,
        metadata={
            "labels": deployment.spec["template"]["metadata"]["labels"],
        },
        spec={
            "type":  "LoadBalancer",
            "ports": [{ "port": 80, "target_port": 80, "protocol": "TCP" }],
            "selector": app_labels,
        })
    
    # When "done", this will print the public IP.
    result = None
    
    ingress = frontend.status.apply(lambda v: v["load_balancer"]["ingress"][0] if "load_balancer" in v else None)
    if ingress is not None:
        result = ingress.apply(lambda v: v["ip"] if "ip" in v else v["hostname"])
    
    pulumi.export("ip", result)
    I am getting the following error:
        * the Kubernetes API server reported that "default/kubeflow-deployment-d5cb3c03" failed to fully initialize or become live: 'kubeflow-deployment-d5cb3c03' timed out waiting to be Ready
        * [MinimumReplicasUnavailable] Deployment does not have minimum availability.
        * [ProgressDeadlineExceeded] ReplicaSet "kubeflow-deployment-d5cb3c03-769cdfbd67" has timed out progressing.
        * Minimum number of live Pods was not attained
        * [Pod kubeflow-deployment-d5cb3c03-769cdfbd67-4lsjp]: containers with unready status: [kubeflow] -- [ImagePullBackOff] Back-off pulling image "kubeflow"
  • v

    victorious-florist-84818

    03/14/2023, 9:49 AM
    Hey, I have a question regarding the google native provider for Pulumi. Is it fully maintained by Pulumi, or is Google supporting this as well?
  • b

    billions-hydrogen-34268

    03/15/2023, 5:43 PM
    I want to create a Log Router Sink. Is a ProjectSink what I should use?
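For what it's worth, gcp.logging.ProjectSink is the project-level Log Router sink resource. A minimal sketch sending errors to a storage bucket (names illustrative); the sink's writer identity also needs write access to the destination:

    import pulumi_gcp as gcp

    bucket = gcp.storage.Bucket("log-sink-dest", location="US")

    sink = gcp.logging.ProjectSink("error-sink",
        destination=bucket.name.apply(lambda n: f"storage.googleapis.com/{n}"),
        filter="severity >= ERROR",
        unique_writer_identity=True)

    # Grant the sink's generated identity permission to write objects.
    writer = gcp.storage.BucketIAMMember("sink-writer",
        bucket=bucket.name,
        role="roles/storage.objectCreator",
        member=sink.writer_identity)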
  • c

    chilly-garage-80867

    03/15/2023, 8:00 PM
    Anyone deploying Autopilot GKE getting this error? ``````