general
  • f

    flat-umbrella-41594

    10/26/2022, 2:05 PM
Is there a way to debug a Python Pulumi program that is run via `pulumi up`? When I say debug, I mean with breakpoints, not by printing.
    r
    • 2
    • 2
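One approach that works for breakpoint debugging: have the program wait for a debugger to attach before continuing. A minimal sketch, assuming the third-party `debugpy` package is installed and an IDE that can attach to it; the `PULUMI_DEBUG_ATTACH` variable name is my own convention, not a Pulumi feature:

```python
import os

def maybe_attach_debugger() -> bool:
    """Optionally block until a debugger attaches, then return True.

    Call this at the top of the Pulumi program (__main__.py), run
    `PULUMI_DEBUG_ATTACH=1 pulumi up`, and attach the IDE's debugger
    to localhost:5678; breakpoints then work as usual.
    """
    if not os.environ.get("PULUMI_DEBUG_ATTACH"):
        return False  # normal runs are unaffected
    import debugpy  # third-party; imported only when debugging is requested
    debugpy.listen(5678)
    print("Waiting for debugger to attach on port 5678...")
    debugpy.wait_for_client()
    return True
```

Without the environment variable set, the helper is a no-op, so it is safe to leave in place for CI runs.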
  • g

    green-musician-49057

    10/26/2022, 4:04 PM
Has anyone had issues using the Kafka provider to modify the configuration of RedPanda topics? We are able to use the provider to create and delete topics just fine, but updating the `cleanup.policy` via the `config` argument yields this error, with log verbosity set to 11:
I1026 06:38:44.052086   38298 provider_plugin.go:1617] provider received rpc error `Unknown`: `updating urn:pulumi:stack::project::kafka:index/topic:Topic::my.topic.name: 1 error occurred:
	* Error waiting for topic (my.topic.name) to become ready: couldn't find resource (21 retries)
We know that the provider is able to communicate with the brokers, and CRUD operations on ACLs work fine.
  • b

    broad-toddler-72261

    10/26/2022, 5:58 PM
Has anyone upgraded Pulumi from v1 to v3? Were you required to upgrade to v2 first? Any gotchas, etc.?
  • s

    straight-arm-50771

    10/26/2022, 6:02 PM
When is your https://get.pulumi.com/ going to pull down `v3.44.2`? The `fatal error: concurrent map read and map write` has been driving me crazy.
    c
    b
    • 3
    • 11
  • p

    polite-ocean-13631

    10/26/2022, 7:39 PM
Pulumi components have a parameter `remote`, which is described in the Python docstring as: "True if this is a remote component resource." What does it mean for something to be a "remote component resource"? I wasn't able to find any Pulumi docs that mention this.
    e
    • 2
    • 1
  • c

    cuddly-magician-97620

    10/26/2022, 7:45 PM
What is the latest stable version? Both v3.44.1 and v3.44.2 (at least) are broken.
    e
    • 2
    • 4
  • s

    steep-toddler-94095

    10/26/2022, 9:17 PM
I'm using the https://github.com/pulumi/pulumi-command package and am finding that on every `pulumi preview` it says there is an update, but when I view the details there is nothing displayed (as expected, because there's not actually any diff). Is this a bug, or is this how this package is supposed to work when the `update` parameter is filled out?
  • w

    wet-noon-14291

    10/26/2022, 10:02 PM
Has anyone experienced `pulumi up` failing by being "killed"? It happens all the time now in one of our projects:
    ➜  deploy git:(deps/minimist_1.2.7) ✗ pulumi up
    View Live: https://.....
    
    [1]    3433262 killed     pulumi up
    ➜  deploy git:(deps/minimist_1.2.7) ✗
    e
    • 2
    • 10
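When a process dies with just "killed" and no other output, it is usually the kernel's OOM killer rather than Pulumi itself. A couple of things worth trying (a sketch; the commands assume Linux and, for the heap option, a Node.js-based Pulumi project):

```sh
# Check whether the OOM killer ended the process:
dmesg | grep -iE 'killed process|out of memory'

# For Node.js projects, raise the V8 heap limit before retrying:
NODE_OPTIONS=--max-old-space-size=4096 pulumi up
```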
  • c

    clever-rose-11123

    10/27/2022, 1:02 AM
Has anyone got any examples of writing native providers in Python or JS/TS? The native provider boilerplate from https://www.pulumi.com/docs/guides/pulumi-packages/how-to-author/ is only for Go.
    b
    e
    • 3
    • 15
  • p

    proud-art-41399

    10/27/2022, 7:54 AM
Hi, I'm using Pulumi with a self-managed S3 backend to manage AWS resources. I have an `infra` stack which provides basic resources for the rest of the stacks. One example is an ACM certificate which is managed by the `infra` stack and used e.g. in an `api` stack. Now when I update the `infra` stack, it tries to replace the ACM certificate. It creates the new certificate but fails to delete the old one with a `ResourceInUseException`, because the certificate is in use by the resources managed by the `api` stack (via a stack reference). I have to deploy the dependent stacks so they use the new certificate and then redeploy the `infra` stack. Does this have any "standard" solution? I'm thinking of using S3 bucket notifications which would trigger a Lambda function when the `infra` stack (backed by the S3 bucket) is updated, which would redeploy the dependent stacks and then retry the deployment of the `infra` stack. But maybe there's a more elegant way.
    f
    l
    • 3
    • 5
  • b

    bumpy-laptop-30846

    10/27/2022, 10:00 AM
Hello, is it possible to know if a resource actually exists when doing a preview? I am using TypeScript, but the question applies to all SDKs, I imagine.
    e
    • 2
    • 7
  • d

    damp-honey-93158

    10/27/2022, 10:57 AM
A general question about dependencies in Pulumi: if I have a class derived from ComponentResource, let's call it the "Manager" class, and within the constructor of Manager I instantiate other resources using `parent = this`, will the resources I'm instantiating in the constructor of Manager implicitly also depend on the same things that Manager depends on (assuming its ComponentResourceOptions value has dependencies)?
    e
    • 2
    • 3
  • f

    fierce-engine-31599

    10/27/2022, 1:15 PM
Hi! I wonder if it's possible to select different state backends with a runtime argument? Currently the only option I've seen is to log in to a backend, after which every action is done against that backend.
    e
    • 2
    • 4
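For context, the CLI's state backend is selected by `pulumi login`, so the closest thing to a runtime switch is logging in to a different backend before running (the bucket name below is illustrative):

```sh
pulumi login s3://my-state-bucket   # point the CLI at a self-managed backend
pulumi up                           # runs against that backend's state
pulumi logout
pulumi login                        # back to the default Pulumi Cloud backend
```

The `PULUMI_BACKEND_URL` environment variable can also select the backend per invocation, which avoids mutating the stored login.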
  • e

    echoing-boots-57590

    10/27/2022, 8:59 PM
Is Pulumi able to auto-detect drift and auto-sync like Crossplane?
    b
    • 2
    • 1
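For reference: Pulumi does not run a controller that continuously reconciles the way Crossplane does; drift is detected on demand. A sketch of the usual commands:

```sh
pulumi refresh       # read actual cloud state and update the stack's state file
pulumi up --refresh  # refresh first, then plan/apply against the program
```

Continuous reconciliation requires something external, e.g. a scheduled CI job or the Pulumi Kubernetes Operator, to run these periodically.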
  • d

    damp-honey-93158

    10/28/2022, 4:58 AM
I'm after some advice... I've got a few situations where we create our own ComponentResource, and it depends on something that isn't an Output<string>; e.g. a component resource for our deployment of cert-manager (via a Release) depends on the re-injection of a Secret. In these cases I've written code that creates the dependent resource within an Apply() function, but I understand this isn't best practice, because preview won't see the creation of things within Apply(). What's the general pattern to follow to avoid this?
    e
    • 2
    • 4
  • o

    orange-airport-64592

    10/28/2022, 8:17 AM
Hi everyone, I have a problem managing multiple environments that belong to different AWS accounts with Pulumi, and I want to ask for advice. I currently have a production environment in an old AWS account; it's half created manually and half via Pulumi. I plan to deploy another staging environment in a new AWS account dedicated to testers, for testing purposes. The staging and production environments are both on AWS but in different accounts. My goal is to manage both environments with the same Pulumi code. I envision doing this:
For those resources manually created in the production environment, I first generate the code through `pulumi import`. Then I use the same code but a different state to create resources, and this new state is connected to my new staging environment.
I did some tests and have the following doubts and uncertainties:
I found that the imported code contains `ARN` attributes, and an `ARN` is bound to account information; even so, most resources can still be created successfully without making any changes, except for S3 buckets, whose bucket name property I need to modify.
I'm not sure, for this imported code, which attributes I can modify without affecting the original prod environment, and which attributes I mustn't modify. (I want one set of code to fit both environments.)
Is my plan suitable, and is there a better official one?
  • a

    acceptable-xylophone-97331

    10/28/2022, 1:21 PM
    Hi,
  • a

    acceptable-xylophone-97331

    10/28/2022, 1:30 PM
I am trying to create a Cloud Build trigger (with a build config filename) that reacts to pull requests in GitHub. I am using Python with GCP classic. I get the error: Error creating Trigger: googleapi: Error 400: Repository mapping does not exist. My code is:
_ = gcp.cloudbuild.Trigger(
    filename="ci_cd/version.cloudbuild.yaml",
    name="my_resource_name",
    project="my_project_id",
    resource_name="my_resource_name",
    github=cloudbuild.TriggerGithubArgs(
        pull_request=cloudbuild.TriggerGithubPullRequestArgs(
            branch="^main$"
        ),
    ),
)
Thanks for your help.
  • m

    miniature-receptionist-24463

    10/28/2022, 3:20 PM
Hello, I had the "override_main_response_version" option enabled in OpenSearch; even after removing it from index.ts, Pulumi is still not deleting it from the stack. There is a diff between the running stack and the template. Any help? I see the same problem was faced before but couldn't see the solution. https://pulumi-community.slack.com/archives/C84L4E3N1/p1632272442488400
    c
    • 2
    • 4
  • k

    kind-country-41992

    10/28/2022, 4:34 PM
Can anyone help with how to label a namespace in Kubernetes using Pulumi? I am trying to deploy Istio on Kubernetes using Pulumi with Python.
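For the question above, a minimal sketch of labeling a namespace with the Python Kubernetes provider (names are illustrative; if the namespace already exists outside Pulumi, you would import it or use the provider's patch resources instead of creating it):

```python
import pulumi_kubernetes as k8s

# A namespace carrying the label Istio's sidecar injector looks for.
app_ns = k8s.core.v1.Namespace(
    "app-ns",
    metadata={
        "name": "app",
        "labels": {"istio-injection": "enabled"},
    },
)
```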
  • c

    curved-kitchen-23422

    10/28/2022, 4:37 PM
Hi team, we have deployed Elasticsearch in AWS using Pulumi (Python). Later we upgraded to the OpenSearch 1.3 engine version and it is working fine. We are trying to migrate the data node EBS volume from gp2 to gp3, but got an error like the below:
error: Domain resource has a problem: expected ebs_options.0.volume_type to be one of [standard gp2 io1], got gp3. Examine values at 'Domain.EbsOptions.VolumeType'.
Based on AWS docs, the r6g.large.search instance type supports gp3 volumes, and using the console we are able to see the gp3 option, but using Pulumi we get the error. Can anyone help resolve this issue? Thanks in advance.
  • c

    cuddly-magician-97620

    10/28/2022, 9:50 PM
One of the recent `pulumi/aws` updates (somewhere between 4.0.0 and 5.18.0) has reversed the `skip_final_snapshot` implicit default. It is now `false` if not defined explicitly. At the same time, `finalSnapshotIdentifier` is not a required input for the `aws.rds.Instance` resource. You are setting people up for trouble with this. Creating an `aws.rds.Instance` resource with the minimum required inputs results in `skipFinalSnapshot: false` and an empty `finalSnapshotIdentifier` attribute. Try to destroy or replace such a DB, and Pulumi barks `final_snapshot_identifier is required when skip_final_snapshot is false`. Fair enough, except it should be required at DB creation time, and it is not.
    b
    • 2
    • 2
  • r

    rhythmic-tailor-1242

    10/30/2022, 9:33 PM
Hi all, I created a new `auth0` stack and it added clientId and clientSecret in a hashed format as part of the CLI setup. How do I add more secrets in a hashed format to the YAML file?
    g
    • 2
    • 1
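For reference, the CLI writes encrypted values into the stack YAML when `--secret` is used; the key name and value below are illustrative:

```sh
pulumi config set --secret auth0:apiToken "s3cr3t-value"
# Pulumi.<stack>.yaml then contains something like:
#   auth0:apiToken:
#     secure: AAABA...ciphertext...
```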
  • p

    powerful-noon-84115

    10/31/2022, 3:39 AM
Hi team, I want to add the Pulumi Tencent Cloud provider to the community package list. I created the pull request and hope the repo maintainers will notice it and send me feedback.
    e
    • 2
    • 1
  • m

    millions-furniture-75402

    10/31/2022, 3:38 PM
What pattern are folks using for duplicating and amending their default provider configuration? I need to declare a second provider with the same settings as the default, but with a different region. Deploying locally is different than in CI.
    let awsProviderDefaults;
    if (!process.env.AWS_ACCESS_KEY_ID) {
      aws.sdk.config.credentials = new aws.sdk.SharedIniFileCredentials({ profile: awsConfig.get("profile") });
      awsProviderDefaults = { profile: awsConfig.get("profile") };
    } else {
      awsProviderDefaults = {
        accessKey: process.env.AWS_ACCESS_KEY_ID,
        secretKey: process.env.AWS_SECRET_ACCESS_KEY,
        token: aws.sdk.config.sessionToken,
      };
    }
    const awsUsEast1 = new aws.Provider("east", {
      region: "us-east-1",
      ...awsProviderDefaults,
    });
    l
    • 2
    • 14
  • s

    salmon-motherboard-78006

    10/31/2022, 7:45 PM
Hi, I’m trying to set up an MWAA environment and I’m stuck 😞 Initially I was getting this error:
    aws:mwaa:Environment (dev-aqua-airflow):
        error: 1 error occurred:
        * error creating MWAA Environment: ValidationException: Failed to assume role arn:aws:iam::<account_id>:role/dev-airflow-execution-role. This could be due to the role's trust policy. Please ensure your role is assumable by the 'airflow-env.amazonaws.com' Service Principal and try again.
    And this is what my execution role looks like:
    mwaa_execution_role = aws.iam.Role(f"{stack}-airflow-execution-role",
                                       name=f"{stack}-airflow-execution-role",
                                       assume_role_policy=json.dumps({
                                           "Version": "2012-10-17",
                                           "Statement": [
                                               {
                                                   "Action": "sts:AssumeRole",
                                                   "Principal": {
                                                       "Service": [
                                                           "airflow.amazonaws.com",
                                                           "airflow-env.amazonaws.com"
                                                       ],
                                                   },
                                                   "Effect": "Allow",
                                               },
                                           ]
                                       }))
    I then decided to create the S3 bucket, role and policies before creating the MWAA Environment and this is the error I’m getting now:
    aws:mwaa:Environment (dev-aqua-airflow):
        error: 1 error occurred:
            * creating urn:pulumi:dev::data-ml-airflow::aws:mwaa/environment:Environment::dev-aqua-airflow: 1 error occurred:
            * error waiting for MWAA Environment (dev-aqua-airflow-dd6bc3e) creation: unexpected state 'CREATE_FAILED', wanted target 'AVAILABLE'. last error: %!s(<nil>)
    When I go to the AWS Console, this is the error I see:
    Error code
    INCORRECT_CONFIGURATION
    Message
You may need to check the execution role permissions policy for your environment, and that each of the VPC networking components required by the environment are configured to allow traffic. Troubleshooting: https://docs.aws.amazon.com/mwaa/latest/userguide/troubleshooting.html
    And this is my Pulumi MWAA code:
    airflow_env = aws.mwaa.Environment(f"{stack}-aqua-airflow",
                                       dag_s3_path="dags/",
                                       execution_role_arn=mwaa_execution_role.arn,
                                       airflow_version='2.2.2',
                                       kms_key=mwaa_kms_key.arn,
                                       logging_configuration=aws.mwaa.EnvironmentLoggingConfigurationArgs(
                                           dag_processing_logs=aws.mwaa.EnvironmentLoggingConfigurationDagProcessingLogsArgs(
                                               enabled=True,
                                               log_level="DEBUG",
                                           ),
                                           scheduler_logs=aws.mwaa.EnvironmentLoggingConfigurationSchedulerLogsArgs(
                                               enabled=True,
                                               log_level="INFO",
                                           ),
                                           task_logs=aws.mwaa.EnvironmentLoggingConfigurationTaskLogsArgs(
                                               enabled=True,
                                               log_level="WARNING",
                                           ),
                                           webserver_logs=aws.mwaa.EnvironmentLoggingConfigurationWebserverLogsArgs(
                                               enabled=True,
                                               log_level="ERROR",
                                           ),
                                           worker_logs=aws.mwaa.EnvironmentLoggingConfigurationWorkerLogsArgs(
                                               enabled=True,
                                               log_level="CRITICAL",
                                           ),
                                       ),
                                       network_configuration=aws.mwaa.EnvironmentNetworkConfigurationArgs(
                                           security_group_ids=[vpc["vpcDefaultSecurityGroupID"]],
                                           subnet_ids=[vpc["privateSubnetsIDs"][0], vpc["privateSubnetsIDs"][1]],
                                       ),
                                       source_bucket_arn=airflow_dags_bucket.arn,
                                       tags={
                                           "Environment": f"{stack}",
                                       },
                                       opts=ResourceOptions(
                                           depends_on=[mwaa_execution_role, mwaa_kms_key, airflow_dags_bucket]))
    Any idea what I’m doing incorrectly? I’m trying to look into this:
    That your Amazon VPC is configured to allow network traffic between the different AWS resources used by your Amazon MWAA environment, as defined in About networking on Amazon MWAA. For example, your VPC security group must either allow all traffic in a self-referencing rule, or optionally specify the port range for HTTPS port range 443 and a TCP port range 5432.
    • 1
    • 3
  • f

    fierce-horse-21860

    10/31/2022, 8:21 PM
I set up a pre-aggregation on a Snowflake source database. However, I am getting an error stating 'Insufficient privileges to operate on database':
    Performing query: 0d3bb4b3-7837-4420-b91e-3334042f2ba1-span-1 
    Error while querying: 0d3bb4b3-7837-4420-b91e-3334042f2ba1-span-1 (3702ms)
    {
      "processingId": 1,
      "queueSize": 1,
      "queryKey": [
        [
          "CREATE TABLE arch_council_app.cube_aws_billing_cost_by_account AS SELECT\n      \"cube_aws_billing\".\"ACCOUNT_ALIAS\" \"cube_aws_billing__account_alias\", date_trunc('MONTH', CONVERT_TIMEZONE('UTC', \"cube_aws_billing\".\"BILL_DATE\"::timestamp_tz)::timestamp_ntz) \"cube_aws_billing__bill_date_month\", sum(\"cube_aws_billing\".\"SERVICE_COST\") \"cube_aws_billing__service_cost\"\n    FROM\n      \"ARCH_COUNCIL_APP\".\"AWS_BILLING\" AS \"cube_aws_billing\"  GROUP BY 1, 2",
          []
        ],
        [
          [
            {
              "refresh_key": "463123"
            }
          ]
        ]
      ],
      "queuePrefix": "SQL_PRE_AGGREGATIONS_STANDALONE_default",
      "timeInQueue": 1,
      "preAggregationId": "cube_aws_billing.cost_by_account",
      "newVersionEntry": {
        "table_name": "arch_council_app.cube_aws_billing_cost_by_account",
        "structure_version": "ulrf25hc",
        "content_version": "bxxvnrki",
        "last_updated_at": 1667246128534,
        "naming_version": 2
      },
      "preAggregation": {
        "preAggregationId": "cube_aws_billing.cost_by_account",
        "timezone": "UTC",
        "timestampFormat": "YYYY-MM-DD[T]HH:mm:ss.SSS[Z]",
        "tableName": "arch_council_app.cube_aws_billing_cost_by_account",
        "invalidateKeyQueries": [
          [
            "SELECT FLOOR((UNIX_TIMESTAMP()) / 3600) as refresh_key",
            [],
            {
              "external": true,
              "renewalThreshold": 120
            }
          ]
        ],
        "type": "rollup",
        "external": true,
        "previewSql": [
          "SELECT * FROM arch_council_app.cube_aws_billing_cost_by_account LIMIT 1000",
          []
        ],
        "preAggregationsSchema": "arch_council_app",
        "loadSql": [
          "CREATE TABLE arch_council_app.cube_aws_billing_cost_by_account AS SELECT\n      \"cube_aws_billing\".\"ACCOUNT_ALIAS\" \"cube_aws_billing__account_alias\", date_trunc('MONTH', CONVERT_TIMEZONE('UTC', \"cube_aws_billing\".\"BILL_DATE\"::timestamp_tz)::timestamp_ntz) \"cube_aws_billing__bill_date_month\", sum(\"cube_aws_billing\".\"SERVICE_COST\") \"cube_aws_billing__service_cost\"\n    FROM\n      \"ARCH_COUNCIL_APP\".\"AWS_BILLING\" AS \"cube_aws_billing\"  GROUP BY 1, 2",
          []
        ],
        "sql": [
          "SELECT\n      \"cube_aws_billing\".\"ACCOUNT_ALIAS\" \"cube_aws_billing__account_alias\", date_trunc('MONTH', CONVERT_TIMEZONE('UTC', \"cube_aws_billing\".\"BILL_DATE\"::timestamp_tz)::timestamp_ntz) \"cube_aws_billing__bill_date_month\", sum(\"cube_aws_billing\".\"SERVICE_COST\") \"cube_aws_billing__service_cost\"\n    FROM\n      \"ARCH_COUNCIL_APP\".\"AWS_BILLING\" AS \"cube_aws_billing\"  GROUP BY 1, 2",
          []
        ],
        "uniqueKeyColumns": [
          "\"cube_aws_billing__account_alias\"",
          "\"cube_aws_billing__bill_date_month\""
        ],
        "aggregationsColumns": [
          "sum(\"cube_aws_billing__service_cost\")"
        ],
        "dataSource": "default",
        "granularity": "month",
        "preAggregationStartEndQueries": [
          [
            "select min(CONVERT_TIMEZONE('UTC', \"cube_aws_billing\".\"BILL_DATE\"::timestamp_tz)::timestamp_ntz) from \"ARCH_COUNCIL_APP\".\"AWS_BILLING\" AS \"cube_aws_billing\"",
            []
          ],
          [
            "select max(CONVERT_TIMEZONE('UTC', \"cube_aws_billing\".\"BILL_DATE\"::timestamp_tz)::timestamp_ntz) from \"ARCH_COUNCIL_APP\".\"AWS_BILLING\" AS \"cube_aws_billing\"",
            []
          ]
        ],
        "indexesSql": [],
        "createTableIndexes": [],
        "readOnly": false
      },
      "addedToQueueTime": 1667246128534
    } 
    OperationFailedError: SQL access control error:
    Insufficient privileges to operate on database 'GBI_OTHERS_DATA_ENG_DB'
        at createError (/cube/node_modules/snowflake-sdk/lib/errors.js:536:15)
        at Object.exports.createOperationFailedError (/cube/node_modules/snowflake-sdk/lib/errors.js:315:10)
        at Object.callback (/cube/node_modules/snowflake-sdk/lib/services/sf.js:647:28)
        at /cube/node_modules/snowflake-sdk/lib/http/base.js:111:25
        at done (/cube/node_modules/urllib/lib/urllib.js:589:5)
        at /cube/node_modules/urllib/lib/urllib.js:953:9
        at decodeContent (/cube/node_modules/urllib/lib/urllib.js:740:14)
        at handleResponseCloseAndEnd (/cube/node_modules/urllib/lib/urllib.js:924:7)
        at IncomingMessage.<anonymous> (/cube/node_modules/urllib/lib/urllib.js:962:7)
        at IncomingMessage.emit (events.js:412:35)
        at IncomingMessage.emit (domain.js:475:12)
        at endReadableNT (internal/streams/readable.js:1333:12)
        at processTicksAndRejections (internal/process/task_queues.js:82:21)
    Error querying db: 0d3bb4b3-7837-4420-b91e-3334042f2ba1-span-1 
    --
    "SELECT `cube_aws_billing__account_alias` `cube_aws_billing__account_alias`, sum(`cube_aws_billing__service_cost`) `cube_aws_billing__service_cost` FROM arch_council_app.cube_aws_billing_cost_by_account AS `cube_aws_billing__cost_by_account` GROUP BY 1 ORDER BY 2 DESC LIMIT 50"
    --
    {
      "params": []
    } 
    Error: SQL access control error:
    Insufficient privileges to operate on database 'GBI_OTHERS_DATA_ENG_DB'
        at QueryQueue.parseResult (/cube/node_modules/@cubejs-backend/query-orchestrator/src/orchestrator/QueryQueue.js:146:13)
        at QueryQueue.executeInQueue (/cube/node_modules/@cubejs-backend/query-orchestrator/src/orchestrator/QueryQueue.js:135:19)
        at processTicksAndRejections (internal/process/task_queues.js:95:5)
        at PreAggregationLoader.loadPreAggregationWithKeys (/cube/node_modules/@cubejs-backend/query-orchestrator/src/orchestrator/PreAggregations.ts:742:7)
        at preAggregationPromise (/cube/node_modules/@cubejs-backend/query-orchestrator/src/orchestrator/PreAggregations.ts:1946:28)
        at QueryOrchestrator.fetchQuery (/cube/node_modules/@cubejs-backend/query-orchestrator/src/orchestrator/QueryOrchestrator.ts:158:59)
        at OrchestratorApi.executeQuery (/cube/node_modules/@cubejs-backend/server-core/src/core/OrchestratorApi.ts:85:20)
        at /cube/node_modules/@cubejs-backend/api-gateway/src/gateway.ts:1230:21
        at async Promise.all (index 0)
        at ApiGateway.getSqlResponseInternal (/cube/node_modules/@cubejs-backend/api-gateway/src/gateway.ts:1228:31)
        at /cube/node_modules/@cubejs-backend/api-gateway/src/gateway.ts:1357:28
        at async Promise.all (index 0)
        at ApiGateway.load (/cube/node_modules/@cubejs-backend/api-gateway/src/gateway.ts:1348:23)
        at /cube/node_modules/@cubejs-backend/api-gateway/src/sql-server.ts:101:13
    Orchestrator error: 0d3bb4b3-7837-4420-b91e-3334042f2ba1-span-1 (3832ms)
    --
    {
      "measures": [
        "cube_aws_billing.service_cost"
      ],
      "dimensions": [
        "cube_aws_billing.account_alias"
      ],
      "segments": [],
      "order": [
        [
          "cube_aws_billing.service_cost",
          "desc"
        ]
      ],
      "limit": 50
    }
    --
    {
      "securityContext": {},
      "appName": "NULL",
      "protocol": "postgres",
      "apiType": "sql"
    } 
    Error: SQL access control error:
    Insufficient privileges to operate on database 'GBI_OTHERS_DATA_ENG_DB'
    2022-10-31 19:55:32,267 ERROR [cubejs_native::transport] [transport] load - strange response, success which contains error: V1Error { error: "Error: SQL access control error:\nInsufficient privileges to operate on database 'GBI_OTHERS_DATA_ENG_DB'" }
    Cube SQL Error: undefined 
    {
      "apiType": "sql",
      "protocol": "postgres",
      "appName": "NULL"
    } 
    Error during processing PostgreSQL message: Internal: Execution error: Internal: Error: SQL access control error:
    Insufficient privileges to operate on database 'GBI_OTHERS_DATA_ENG_DB'
However, I am able to execute the CREATE TABLE AS query with
CUBEJS_DB_USER=gbi_others_data_eng_db_arch_council_user
    CREATE TABLE arch_council_app.cube_aws_billing_cost_by_account AS SELECT 
    cube_aws_billing.ACCOUNT_ALIAS cube_aws_billing__account_alias, 
    date_trunc('MONTH', CONVERT_TIMEZONE('UTC', cube_aws_billing.BILL_DATE::timestamp_tz)::timestamp_ntz) cube_aws_billing__bill_date_month, 
    sum(cube_aws_billing.SERVICE_COST) cube_aws_billing__service_cost   
    FROM ARCH_COUNCIL_APP.AWS_BILLING AS cube_aws_billing  
    GROUP BY 1, 2
    What am I missing here?
    s
    • 2
    • 5
  • l

    little-whale-73288

    11/01/2022, 8:39 AM
Hi, https://raw.githubusercontent.com/pulumi/docs/master/data/versions.json does not contain `v3.45.0` yet, so I can't install it using https://github.com/pulumi/setup-pulumi. Is this WAI (working as intended)?
    e
    • 2
    • 1
  • o

    orange-airport-64592

    11/01/2022, 9:58 AM
Hi everyone, I would like to know the difference between the following two packages and their positioning. I see a big difference in their interfaces; some options exist in the first but not the second:
from pulumi_aws.apigateway import RestApi
from pulumi_aws_apigateway import RestAPI
    e
    c
    • 3
    • 3
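As I understand it (worth confirming against each package's docs), the two imports are at different levels of abstraction, which explains the different interfaces:

```python
# Raw resource from the classic AWS provider (maps 1:1 to the cloud API):
from pulumi_aws.apigateway import RestApi

# Higher-level component from the separate pulumi-aws-apigateway package;
# it creates the underlying RestApi, Deployment, Stage, etc. for you:
from pulumi_aws_apigateway import RestAPI
```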
  • h

    hallowed-train-1850

    11/01/2022, 1:44 PM
Hi folks, running into an issue with the pulumi-cdk / aws-native provider. It's the same issue here: https://github.com/pulumi/pulumi-aws-native/issues/610
Does anyone have any workarounds they can suggest?
• 1
• 1