flat-umbrella-41594
10/26/2022, 2:05 PM
green-musician-49057
10/26/2022, 4:04 PM
Updating `cleanup.policy` via the config yields this error, with log verbosity set to 11:
I1026 06:38:44.052086 38298 provider_plugin.go:1617] provider received rpc error `Unknown`: `updating urn:pulumi:stack::project::kafka:index/topic:Topic::my.topic.name: 1 error occurred:
* Error waiting for topic (my.topic.name) to become ready: couldn't find resource (21 retries)
We know that the provider is able to communicate with the brokers, and CRUD operations on ACLs work fine.
broad-toddler-72261
10/26/2022, 5:58 PM
straight-arm-50771
10/26/2022, 6:02 PM
<https://get.pulumi.com/> going to pull down v3.44.2?
The `fatal error: concurrent map read and map write` has been driving me crazy.
polite-ocean-13631
10/26/2022, 7:39 PM
`remote`, which is described in the Python docstring as: "True if this is a remote component resource."
What does it mean for something to be a "remote component resource"? I wasn't able to find any Pulumi docs that mention this.
cuddly-magician-97620
10/26/2022, 7:45 PM
steep-toddler-94095
10/26/2022, 9:17 PM
When I run `pulumi preview` it says there is an update, but then when I view the details there is nothing displayed (as expected, because there's not actually any diff). Is this a bug, or is this how this package is supposed to work when the `update` parameter is filled out?
wet-noon-14291
10/26/2022, 10:02 PM
`pulumi up` is failing by being "killed"; it happens all the time now in one of our projects:
➜ deploy git:(deps/minimist_1.2.7) ✗ pulumi up
View Live: https://.....
[1] 3433262 killed pulumi up
➜ deploy git:(deps/minimist_1.2.7) ✗
clever-rose-11123
10/27/2022, 1:02 AM
proud-art-41399
10/27/2022, 7:54 AM
We have an `infra` stack which provides basic resources for the rest of the stacks. One example is an ACM certificate which is managed by the `infra` stack and used e.g. in an `api` stack. Now when I update the `infra` stack, it tries to replace the ACM certificate. It creates the new certificate but fails to delete the old one with a `ResourceInUseException`, because the certificate is in use by the resources managed by the `api` stack (via a stack reference). I have to deploy the dependent stacks so they use the new certificate, and then re-deploy the `infra` stack.
Does this have any "standard" solution? I'm thinking of using S3 bucket notifications which would trigger a Lambda function when the `infra` stack (backed by an S3 bucket) is updated, re-deploy the dependent stacks, and then retry the deployment of the `infra` stack. But maybe there's a more elegant way.
bumpy-laptop-30846
10/27/2022, 10:00 AM
damp-honey-93158
10/27/2022, 10:57 AM
fierce-engine-31599
10/27/2022, 1:15 PM
echoing-boots-57590
10/27/2022, 8:59 PM
damp-honey-93158
10/28/2022, 4:58 AM
orange-airport-64592
10/28/2022, 8:17 AM
For those resources manually created in the production environment, I first generate the code through `pulumi import`. Then, I use the same code but a different state to create resources, and this new state is connected to my new staging environment. I did some tests and had the following doubts and uncertainties:
1. I found that these imported codes contain `ARN` attributes, and the ARN is bound to account information. But even so, most resources can still be created successfully without making any changes, except for S3 buckets, for which I need to modify the bucket name property.
2. I'm not sure, for these imported codes, which attributes I can modify without affecting the original prod environment, and which attributes I mustn't modify. (I want one set of code to fit both environments.)
Is my plan suitable, and is there a better official one?
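One common way to make a single code base fit both environments is to derive the per-environment attributes (such as globally unique S3 bucket names) from the stack name. A minimal sketch with illustrative names; in a real Pulumi program the stack name would come from `pulumi.get_stack()`:

```python
# Sketch: derive environment-specific values from the stack name so the same
# code can serve prod and staging. Only globally unique attributes (such as
# S3 bucket names) need to differ; the base name here is illustrative.
def bucket_name(base: str, stack: str) -> str:
    # e.g. "my-app-assets" + "staging" -> "my-app-assets-staging"
    return f"{base}-{stack}"

print(bucket_name("my-app-assets", "staging"))
```

Account-bound outputs such as ARNs should not be hard-coded at all; they are recomputed per stack when the resource is created in the new account.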
acceptable-xylophone-97331
10/28/2022, 1:21 PM
acceptable-xylophone-97331
10/28/2022, 1:30 PM
miniature-receptionist-24463
10/28/2022, 3:20 PM
kind-country-41992
10/28/2022, 4:34 PM
curved-kitchen-23422
10/28/2022, 4:37 PM
error: Domain resource has a problem: expected ebs_options.0.volume_type to be one of [standard gp2 io1], got gp3. Examine values at `Domain.EbsOptions.VolumeType`.
Based on the AWS docs, the r6g.large.search instance type supports gp3 volumes, and using the console we are able to see the gp3 option, but using Pulumi we get this error. Can anyone help resolve this issue? Thanks in advance.
cuddly-magician-97620
10/28/2022, 9:50 PM
One of the `pulumi/aws` updates (somewhere between 4.0.0 and 5.18.0) has reversed the `skip_final_snapshot` implicit default. It is now `false` if not defined explicitly. At the same time, `finalSnapshotIdentifier` is not a required input for the `aws.rds.Instance` resource.
You are setting people up for trouble with this. Creating an `aws.rds.Instance` resource with the minimum required inputs results in `skipFinalSnapshot: false` and an empty `finalSnapshotIdentifier` attribute. Try to destroy or replace such a DB, and Pulumi barks `final_snapshot_identifier is required when skip_final_snapshot is false`. Fair enough, except it should then be required at DB creation time, and it is not.
rhythmic-tailor-1242
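A defensive sketch of the point above, assuming pulumi-aws 5.x; resource names and values are illustrative, and this shows one way to avoid the destroy-time error rather than an official recommendation:

```python
import pulumi_aws as aws

# Explicitly decide the snapshot behavior at creation time, so destroy/replace
# does not later fail with "final_snapshot_identifier is required when
# skip_final_snapshot is false".
db = aws.rds.Instance(
    "example-db",
    engine="postgres",
    instance_class="db.t3.micro",
    allocated_storage=20,
    username="exampleuser",
    password="example-password",  # illustrative; use a Pulumi secret in practice
    skip_final_snapshot=True,     # or set final_snapshot_identifier instead
)
```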
10/30/2022, 9:33 PM
I set up an `auth0` stack and it added clientId and clientSecret in a hashed format as part of the CLI setup.
How do I add more secrets in a hashed format to the yaml file?
powerful-noon-84115
10/31/2022, 3:39 AM
millions-furniture-75402
10/31/2022, 3:38 PM
let awsProviderDefaults;
if (!process.env.AWS_ACCESS_KEY_ID) {
    // No static credentials in the environment: fall back to a named profile
    // from the shared credentials file.
    aws.sdk.config.credentials = new aws.sdk.SharedIniFileCredentials({ profile: awsConfig.get("profile") });
    awsProviderDefaults = { profile: awsConfig.get("profile") };
} else {
    // Static credentials are present: pass them through to the provider.
    awsProviderDefaults = {
        accessKey: process.env.AWS_ACCESS_KEY_ID,
        secretKey: process.env.AWS_SECRET_ACCESS_KEY,
        token: aws.sdk.config.sessionToken,
    };
}
const awsUsEast1 = new aws.Provider("east", {
    region: "us-east-1",
    ...awsProviderDefaults,
});
salmon-motherboard-78006
10/31/2022, 7:45 PM
aws:mwaa:Environment (dev-aqua-airflow):
error: 1 error occurred:
* error creating MWAA Environment: ValidationException: Failed to assume role arn:aws:iam::<account_id>:role/dev-airflow-execution-role. This could be due to the role's trust policy. Please ensure your role is assumable by 'airflow-env.amazonaws.com' Service Principal and try again.
And this is what my execution role looks like:
mwaa_execution_role = aws.iam.Role(f"{stack}-airflow-execution-role",
name=f"{stack}-airflow-execution-role",
assume_role_policy=json.dumps({
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": [
"airflow.amazonaws.com",
"airflow-env.amazonaws.com"
],
},
"Effect": "Allow",
},
]
}))
I then decided to create the S3 bucket, role and policies before creating the MWAA Environment and this is the error I’m getting now:
aws:mwaa:Environment (dev-aqua-airflow):
error: 1 error occurred:
* creating urn:pulumi:dev::data-ml-airflow::aws:mwaa/environment:Environment::dev-aqua-airflow: 1 error occurred:
* error waiting for MWAA Environment (dev-aqua-airflow-dd6bc3e) creation: unexpected state 'CREATE_FAILED', wanted target 'AVAILABLE'. last error: %!s(<nil>)
When I go to the AWS Console, this is the error I see:
Error code
INCORRECT_CONFIGURATION
Message
You may need to check the execution role permissions policy for your environment, and that each of the VPC networking components required by the environment are configured to allow traffic. Troubleshooting: <https://docs.aws.amazon.com/mwaa/latest/userguide/troubleshooting.html>
And this is my Pulumi MWAA code:
airflow_env = aws.mwaa.Environment(f"{stack}-aqua-airflow",
dag_s3_path="dags/",
execution_role_arn=mwaa_execution_role.arn,
airflow_version='2.2.2',
kms_key=mwaa_kms_key.arn,
logging_configuration=aws.mwaa.EnvironmentLoggingConfigurationArgs(
dag_processing_logs=aws.mwaa.EnvironmentLoggingConfigurationDagProcessingLogsArgs(
enabled=True,
log_level="DEBUG",
),
scheduler_logs=aws.mwaa.EnvironmentLoggingConfigurationSchedulerLogsArgs(
enabled=True,
log_level="INFO",
),
task_logs=aws.mwaa.EnvironmentLoggingConfigurationTaskLogsArgs(
enabled=True,
log_level="WARNING",
),
webserver_logs=aws.mwaa.EnvironmentLoggingConfigurationWebserverLogsArgs(
enabled=True,
log_level="ERROR",
),
worker_logs=aws.mwaa.EnvironmentLoggingConfigurationWorkerLogsArgs(
enabled=True,
log_level="CRITICAL",
),
),
network_configuration=aws.mwaa.EnvironmentNetworkConfigurationArgs(
security_group_ids=[vpc["vpcDefaultSecurityGroupID"]],
subnet_ids=[vpc["privateSubnetsIDs"][0], vpc["privateSubnetsIDs"][1]],
),
source_bucket_arn=airflow_dags_bucket.arn,
tags={
"Environment": f"{stack}",
},
opts=ResourceOptions(
depends_on=[mwaa_execution_role, mwaa_kms_key, airflow_dags_bucket]))
Any idea what I’m doing incorrectly?
I’m trying to look into this:
That your Amazon VPC is configured to allow network traffic between the different AWS resources used by your Amazon MWAA environment, as defined in About networking on Amazon MWAA. For example, your VPC security group must either allow all traffic in a self-referencing rule, or optionally specify the port range for HTTPS port range 443 and a TCP port range 5432.
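The trust policy from the first snippet above can be sanity-checked outside Pulumi. A minimal sketch in plain Python; the service principals follow the ValidationException quoted earlier, which requires `airflow-env.amazonaws.com` (with `airflow.amazonaws.com` commonly included as well):

```python
import json

# The MWAA execution role must be assumable by the "airflow-env.amazonaws.com"
# service principal; the error message above calls this out explicitly.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "sts:AssumeRole",
            "Principal": {
                "Service": [
                    "airflow.amazonaws.com",
                    "airflow-env.amazonaws.com",
                ],
            },
            "Effect": "Allow",
        },
    ],
}

# This is the document the question's code passes to assume_role_policy.
document = json.dumps(trust_policy)
services = trust_policy["Statement"][0]["Principal"]["Service"]
print(services)
```

If the role itself is correct, the remaining CREATE_FAILED cause is usually the VPC networking requirement quoted above (self-referencing security group rule, or ports 443 and 5432 open).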
fierce-horse-21860
10/31/2022, 8:21 PM
Performing query: 0d3bb4b3-7837-4420-b91e-3334042f2ba1-span-1
Error while querying: 0d3bb4b3-7837-4420-b91e-3334042f2ba1-span-1 (3702ms)
{
"processingId": 1,
"queueSize": 1,
"queryKey": [
[
"CREATE TABLE arch_council_app.cube_aws_billing_cost_by_account AS SELECT\n \"cube_aws_billing\".\"ACCOUNT_ALIAS\" \"cube_aws_billing__account_alias\", date_trunc('MONTH', CONVERT_TIMEZONE('UTC', \"cube_aws_billing\".\"BILL_DATE\"::timestamp_tz)::timestamp_ntz) \"cube_aws_billing__bill_date_month\", sum(\"cube_aws_billing\".\"SERVICE_COST\") \"cube_aws_billing__service_cost\"\n FROM\n \"ARCH_COUNCIL_APP\".\"AWS_BILLING\" AS \"cube_aws_billing\" GROUP BY 1, 2",
[]
],
[
[
{
"refresh_key": "463123"
}
]
]
],
"queuePrefix": "SQL_PRE_AGGREGATIONS_STANDALONE_default",
"timeInQueue": 1,
"preAggregationId": "cube_aws_billing.cost_by_account",
"newVersionEntry": {
"table_name": "arch_council_app.cube_aws_billing_cost_by_account",
"structure_version": "ulrf25hc",
"content_version": "bxxvnrki",
"last_updated_at": 1667246128534,
"naming_version": 2
},
"preAggregation": {
"preAggregationId": "cube_aws_billing.cost_by_account",
"timezone": "UTC",
"timestampFormat": "YYYY-MM-DD[T]HH:mm:ss.SSS[Z]",
"tableName": "arch_council_app.cube_aws_billing_cost_by_account",
"invalidateKeyQueries": [
[
"SELECT FLOOR((UNIX_TIMESTAMP()) / 3600) as refresh_key",
[],
{
"external": true,
"renewalThreshold": 120
}
]
],
"type": "rollup",
"external": true,
"previewSql": [
"SELECT * FROM arch_council_app.cube_aws_billing_cost_by_account LIMIT 1000",
[]
],
"preAggregationsSchema": "arch_council_app",
"loadSql": [
"CREATE TABLE arch_council_app.cube_aws_billing_cost_by_account AS SELECT\n \"cube_aws_billing\".\"ACCOUNT_ALIAS\" \"cube_aws_billing__account_alias\", date_trunc('MONTH', CONVERT_TIMEZONE('UTC', \"cube_aws_billing\".\"BILL_DATE\"::timestamp_tz)::timestamp_ntz) \"cube_aws_billing__bill_date_month\", sum(\"cube_aws_billing\".\"SERVICE_COST\") \"cube_aws_billing__service_cost\"\n FROM\n \"ARCH_COUNCIL_APP\".\"AWS_BILLING\" AS \"cube_aws_billing\" GROUP BY 1, 2",
[]
],
"sql": [
"SELECT\n \"cube_aws_billing\".\"ACCOUNT_ALIAS\" \"cube_aws_billing__account_alias\", date_trunc('MONTH', CONVERT_TIMEZONE('UTC', \"cube_aws_billing\".\"BILL_DATE\"::timestamp_tz)::timestamp_ntz) \"cube_aws_billing__bill_date_month\", sum(\"cube_aws_billing\".\"SERVICE_COST\") \"cube_aws_billing__service_cost\"\n FROM\n \"ARCH_COUNCIL_APP\".\"AWS_BILLING\" AS \"cube_aws_billing\" GROUP BY 1, 2",
[]
],
"uniqueKeyColumns": [
"\"cube_aws_billing__account_alias\"",
"\"cube_aws_billing__bill_date_month\""
],
"aggregationsColumns": [
"sum(\"cube_aws_billing__service_cost\")"
],
"dataSource": "default",
"granularity": "month",
"preAggregationStartEndQueries": [
[
"select min(CONVERT_TIMEZONE('UTC', \"cube_aws_billing\".\"BILL_DATE\"::timestamp_tz)::timestamp_ntz) from \"ARCH_COUNCIL_APP\".\"AWS_BILLING\" AS \"cube_aws_billing\"",
[]
],
[
"select max(CONVERT_TIMEZONE('UTC', \"cube_aws_billing\".\"BILL_DATE\"::timestamp_tz)::timestamp_ntz) from \"ARCH_COUNCIL_APP\".\"AWS_BILLING\" AS \"cube_aws_billing\"",
[]
]
],
"indexesSql": [],
"createTableIndexes": [],
"readOnly": false
},
"addedToQueueTime": 1667246128534
}
OperationFailedError: SQL access control error:
Insufficient privileges to operate on database 'GBI_OTHERS_DATA_ENG_DB'
at createError (/cube/node_modules/snowflake-sdk/lib/errors.js:536:15)
at Object.exports.createOperationFailedError (/cube/node_modules/snowflake-sdk/lib/errors.js:315:10)
at Object.callback (/cube/node_modules/snowflake-sdk/lib/services/sf.js:647:28)
at /cube/node_modules/snowflake-sdk/lib/http/base.js:111:25
at done (/cube/node_modules/urllib/lib/urllib.js:589:5)
at /cube/node_modules/urllib/lib/urllib.js:953:9
at decodeContent (/cube/node_modules/urllib/lib/urllib.js:740:14)
at handleResponseCloseAndEnd (/cube/node_modules/urllib/lib/urllib.js:924:7)
at IncomingMessage.<anonymous> (/cube/node_modules/urllib/lib/urllib.js:962:7)
at IncomingMessage.emit (events.js:412:35)
at IncomingMessage.emit (domain.js:475:12)
at endReadableNT (internal/streams/readable.js:1333:12)
at processTicksAndRejections (internal/process/task_queues.js:82:21)
Error querying db: 0d3bb4b3-7837-4420-b91e-3334042f2ba1-span-1
--
"SELECT `cube_aws_billing__account_alias` `cube_aws_billing__account_alias`, sum(`cube_aws_billing__service_cost`) `cube_aws_billing__service_cost` FROM arch_council_app.cube_aws_billing_cost_by_account AS `cube_aws_billing__cost_by_account` GROUP BY 1 ORDER BY 2 DESC LIMIT 50"
--
{
"params": []
}
Error: SQL access control error:
Insufficient privileges to operate on database 'GBI_OTHERS_DATA_ENG_DB'
at QueryQueue.parseResult (/cube/node_modules/@cubejs-backend/query-orchestrator/src/orchestrator/QueryQueue.js:146:13)
at QueryQueue.executeInQueue (/cube/node_modules/@cubejs-backend/query-orchestrator/src/orchestrator/QueryQueue.js:135:19)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at PreAggregationLoader.loadPreAggregationWithKeys (/cube/node_modules/@cubejs-backend/query-orchestrator/src/orchestrator/PreAggregations.ts:742:7)
at preAggregationPromise (/cube/node_modules/@cubejs-backend/query-orchestrator/src/orchestrator/PreAggregations.ts:1946:28)
at QueryOrchestrator.fetchQuery (/cube/node_modules/@cubejs-backend/query-orchestrator/src/orchestrator/QueryOrchestrator.ts:158:59)
at OrchestratorApi.executeQuery (/cube/node_modules/@cubejs-backend/server-core/src/core/OrchestratorApi.ts:85:20)
at /cube/node_modules/@cubejs-backend/api-gateway/src/gateway.ts:1230:21
at async Promise.all (index 0)
at ApiGateway.getSqlResponseInternal (/cube/node_modules/@cubejs-backend/api-gateway/src/gateway.ts:1228:31)
at /cube/node_modules/@cubejs-backend/api-gateway/src/gateway.ts:1357:28
at async Promise.all (index 0)
at ApiGateway.load (/cube/node_modules/@cubejs-backend/api-gateway/src/gateway.ts:1348:23)
at /cube/node_modules/@cubejs-backend/api-gateway/src/sql-server.ts:101:13
Orchestrator error: 0d3bb4b3-7837-4420-b91e-3334042f2ba1-span-1 (3832ms)
--
{
"measures": [
"cube_aws_billing.service_cost"
],
"dimensions": [
"cube_aws_billing.account_alias"
],
"segments": [],
"order": [
[
"cube_aws_billing.service_cost",
"desc"
]
],
"limit": 50
}
--
{
"securityContext": {},
"appName": "NULL",
"protocol": "postgres",
"apiType": "sql"
}
Error: SQL access control error:
Insufficient privileges to operate on database 'GBI_OTHERS_DATA_ENG_DB'
2022-10-31 19:55:32,267 ERROR [cubejs_native::transport] [transport] load - strange response, success which contains error: V1Error { error: "Error: SQL access control error:\nInsufficient privileges to operate on database 'GBI_OTHERS_DATA_ENG_DB'" }
Cube SQL Error: undefined
{
"apiType": "sql",
"protocol": "postgres",
"appName": "NULL"
}
Error during processing PostgreSQL message: Internal: Execution error: Internal: Error: SQL access control error:
Insufficient privileges to operate on database 'GBI_OTHERS_DATA_ENG_DB'
However, I am able to execute the CREATE TABLE AS query manually with the same user,
CUBEJS_DB_USER=gbi_others_data_eng_db_arch_council_user:
CREATE TABLE arch_council_app.cube_aws_billing_cost_by_account AS SELECT
cube_aws_billing.ACCOUNT_ALIAS cube_aws_billing__account_alias,
date_trunc('MONTH', CONVERT_TIMEZONE('UTC', cube_aws_billing.BILL_DATE::timestamp_tz)::timestamp_ntz) cube_aws_billing__bill_date_month,
sum(cube_aws_billing.SERVICE_COST) cube_aws_billing__service_cost
FROM ARCH_COUNCIL_APP.AWS_BILLING AS cube_aws_billing
GROUP BY 1, 2
What am I missing here?
little-whale-73288
11/01/2022, 8:39 AM
There's no release of `v3.45.0` yet, so I can't install it using https://github.com/pulumi/setup-pulumi; is this WAI (working as intended)?
orange-airport-64592
11/01/2022, 9:58 AM
from pulumi_aws.apigateway import RestApi
from pulumi_aws_apigateway import RestAPI
hallowed-train-1850
11/01/2022, 1:44 PM
hallowed-train-1850
11/01/2022, 1:44 PM