worried-helmet-23171
06/14/2022, 12:28 AMworried-helmet-23171
06/14/2022, 12:29 AMcool-computer-40158
06/14/2022, 9:23 AMcold-orange-37453
06/15/2022, 12:50 PM
> InputDiff is true if this is a difference between old and new inputs rather than old state and new inputs.
Am I correct to understand that whenever there is some external change made to a resource, for example a tag is changed on an AWS SQS queue, the InputDiff for that change will be False, whereas if I change the tag for the AWS SQS queue in Pulumi code, then InputDiff will be True?
plain-pillow-11037
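For anyone wanting to see the flag in practice, here is a rough sketch (Python Automation API) of an event callback that surfaces it. It assumes the event classes expose `detailed_diff` on `StepEventMetadata` and an `input_diff` field on each `PropertyDiff`, mirroring the engine's event schema:
```python
def show_diffs(event):
    # Hypothetical on_event callback for stack.preview()/stack.up(): walk the
    # detailed diff attached to a resource step and print each property's
    # DiffKind together with its input_diff flag.
    pre = event.resource_pre_event
    if pre and pre.metadata and pre.metadata.detailed_diff:
        for prop, diff in pre.metadata.detailed_diff.items():
            print(prop, diff.kind, "input_diff:", diff.input_diff)

# stack.preview(on_event=show_diffs)   # or stack.up(on_event=show_diffs)
```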
06/22/2022, 6:38 PMSome of my resources are marked protected. For the purposes of testing tear down/resource cleanup I need to unprotect the resources before destroying the stack. What's the best way to do this?
ancient-solstice-53934
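One approach, sketched below under the assumption that the program threads a hypothetical protect config value into pulumi.ResourceOptions(protect=...): flip the config off, run one more up so the protect flag is cleared in state, then destroy. Outside the Automation API, pulumi state unprotect --all does the same from the CLI.
```python
from pulumi import automation as auto

def teardown(stack: auto.Stack):
    # Assumes the inline program reads a hypothetical "protect" config value
    # and passes it to pulumi.ResourceOptions(protect=...) on its resources.
    stack.set_config("protect", auto.ConfigValue(value="false"))
    stack.up(on_output=print)       # re-applies resources with protect cleared
    stack.destroy(on_output=print)  # destroy is now allowed to delete them
```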
06/23/2022, 12:22 PMlemon-lamp-41193
07/01/2022, 2:24 AM
# New Pulumi Project
EngineEvent(
    ...
    resource_pre_event=ResourcePreEvent(
        metadata=StepEventMetadata(
            op=<OpType.SAME: 'same'>,
            urn='urn:pulumi:dev::...

# Old Pulumi Project
EngineEvent(
    ...
    resource_pre_event=ResourcePreEvent(
        metadata={
            'op': 'same',
            'urn': 'urn:pulumi:dev::...
mammoth-garden-53682
07/01/2022, 11:38 PMGetting this error from auto.GitRepo:
failed to create stack: failed to create workspace, unable to enlist in git repo: unable to checkout branch: reference not found
I'm using a PAT. Config is straight from the example and looks like this:
auth := &auto.GitAuth{
    PersonalAccessToken: cfg.AuthToken,
}
repo := auto.GitRepo{
    Auth:   auth,
    URL:    cfg.Repo,
    Branch: cfg.Branch,
}
ctx := context.Background()
s, err := auto.NewStackRemoteSource(ctx, "dev", repo)
If I don’t specify branch then it works…quiet-laptop-13439
07/11/2022, 10:20 AMquiet-laptop-13439
07/27/2022, 9:51 AMacoustic-window-73051
07/27/2022, 3:38 PMmost-lighter-95902
07/28/2022, 3:56 AMmost-lighter-95902
07/28/2022, 3:56 AMThe Pulumi runtime detected that 148 promises were still active
at the time that the process exited. There are a few ways that this can occur:
* Not using `await` or `.then` on a Promise returned from a Pulumi API
* Introducing a cyclic dependency between two Pulumi Resources
* A bug in the Pulumi Runtime
most-lighter-95902
07/28/2022, 3:58 AMmammoth-garden-53682
07/28/2022, 7:09 PMI'm using optup/optdestroy.EventStreams to stream changes to one or more clients. The events are dispatched as expected, but when my operation completes the channel I provide is continuously spammed with events that have null values and no sequence. Marshaled event:
INFO[0028] { 'event': {"sequence":0,"timestamp":0,"Error":null} }
Is this…expected? Am I not able to use long-lived channels for event streams? My code is fairly straightforward:
...
ctx := context.Background()
upResp, err := stack.Up(ctx, optup.EventStreams(ec))
...
// and in another goroutine
...
for {
    select {
    case <-sl.Done:
        return
    case e := <-sl.ec:
        me, _ := json.Marshal(e)
        log.Infof("{ 'event': %s }", me)
    }
}
gorgeous-accountant-60580
07/28/2022, 9:02 PMkind-hamburger-15227
08/03/2022, 11:25 AMvictorious-memory-43562
08/05/2022, 2:23 AMminiature-leather-70472
08/08/2022, 3:07 PMcold-orange-37453
08/11/2022, 6:29 AMcold-orange-37453
08/11/2022, 7:54 AMsalmon-musician-20405
08/11/2022, 7:33 PMmicroscopic-postman-4756
08/12/2022, 8:11 PMhttps://www.youtube.com/watch?v=ZbZkzVGzB2k
kind-napkin-54965
08/19/2022, 11:53 AMI'm running up_res = stack.up(on_output=print). It outputs:
E0819 14:48:02.366739000 4372022656 fork_posix.cc:76] Other threads are currently calling into gRPC, skipping fork() handlers
Updating (<redacted>/dev)
View Live: https://app.pulumi.com/<redacted>/gcp-python/dev/updates/6
    pulumi:pulumi:Stack gcp-python-dev running
    gcp:storage:Bucket my-bucket-new
    pulumi:pulumi:Stack gcp-python-dev
Outputs:
    bucket_name: "gs://my-bucket-new-7aa5ca0"
Resources:
    2 unchanged
Duration: 1s
E0819 14:48:09.979133000 4372022656 fork_posix.cc:76] Other threads are currently calling into gRPC, skipping fork() handlers
E0819 14:48:11.208918000 4372022656 fork_posix.cc:76] Other threads are currently calling into gRPC, skipping fork() handlers
E0819 14:48:12.668303000 4372022656 fork_posix.cc:76] Other threads are currently calling into gRPC, skipping fork() handlers
Any ideas what's going on here? Running on zsh/mac m1.kind-napkin-54965
08/19/2022, 12:02 PMos.environ['GRPC_ENABLE_FORK_SUPPORT'] = "false".
bulky-yacht-13037
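For what it's worth, that variable generally has to be visible before the gRPC core initializes, so a common (unverified here) placement is the very top of the script, ahead of any pulumi imports:
```python
import os

# Set before importing pulumi / grpcio so the gRPC runtime sees it at init time.
os.environ["GRPC_ENABLE_FORK_SUPPORT"] = "false"

import pulumi                           # noqa: E402
from pulumi import automation as auto   # noqa: E402
```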
08/23/2022, 2:24 PMIs there another implementation of the auto.Workspace interface? I've had some issues with the existing auto.LocalWorkspace one (most probably because of lack of experience w/ Pulumi) and would really prefer a "native" implementation instead of relying on having the pulumi CLI installed on the systems we'll use it on.
silly-smartphone-71988
08/24/2022, 11:54 AMI'm creating stacks with LocalWorkspace.CreateOrSelectStackAsync, which works great. But is there a way to remove a stack (not destroy it) using pulumi.automation? Right now I have to do it using the CLI: pulumi stack rm
bulky-yacht-13037
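There is a workspace-level call for this. A rough sketch with the Python SDK (the other language SDKs expose an equivalent method on their workspace type); the stack name here is hypothetical:
```python
from pulumi import automation as auto

# Removes the stack and its config from the backend without touching any cloud
# resources, i.e. the Automation API counterpart of `pulumi stack rm`.
ws = auto.LocalWorkspace(work_dir=".")
ws.remove_stack("dev")
```
After stack.destroy(), calling stack.workspace.remove_stack(stack.name) is the usual cleanup sequence.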
08/24/2022, 12:02 PM
opts := []auto.LocalWorkspaceOption{
    ...
    auto.SecretsProvider("awskms://..."),
}
stack, err := auto.UpsertStackLocalSource(ctx, "dev", "example", opts...)
if err != nil {
    log.Fatalf("Failed to load dev stack: %s\n", err)
}
_, err = stack.Refresh(ctx, optrefresh.ProgressStreams(os.Stdout))
if err != nil {
    log.Fatalf("Failed to refresh dev stack: %s\n", err)
}
Results in the following error:
[...] passphrase must be set with PULUMI_CONFIG_PASSPHRASE or PULUMI_CONFIG_PASSPHRASE_FILE environment variables [...]
bulky-yacht-13037
08/24/2022, 2:58 PMabundant-vegetable-12924
08/24/2022, 8:32 PMSo far I've only needed aws:region and nothing else really, but the thing i'm working on has several config parameters....are people setting up [in python] configparser style .ini or .yaml files (maybe tsconfig.json if you're doing typescript?) and reading that in......or is there some pulumi magics? or something different? 🙂
millions-furniture-75402
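Pulumi's per-stack config can usually carry all of this without a separate .ini/.yaml. A small sketch (the key names are hypothetical, set beforehand with pulumi config set, or with stack.set_config when driving things through the Automation API):
```python
import pulumi

config = pulumi.Config()
instance_count = config.require_int("db_instance_count")    # pulumi config set db_instance_count 2
monitoring = config.get_bool("enable_monitoring") or False  # optional key with a default
db_settings = config.require_object("db")                   # structured values, set via `pulumi config set --path`
```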
08/24/2022, 9:00 PMbored-oyster-3147
08/26/2022, 7:31 PMabundant-vegetable-12924
08/26/2022, 7:33 PMbored-oyster-3147
08/26/2022, 7:38 PMabundant-vegetable-12924
08/26/2022, 7:42 PMTried config = pulumi.Config() ...unfortunately that doesn't work with an automation api "inline" context...you get the old "pulumi.errors.RunError: Program run without the Pulumi engine available; re-run using the pulumi CLI" error....will continue beating head against
bored-oyster-3147
08/29/2022, 4:26 PMmillions-furniture-75402
08/29/2022, 4:56 PMbored-oyster-3147
08/29/2022, 5:01 PMmillions-furniture-75402
08/29/2022, 5:01 PMbored-oyster-3147
08/29/2022, 5:30 PMabundant-vegetable-12924
08/29/2022, 5:41 PM
"""An AWS Python Pulumi program"""
import argparse
import os
import pulumi
from pulumi import log
from pulumi import automation as auto
import pulumi_aws as aws
import pulumi_random as random
parser = argparse.ArgumentParser()
parser.add_argument("-e", "--env", choices=["dev", "staging", "production"], required=True)
parser.add_argument("-f", "--function", choices=["bootstrap", "manage"], required=True)
parser.add_argument("-d", "--destroy", action='store_true', default=False)
args = parser.parse_args()
org = 'talentpair'
aws_region = 'us-west-2'
project_name="database"
log.info(f"function: {args.function}")
log.info(f"environment: {args.env}")

def ensure_plugins():
    log.info("loading plugins...")
    ws = auto.LocalWorkspace()
    ws.install_plugin("aws", "v5.5.0")
    ws.install_plugin("random", "v4.4.2")

ensure_plugins()
def bootstrap():
    rds_enhanced_monitoring_role = aws.iam.Role(
        "rds-enhanced-monitoring",
        assume_role_policy="""{
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "sts:AssumeRole",
                    "Principal": {
                        "Service": "monitoring.rds.amazonaws.com"
                    },
                    "Effect": "Allow",
                    "Sid": ""
                }
            ]
        }""",
        managed_policy_arns=[aws.iam.ManagedPolicy.AMAZON_RDS_ENHANCED_MONITORING_ROLE,],
    )
    # we create a db snapshot from the original database and
    # will use that to spin up the new instances
    if source_db_id is not None:
        source_db_instance = aws.rds.get_instance(db_instance_identifier=source_db_id)
        origin_snapshot = aws.rds.Snapshot(f"{args.env}-source-snapshot",
            db_instance_identifier=source_db_id,
            db_snapshot_identifier=f"{args.env}-source-snapshot"
        )
    postgresql_cluster = aws.rds.Cluster(
        f"postgresql-{args.env}",
        backup_retention_period=30,
        cluster_identifier=f"db-{args.env}",
        database_name="main",  # was "joe" in original RDS
        engine="aurora-postgresql",
        #master_password=aws.secretsmanager.get_secret_version(secret_id=secret.id),
        #master_password=secret_version.secret_string,
        #master_username=db_master_username,
        preferred_backup_window="01:00-04:00",
        preferred_maintenance_window="sat:05:00-sat:07:00",
        iam_database_authentication_enabled=True,
        engine_version=db_engine_version,
        skip_final_snapshot=True,
        snapshot_identifier=origin_snapshot.db_snapshot_arn,
        vpc_security_group_ids=db_security_groups,
    )
    cluster_instances = []
    for x in range(db_instance_count):
        cluster_instances.append(
            aws.rds.ClusterInstance(
                f"{args.env}-db-{x}",
                identifier=f"{args.env}-db-{x}",
                cluster_identifier=postgresql_cluster.id,
                instance_class=db_instance_class,
                engine=postgresql_cluster.engine,
                engine_version=postgresql_cluster.engine_version,
                monitoring_interval=db_monitoring_interval,
                performance_insights_enabled=db_perf_insights_enabled,
                publicly_accessible=False,
                monitoring_role_arn=rds_enhanced_monitoring_role.arn,
            )
        )
    pulumi.export("db_primary_endpoint", postgresql_cluster.endpoint)
    pulumi.export("db_reader_endpoint", postgresql_cluster.reader_endpoint)
    pulumi.export("cluster_id", postgresql_cluster.id)
def manage():
    secret = aws.secretsmanager.Secret(
        f"{stack}-db-master-password"
    )
    secret_version = aws.secretsmanager.SecretVersion(
        f"{stack}-db-master-password-version",
        secret_id=secret.id,
        #secret_string=random_password.result,
        secret_string="s00pers33cr3t",
        opts=pulumi.ResourceOptions(depends_on=[secret]),
    )
    # NOTE: we're using the cluster we created in bootstrap process here...it must exist or we get errors
    postgresql_cluster = aws.rds.Cluster(
        f"postgresql-{stack}",
        cluster_identifier=f"db-{stack}",
        database_name="main",  # was "joe" in original RDS
        engine="aurora-postgresql",
        #master_password=aws.secretsmanager.get_secret_version(secret_id=secret.id),
        master_password=secret_version.secret_string,
        master_username=db_master_username,
        skip_final_snapshot=True,
        opts=pulumi.ResourceOptions(import_=pulumi.StackReference(f"{org}/database/{stack}").get_output("cluster_id"))
    )

"""TODO: think the "manage database" path is working except for the fact that it attempts to use
the cluster created in the "bootstrap" phase and we haven't finished the bootstrap yet.
"""
set_stack_name = ( lambda function: bool(function == "bootstrap") and f"bootstrap-{args.env}" or args.env)
log.info(f"set_stack_name: {set_stack_name(args.function)}")
stack = auto.create_or_select_stack(stack_name=set_stack_name(args.function), project_name=project_name, program=args.function, work_dir=".")
if args.destroy:
    log.info("destroying stack...")
    stack.destroy(on_output=print)
    log.info("stack destroy complete")
    exit(0)
#log.info(f"stack.info: {stack.info()}")
#log.info(f"dir(stack): {dir(stack)}")
# This doesn't work...
# config = pulumi.Config()
# print(f"dir(config): {dir(config)}")
# ...
# get this error --> "pulumi.errors.RunError: Program run without the Pulumi engine available; re-run using the `pulumi` CLI"
stack.refresh(on_output=print)
stack.workspace.refresh_config("bootstrap-dev")
print(f"dir(stack.workspace): {dir(stack.workspace)}")
print(f"os.listdir(): {os.listdir()}")
print(f"stack.info(): {stack.info()}")
print(f"stack.name: {stack.name}")
print(f"stack.get_all_config(): {stack.get_all_config()}")
stack.workspace.refresh_config("bootstrap-dev")
# aws_account_number = config.require("aws_account_number")
# print(f"aws_account_number: {aws_account_number}")
#log.info(f"stack.get_all_config(): {stack.get_all_config()}")
#stack.up(on_output=print)
bored-oyster-3147
08/29/2022, 5:45 PM2. Pulumi.Config is meant to be used to access config from inside your pulumi program. So in your case it can only be used inside your bootstrap() or manage() function.
3. If you want to set config on the stack using Automation API, there should be stack.SetConfig functions or something along those lines.
4. You are not using an "inline program" function currently because you are not passing either the bootstrap() or manage() function as the program input on auto.create_or_select_stack. That's where you set the pulumi program to be invoked; right now you're passing the args.function property. So maybe that is working for you but I'm not getting it from reading this
abundant-vegetable-12924
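A minimal sketch of how points 2 through 4 fit together (Python SDK, with a hypothetical config key): the program function itself is passed as program=, config is set from the automation side, and pulumi.Config() is only called inside that function:
```python
import pulumi
from pulumi import automation as auto

def bootstrap():
    # Runs with the engine available, so pulumi.Config() works here.
    cfg = pulumi.Config()
    pulumi.export("region", cfg.require("aws_region"))   # hypothetical key

stack = auto.create_or_select_stack(
    stack_name="bootstrap-dev",
    project_name="database",
    program=bootstrap,                 # the function itself, not a string
)
stack.set_config("aws_region", auto.ConfigValue(value="us-west-2"))
stack.up(on_output=print)
```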
08/29/2022, 6:10 PMargs.function evaluates to either manage or bootstrap ....i'm probably the worst python practitioner you'll meet this week 😏
bored-oyster-3147
08/29/2022, 6:21 PMIt evaluates to "manage" or "bootstrap", and I'm not seeing how that translates to correctly invoking manage() and bootstrap(), but if that works for python then by all means. In C# you would need to actually pass the delegate.
> re 1: these are the same database instances..."bootstrap" creates them initially from snapshot/backup and then my hope is that i can use stack refs to manipulate the database cluster going forward...i guess i'm just trying to fully automate that bit...super cool that pulumi supports those sorts of things (at least, i think it does)
It may be the same database instance but it is not the same architecture. Stacks should be different instances of the same architecture. For instance if I have a prod cluster and I have a dev cluster, all of the surrounding resources will be the same, but I just want to be able to duplicate it. So these should be different stacks of the same project.
In your scenario, your fundamental resources declared in your pulumi program are different. They should be different projects.
Is your goal that you will then abandon the bootstrap() pulumi and manage() will take care of that cluster from here on out? Also, it looks like the ClusterInstances got lost along the way and are no longer managed.
abundant-vegetable-12924
08/29/2022, 6:55 PM
> Is your goal that you will then abandon the bootstrap() pulumi and manage() will take care of that cluster from here on out? (edited)
yes...exactly...i actually got the idea from pulumi support (Mitch G.), but looking back through his message, i can see that i was wrong about my approach....he suggested, essentially,
1. do the creation within a given stack with stack.up()
2. update the password with stack.set_config('new password')
3. do another stack.up()
...sooo, i guess that's why i was struggling...lesson learned, listen to Mitch 🙂
super helpful your walking me through that...i think i can probably run with it from here...thanks so much, again!
bored-oyster-3147
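A compact sketch of those three steps as a single inline program (names and values are hypothetical; the program reads the password from stack config as a secret):
```python
import pulumi
from pulumi import automation as auto

def program():
    cfg = pulumi.Config()
    # Hypothetical: the cluster's master_password would come from this secret.
    pulumi.export("password_set", cfg.require_secret("db_master_password").apply(lambda _: True))

stack = auto.create_or_select_stack("dev", project_name="database", program=program)
stack.set_config("db_master_password", auto.ConfigValue(value="initial", secret=True))
stack.up(on_output=print)                                                                # 1. first up
stack.set_config("db_master_password", auto.ConfigValue(value="rotated", secret=True))   # 2. rotate
stack.up(on_output=print)                                                                # 3. second up applies it
```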
08/29/2022, 7:08 PM