general

    full-dress-10026

    11/28/2018, 12:12 AM
    Ah: https://pulumi.io/reference/stack.html#create-stack
    The stack name must be unique within your account. As a best practice, prefix the stack name with a project name.

    busy-umbrella-36067

    11/28/2018, 4:42 AM
    Digging the added color support in the cli 😎

    orange-tailor-85423

    11/28/2018, 4:36 PM
    Those who are making good progress... how are you breaking up your stacks, and how many resources are you seeing in them? Trying to think ahead to testing changes, and it would be nice to not have to up/down an entire stack to make this work. However, realizing that many of these cloud components have pre-reqs, I'd be curious about your strategies.

    full-dress-10026

    11/28/2018, 6:04 PM
    I have two Pulumi resources, call them pA and pB, that need to be deployed. Before pB can be deployed, I have some custom CLI stuff that needs to be run that depends on pA outputs. I understand that right now it's not possible to hook into the lifecycle, and that it will likely be added as a feature in the future. As a workaround for now, I was told I could run the CLI stuff in an apply that contains pA outputs. How would I ensure that pB does not start provisioning until the CLI stuff running in the apply completes without exception?
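    A minimal sketch of that workaround, assuming TypeScript and @pulumi/aws; pA and pB are stand-ins here, and my-custom-cli is a hypothetical command. Threading the apply's result into pB's inputs is what enforces the ordering:
    import * as aws from "@pulumi/aws";
    import { execSync } from "child_process";

    // Stand-in for pA.
    const pA = new aws.s3.Bucket("pA");

    // Run the CLI step inside an apply over pA's outputs; if it throws,
    // the update fails and pB is never created.
    const cliDone = pA.bucket.apply(name => {
        execSync(`my-custom-cli --bucket ${name}`); // hypothetical CLI
        return name;
    });

    // Because pB consumes cliDone as an input, it waits for the apply.
    const pB = new aws.s3.BucketObject("pB", {
        bucket: cliDone,
        key: "marker",
        content: "ok",
    });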

    full-dress-10026

    11/28/2018, 6:14 PM
    Maybe it'd be better to separate the two resources into two different projects?

    full-dress-10026

    11/28/2018, 6:40 PM
    What Pulumi version contains pulumi.StackReference?
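    For reference, usage looks roughly like this once it's available (the stack path and output name are placeholders):
    import * as pulumi from "@pulumi/pulumi";

    // Reference another stack by its fully qualified name.
    const other = new pulumi.StackReference("org/project/dev");
    // Read an output that the other stack exports.
    const vpcId = other.getOutput("vpcId");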

    orange-tailor-85423

    11/28/2018, 6:47 PM
    Am I the only one who thinks Google Groups is an atrocity? It's like the design and usability haven't been updated in a decade

    blue-dentist-627

    11/29/2018, 2:40 AM
    Hi, I need to import an env config with an array of JSON, but it seems Pulumi config doesn't have support for arrays. What's the best way to do it?
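    One common workaround, sketched below: store the array as a JSON string in config and parse it in the program. The items key is a placeholder:
    import * as pulumi from "@pulumi/pulumi";

    // After `pulumi config set items '["a","b","c"]'`:
    const config = new pulumi.Config();
    const items: string[] = JSON.parse(config.require("items"));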

    adamant-restaurant-73893

    11/29/2018, 7:33 PM
    New blog post on integrating Epsagon's serverless monitoring with Pulumi, from @white-balloon-205: https://blog.pulumi.com/pulumi-and-epsagon-define-deploy-and-monitor-serverless-applications

    big-caravan-87850

    11/29/2018, 8:00 PM
    AWS re:Invent just announced Amazon Managed Streaming for Kafka (MSK). It's currently in public preview. Is there a timeline to support this service?

    full-dress-10026

    11/29/2018, 8:18 PM
    Is it possible to use Pulumi to modify a security group that was created outside of Pulumi, given its ID?
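    A sketch of reading one by ID with aws.ec2.SecurityGroup.get (the ID is a placeholder); note that get looks up the existing resource so its attributes can be used, rather than adopting it for Pulumi to manage:
    import * as aws from "@pulumi/aws";

    // Look up an existing security group by its ID.
    const sg = aws.ec2.SecurityGroup.get("existing-sg", "sg-0123456789abcdef0");
    export const sgName = sg.name;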

    early-musician-41645

    11/29/2018, 8:33 PM
    Help, I'm stuck in a weird state:
    Resources:
        + 4 to create
        6 unchanged
    
    Do you want to perform this update? yes
    Updating (s3-object-indexer-dev):
    error: [409] Conflict: Another update is currently in progress.
    How do I get out of the in-progress update?

    early-musician-41645

    11/29/2018, 10:12 PM
    I've run through the sample to get an onPut action in an S3 bucket to trigger a lambda using this code:
    // A storage bucket
    const bucket = new cloud.Bucket("eshamay-test-bucket");
    const bucketName = bucket.bucket.id;

    // Trigger a Lambda function when something is added
    bucket.onPut("onNewObject", (bucketArgs: any) => {
        console.log(`*** New Item in Bucket`);
        console.log(bucketArgs);
    });
    However, I plan to trigger the lambda from multiple buckets and will create/update some DynamoDB items to index the contents of the various bucket objects across multiple regions. Currently, the lambda above outputs something like this (taken directly from CloudWatch logs):
    2018-11-29T12:52:08.159-08:00[onNewObject] { key: 'some/foo/path/myfile.ts',  size: 384,  eventTime: '2018-11-29T20:52:07.166Z' }
    However, my indexing will need to create a couple of extra attributes to track region, bucket name, and perhaps some other info from the put, such as the principal ID, IP address, etc. I know that S3 events emit something like this: https://docs.aws.amazon.com/lambda/latest/dg/eventsources.html#eventsources-s3-put Is there a way to get access to that original event object? Also, is there a way to create a single lambda to do the DynamoDB update, and then create several buckets across regions that trigger that single lambda onPut?
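    A sketch of one approach using @pulumi/aws directly rather than @pulumi/cloud: Bucket.onObjectCreated passes the handler the raw S3 event (bucket name, principal ID, source IP, ...), and several buckets can share one CallbackFunction. The bucket names are placeholders, and buckets in other regions would additionally need per-region providers:
    import * as aws from "@pulumi/aws";

    // One Lambda shared by every bucket; the callback receives the raw S3 event.
    const indexer = new aws.lambda.CallbackFunction("indexObject", {
        callback: async (event: aws.s3.BucketEvent) => {
            for (const record of event.Records || []) {
                console.log(
                    record.s3.bucket.name,
                    record.s3.object.key,
                    record.userIdentity.principalId,
                    record.requestParameters.sourceIPAddress);
            }
        },
    });

    // Each bucket triggers the same function on put.
    for (const name of ["bucket-a", "bucket-b"]) {
        new aws.s3.Bucket(name).onObjectCreated(`${name}-onNewObject`, indexer);
    }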

    microscopic-florist-22719

    11/29/2018, 10:13 PM
    cc @lemon-spoon-91807 @white-balloon-205

    early-musician-41645

    11/30/2018, 12:24 AM
    How can I manage AWS resources in a different region than the one set in the config? I tried to get an existing S3 bucket like this:
    const bucket = aws.s3.Bucket.get('prod-us-west-1',
        'sdp-tsm-prod-uw1-artifacts',
        { region: 'us-west-1' });
    But the configured provider is in us-west-2, hence:
    Diagnostics:
      aws:s3:Bucket (prod-us-west-1):
        error: Preview failed: refreshing urn:pulumi:s3-object-indexer-dev::s3-object-indexer::aws:s3/bucket:Bucket::prod-us-west-1: error reading S3 Bucket (sdp-tsm-prod-uw1-artifacts): BucketRegionError: incorrect region, the bucket is not in 'us-west-2' region
            status code: 301, request id: , host id:
    I tried to create a new regional provider but couldn't figure out how to plug it in:
    let uw1Provider = new aws.Provider("us-west-1-aws-provider", { region: 'us-west-1' });

    stocky-spoon-28903

    11/30/2018, 12:53 AM
    The configuration of the provider looks correct - you should be able to pass { provider: uw1Provider } to each resource that needs to use it

    microscopic-florist-22719

    11/30/2018, 12:53 AM
    looks like we don't codegen that for get

    stocky-spoon-28903

    11/30/2018, 12:53 AM
    In place of { region: "us-west-1" } in your example

    stocky-spoon-28903

    11/30/2018, 12:53 AM
    Ah, for get probably not

    microscopic-florist-22719

    11/30/2018, 12:54 AM
    as a workaround, you can do new aws.s3.Bucket('prod-us-west-1', {}, { id: 'sdp-tsm-prod-uw1-artifacts', provider: uw1Provider })

    stocky-spoon-28903

    11/30/2018, 12:55 AM
    Hmm, depending on what you need to do, you might be able to use the getBucket data source invoke, which has the option to pass in a provider
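    Sketched out, that alternative looks like this - the invoke takes the explicit provider via its options:
    import * as aws from "@pulumi/aws";

    const uw1 = new aws.Provider("uw1", { region: "us-west-1" });

    // Read the bucket through the us-west-1 provider without adopting it.
    const bucket = aws.s3.getBucket(
        { bucket: "sdp-tsm-prod-uw1-artifacts" },
        { provider: uw1 },
    );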

    stocky-spoon-28903

    11/30/2018, 12:55 AM
    In the meantime at least

    microscopic-florist-22719

    11/30/2018, 12:56 AM
    Filed https://github.com/pulumi/pulumi-terraform/issues/289 to track a fix.

    full-dress-10026

    11/30/2018, 1:12 AM
    Is there a way to improve the output of Pulumi on CI? Right now it is pretty much an unreadable mess of ""s.

    full-dress-10026

    11/30/2018, 3:07 AM
    The DockerBuild.cacheFrom docs say this:
    /**
     * An optional CacheFrom object with information about the build stages to use for the Docker
     * build cache. This parameter maps to the --cache-from argument to the Docker CLI. If this
     * parameter is `true`, only the final image will be pulled and passed to --cache-from; if it is
     * a CacheFrom object, the stages named therein will also be pulled and passed to --cache-from.
     */
    What does this mean: "If this parameter is true, only the final image will be pulled and passed to --cache-from"?
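    Roughly: with cacheFrom: true, the image's previously pushed final tag is pulled and passed to --cache-from, so only the last stage's layers can be reused; with a CacheFrom object, the named intermediate stages are pulled too, which matters for multi-stage builds. A sketch with placeholder names:
    import * as docker from "@pulumi/docker";

    const image = new docker.Image("app", {
        imageName: "registry.example.com/app", // placeholder registry
        build: {
            context: "./app",
            // Pull the pushed "builder" stage as well as the final image,
            // and pass both to --cache-from.
            cacheFrom: { stages: ["builder"] },
        },
    });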

    early-musician-41645

    11/30/2018, 4:53 AM
    I did a get on an S3 bucket via the suggested code and some resources from my project were created, and some failed. To get to the previous state I thought I'd do a pulumi destroy, but I see that the bucket I got is included in the delete:
    Previewing destroy (s3-object-indexer-dev):

         Type                                Name                                              Plan
     -   pulumi:pulumi:Stack                 s3-object-indexer-s3-object-indexer-dev           delete
     -   ├─ aws:s3:BucketEventSubscription   sdp-tsm-s3-artifact-indexer-onNewObject           delete
     -   │  ├─ aws:lambda:Permission         sdp-tsm-s3-artifact-indexer-onNewObject           delete
     -   │  ├─ aws:iam:RolePolicyAttachment  sdp-tsm-s3-artifact-indexer-onNewObject-32be53a2  delete
     -   │  ├─ aws:lambda:Function           sdp-tsm-s3-artifact-indexer-onNewObject           delete
     -   │  └─ aws:iam:Role                  sdp-tsm-s3-artifact-indexer-onNewObject           delete
     -   ├─ aws:s3:Bucket                    prod-us-west-1                                    delete
     -   └─ pulumi:providers:aws             us-west-1-aws-provider                            delete
    Is this expected behavior?

    early-musician-41645

    11/30/2018, 4:53 AM
    Is everything that I .get added in as a tracked resource?

    early-musician-41645

    11/30/2018, 4:54 AM
    I was hoping it would remain untracked because I'm not the owner of that particular resource

    tall-monitor-77779

    11/30/2018, 2:14 PM
    Guys, any examples of unit testing TypeScript projects with Jest? (or any JS testing framework, for that matter)
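    One way this is done nowadays (with mocks added to the SDK after this thread): stub the engine with pulumi.runtime.setMocks, then import the program under test. ../index and its bucket export are hypothetical names:
    import * as pulumi from "@pulumi/pulumi";

    // Replace the engine so resources "create" instantly with fake IDs.
    pulumi.runtime.setMocks({
        newResource: args => ({ id: `${args.name}-id`, state: args.inputs }),
        call: args => args.inputs,
    });

    describe("infrastructure", () => {
        it("creates the bucket", async () => {
            const infra = await import("../index"); // hypothetical program under test
            const id = await new Promise(resolve => infra.bucket.id.apply(resolve));
            expect(id).toBeDefined();
        });
    });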

    faint-motherboard-95438

    11/30/2018, 3:52 PM
    Hello here, I could use some help on creating a GCP cluster with Pulumi. I started with the example from here https://github.com/pulumi/examples/blob/master/gcp-ts-gke/cluster.ts but I noticed it was missing some mandatory parameters.
    error: Plan apply failed: project: required field is not set
    I found out that I can set gcp:project in the stack, along with gcp:zone and gcp:region, but it feels wrong and error-prone to duplicate that here since I already have all of it set up in my gcloud config. Is there any way it can automatically detect the local active gcloud config? But anyway, even when I fill these values in in the stack to test it, I get an error:
    error: Plan apply failed: googleapi: Error 403: Required "container.clusters.create" permission(s) for "projects/[...]". See <https://cloud.google.com/kubernetes-engine/docs/troubleshooting#gke_service_account_deleted> for more info., forbidden
    while I have all the needed permissions to create clusters and have had no issue so far creating anything else with Pulumi.
white-balloon-205

11/30/2018, 4:19 PM
The project value should also be picked up ambiently from any of the following env vars: GOOGLE_PROJECT, GOOGLE_CLOUD_PROJECT, GCLOUD_PROJECT, CLOUDSDK_CORE_PROJECT. Honestly not sure why Google has decided not to pick this up directly from gcloud config as well - but it seems to be an intentional choice by Google engineers working on the Google Terraform Provider. Regarding the error - this is the same as reported here: https://github.com/pulumi/examples/issues/150. I have tried many times to reproduce that myself, but have been unable to. I feel reasonably sure that some combination of the credentials and projects being used must not be correct (and GCP error messages here are unfortunately not too helpful), but would love to get to the bottom of this. I can't figure out any way Pulumi could be related to these errors - but it's certainly possible. If you have any more details you can share on the specific user/role/project configuration you are using - could you add it to the issue linked above?
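An alternative to both the stack config and the env vars is an explicit provider, sketched here with placeholder values:
import * as gcp from "@pulumi/gcp";

// Pin project and zone on an explicit provider instead of ambient config.
const provider = new gcp.Provider("gcp-us-central1", {
    project: "my-project", // placeholder
    zone: "us-central1-a",
});

const cluster = new gcp.container.Cluster("gke-cluster", {
    initialNodeCount: 2,
}, { provider });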

faint-motherboard-95438

11/30/2018, 5:01 PM
hey @white-balloon-205, thanks for your answer. I'm a bit disappointed by what you're reporting about the choices the Google engineers made, but that makes it clear; it seems I don't have a choice here. Thanks for the link to the issue, I will follow up in it with anything I can find. Indeed, it shouldn't be Pulumi-specific. I'll let you know if I find something.