general

    happy-egg-47291

    10/05/2018, 5:51 PM
Can you use a manually created aws.lambda.Function object with a cloud.API endpoint?

    busy-umbrella-36067

    10/05/2018, 8:06 PM
    im getting this error when using a specific stack.
    $ pulumi preview
    
    Previewing update of stack 'XXXXXXXX/XXXXXXX-dev'
    error: could not find plugin for provider 'XXXXXXXX-dev::XXXXXXX::pulumi:providers:kubernetes::default'

    busy-umbrella-36067

    10/05/2018, 8:09 PM
    looks like it was bumped a version higher than the other envs

    busy-umbrella-36067

    10/05/2018, 8:09 PM
    is the provider version specific?

    creamy-potato-29402

    10/05/2018, 8:10 PM
    Yes.

    creamy-potato-29402

    10/05/2018, 8:11 PM
    You must npm install that version
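For example, if the stack was created against a newer kubernetes provider, something along these lines brings the local environment back in line (a sketch; the version is a placeholder to be read off the error message or the other environment's package.json):
$ npm install @pulumi/kubernetes@<version>
$ pulumi plugin install resource kubernetes <version>
$ pulumi plugin ls    # confirm the kubernetes plugin version now matches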

    full-dress-10026

    10/05/2018, 8:56 PM
I performed a parameter update on a cloudformation stack, the update failed, and the pulumi update did not fail.

    full-dress-10026

    10/05/2018, 8:59 PM
If I pass a name to a new aws.cloudformation.Stack and a CF stack with that name exists, will it use that CF stack?

    dazzling-scientist-80826

    10/05/2018, 11:33 PM
    i think i mucked up my snapshot file somehow. i’m getting an error:
    failed to verify snapshot: resource  ....  refers to unknown provider ...
    - but as far as i can tell, the provider is defined in the snapshot

    dazzling-scientist-80826

    10/05/2018, 11:33 PM
    i was trying to move a resource between aws regions by changing the provider and things went haywire & i haven’t been able to recover & fear i may have made matters worse messing around

    dazzling-scientist-80826

    10/05/2018, 11:33 PM
    any suggestions on how to go about fixing it?

    dazzling-scientist-80826

    10/05/2018, 11:39 PM
    ooook - i just moved the provider resource higher up in the snapshot file & that fixed it

    dazzling-scientist-80826

    10/05/2018, 11:39 PM
    https://github.com/pulumi/pulumi/blob/491bcdc602470f9c8088319e6fad6a71d2c97972/pkg/resource/deploy/snapshot.go#L102

    dazzling-scientist-80826

    10/05/2018, 11:39 PM
surrounding code seems to interleave indexing providers & dereferencing them, so they need to be topologically sorted

    big-piano-35669

    10/05/2018, 11:42 PM
    Yeah we originally didn't encode dependencies explicitly so we couldn't topsort things ourselves. So we assume the file is ordered. @incalculable-sundown-82514 now that we capture dependencies, should we revisit this? It's definitely subtle!

    dazzling-scientist-80826

    10/05/2018, 11:48 PM
    still not quite back to a stable state - seems like one last issue

    dazzling-scientist-80826

    10/05/2018, 11:48 PM
    i’ve got a pending delete that i want to discard

    dazzling-scientist-80826

    10/05/2018, 11:48 PM
    i removed the entry from the pending_operations, but that seemed to do nothing

    dazzling-scientist-80826

    10/05/2018, 11:49 PM
then i removed the "delete": true and got an assertion failure:

    dazzling-scientist-80826

    10/05/2018, 11:49 PM
    https://github.com/pulumi/pulumi/issues/2030

    dazzling-scientist-80826

    10/05/2018, 11:52 PM
    aaaah looks like i have a duplicate resource in the snapshot file

    dazzling-scientist-80826

    10/05/2018, 11:52 PM
    probably a hand-edit screw up

    dazzling-scientist-80826

    10/05/2018, 11:54 PM
    pulumi update
    is finally successful again, but i’ll admit, i have zero confidence that this snapshot file matches reality 😛
    🤞 1
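For anyone following along, a less error-prone loop for this kind of surgery than editing the backend file in place is an export/import cycle, ending with a refresh to reconcile the snapshot against the real cloud resources (a sketch using standard CLI commands; filenames are illustrative):
$ pulumi stack export > snapshot.json     # dump the current state
$ cp snapshot.json snapshot.backup.json   # keep a backup before editing
# ... hand-edit snapshot.json (e.g. drop the duplicated resource entry) ...
$ pulumi stack import < snapshot.json     # re-import; snapshot verification runs again
$ pulumi refresh                          # compare the snapshot against actual cloud state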

    brave-angle-33257

    10/06/2018, 12:16 AM
hi guys, brand new to pulumi, just installed it and am setting up my first app.. having trouble conceptualizing how the project/stacks should be organized. is it a separation thing between stacks like database vs server (persistent vs ephemeral), or is it the entire application infra prod/dev type thing? any good images or docs on this topic?

    gifted-island-55702

    10/06/2018, 8:13 PM
Hello! I am learning Pulumi and I am trying to find out if it’s possible to share some data between stacks. Is it possible? And to avoid the XY problem: I would like to have one stack that defines resources like a GCP Project and a Google Kubernetes Engine cluster in that project, and another stack that would deploy some K8S resources to that GKE cluster (but would also need some more GCP resources like static IP addresses and DNS records). I have a single cluster where I would like to have multiple teams deploying their apps to GKE. Is there a guide on how stacks should be structured/designed for bigger deployments/projects?

    glamorous-printer-66548

    10/07/2018, 12:40 AM
@gifted-island-55702 @quiet-wolf-18467 coming back to your question earlier: So we use GCP and GKE in my company too. We run about 15-20 micro-services / apps, mostly written in nodejs, some in Python and Java, on that stack. Just a small warning: this whole GCP / GKE / pulumi stuff is currently not in production, but we’re able to spin up entire dev environments and plan to bring this to production in the next months. In general our pulumi code structure is like this:
1. we have a single infrastructure-shared repository which contains a few pulumi programs which set up the base infrastructure that is shared by many apps. This includes for example: GCP projects, GKE clusters, cluster-wide addons (i.e. cert-manager, kube external-dns). This repo contains multiple pulumi programs for different gcp projects, and each of them usually has just a single stack instance. They share a few parameters / constants which are defined as simple JS / TS files under a common directory at the project root.
2. we have a single pulumi-util repo which contains an internal reuse npm library (important: this is just a library, not a pulumi program itself) that codifies a couple of common pulumi patterns. E.g. it contains a common dockerfile to build a nodejs app, and it contains a SolvvyApp class (Solvvy is the name of my company) which creates a docker Image, Deployment and, depending on the configuration, also a Service or Ingress resource. This is basically a much-enhanced version of https://github.com/pulumi/examples/blob/master/kubernetes-ts-guestbook/components/k8sjs.ts#L9 . It also contains a few utilities to deal with GKE in particular.
3. We have multiple application repositories (7 or so) which contain the application code for those aforementioned 15-20 services (some of those repos are mono-repos, so they contain multiple services). In each of those app repos we have a subdirectory infra which makes heavy use of our own pulumi-util library to build and deploy the app to the clusters that have been set up prior to that via the infrastructure-shared repository. One important side note is that we even set up some core infrastructure like redis and rabbitmq inside the app repositories when they are not shared by most of the apps (which is the case for redis and rabbitmq, which are really only used by one application each). Typically the code inside each app repository is very brief because it reuses abstractions from our pulumi-util library. E.g. this is how the code in one of our apps' infra directory looks:
    import { SolvvyApp } from '@solvvy/pulumi-util';
    
    const app = new SolvvyApp({
      buildContext: __dirname + '/../',
      service: {
        internalHttpPort: 1337,
        expose: 'VPCInternal'
      },
      env: {
        NODE_ENV: 'k8s'
      }
    });
    
    export const url = app.url;
This little code builds a docker image, pushes it to gcr, creates a k8s service of type load balancer which creates a GCP internal TCP load balancer, assigns a dns entry via kube external-dns, etc. The application name is inferred by reading the package.json under buildContext (SolvvyApp contains that logic) and the environment name (which gets translated to a namespace) is inferred from the last segment of the stack name (the string after the last ‘-’ in the stack name).

    glamorous-printer-66548

    10/07/2018, 12:40 AM
Now you may be wondering how the target gke cluster is determined: every app has to have two gcp / gke config entries in its stack config: the target gcp-project and the name of the target cluster. E.g. it may look like this:
config:
  gcp:project: my-gcp-project
  '@solvvy/pulumi-util:cluster': my-gke-cluster
This information is used by the pulumi-util library to dynamically create a pulumi k8s provider onto which the application will be deployed. Concretely the following code is used to create a k8s provider from the cluster name: https://gist.github.com/geekflyer/b78adab2667d8526a1dd593bc5c844bf#file-gke-ts . SolvvyApp under the hood simply calls getK8sProviderFromInferredCluster() (https://gist.github.com/geekflyer/b78adab2667d8526a1dd593bc5c844bf#file-gke-ts-L29) to get a k8s provider. Those gke utilities basically make use of pulumi’s getCluster function https://pulumi.io/reference/pkg/nodejs/@pulumi/gcp/container/#getCluster to read in some attributes of existing cloud resources (that have been created by another program / stack). In general pulumi has a lot of those get<Something> functions to read in parameters of some cloud resource that has been defined elsewhere. I’m honestly not sure how one can use stack outputs from one pulumi program as inputs to another (without manual copy and paste) and I’d be curious to see an example for this too (@big-piano-35669?)
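The linked gist isn’t reproduced above, but its shape is roughly the following sketch (hedged: field names follow the public Pulumi GKE examples and may differ between pulumi-gcp versions; the zone is an assumption):
import * as gcp from "@pulumi/gcp";
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

// Look up the existing cluster named in stack config ('@solvvy/pulumi-util:cluster' above).
const config = new pulumi.Config("@solvvy/pulumi-util");
const cluster = gcp.container.getCluster({
    name: config.require("cluster"),
    zone: "us-central1-a", // assumption: a zonal cluster
});

// Render a kubeconfig that authenticates through gcloud, as in the Pulumi GKE examples.
const kubeconfig = cluster.then(c => `apiVersion: v1
kind: Config
clusters:
- name: cluster
  cluster:
    certificate-authority-data: ${c.masterAuths[0].clusterCaCertificate}
    server: https://${c.endpoint}
contexts:
- name: context
  context: {cluster: cluster, user: user}
current-context: context
users:
- name: user
  user:
    auth-provider:
      name: gcp
      config:
        cmd-path: gcloud
        cmd-args: config config-helper --format=json
        token-key: '{.credential.access_token}'
        expiry-key: '{.credential.token_expiry}'
`);

export const k8sProvider = new k8s.Provider("gke", { kubeconfig });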

    glamorous-printer-66548

    10/07/2018, 12:50 AM
    And as for authentication: We basically rely on the developer having gcloud installed and having the right IAM permissions to do whatever he has to do. In CI we also use gcloud via a service account that we activate via
    gcloud auth activate-service-account --project <project_of_service_account> --key-file=-

    glamorous-printer-66548

    10/07/2018, 12:59 AM
    Many of the things of our pulumi-util library are actually pretty generalizable to other GCP + GKE users and once this library is more stable we may also open source parts of it.

    glamorous-printer-66548

    10/07/2018, 2:02 AM
    Small bonus - this is an internal post I wrote to explain the philosophy behind this project structure:
    At Solvvy we use pulumi to deploy and manage our infrastructure E2E.
    Generally speaking we can divide our infrastructure into two kinds of “Resources”:
    1. Low Level and Shared Resources, i.e. VMs, Kubernetes Clusters, Cluster-Wide services or addons (i.e. gitlab-runner, cert-manager, external-dns)
    2. App Resources, i.e. for a service like the accounts-api. Typically each app is comprised of resources like a docker image and runtime container, some networking config, a service account, possibly a DNS name and optionally some app specific backing services, i.e. rabbitmq or redis.
Each application may have unique requirements for infrastructure, use unique libraries / frameworks and have an independent deployment lifecycle. Therefore we try to build a model where the app-specific infrastructure code (i.e. rabbitmq for app1, redis for app2) is held in the same location as the application, concretely in the repository of the application itself (i.e. github repo of app1). Only resources which are very common or shared between apps by technical necessity are defined in a central infrastructure-only repository (i.e. a kubernetes cluster).
One advantage of this code organization is that in order to introduce a new app / service (or fit the infrastructure to new requirements of a drastically changed app), one has to only create a single Pull Request / Commit in the app repository and not an additional dependent PR in the shared infra repo.
Philosophically speaking the idea behind putting infrastructure code into the app repo is to empower developers to develop and run their application end-to-end, delivering on the original goals of a “DevOps” culture vs. relying on strict division of labor and inefficient dependencies between Dev ←→ Ops in the old world. Pulumi and kubernetes are two very important ingredients to achieve this - they basically provide a common, approachable platform for consuming, building and sharing abstractions on top of low level infrastructure. I.e. Pulumi uses a general purpose language like TypeScript and npm libraries for defining infrastructure (vs an infrastructure-only DSL like Terraform’s Hashicorp Configuration Language) and with kubernetes you don’t need to be a bash + tmux superhero just to see the logs of all your app instances. This makes creating and managing infrastructure much more approachable for the average developer.

quiet-wolf-18467

10/07/2018, 11:31 AM
@glamorous-printer-66548 thanks for all that. I'm curious, how do you deploy a new environment? Do you trigger a build of each of the smaller services? My challenge right now is I want each service to be independently deployable, but, for disaster recovery, I want a single deploy-everything action too. Is that possible with your setup?

brave-angle-33257

10/07/2018, 5:14 PM
im evaluating pulumi for a similar multi-layered approach where i want to have data in a layer, vpc in a layer etc.. so far I'm leaning towards full deployments (and really all deployments) using make targets where it will scrape the outputs of another project and then import them into the config of another, something like (obvious pseudocode):
make deploy-compute:
   cd ${PWD}/network
   NETWORK=$(pulumi stack output)
   cd ${PWD}/data
   DATA=$(pulumi stack output)
   cd ${PWD}/compute
   pulumi config set network ${NETWORK}
   pulumi config set data ${DATA}
   pulumi update
make deploy-all:
   cd ${PWD}/network
   pulumi update
   NETWORK=$(pulumi stack output)
   cd ${PWD}/data
   pulumi config set network ${NETWORK}
   pulumi update
   DATA=$(pulumi stack output)
   cd ${PWD}/compute
   pulumi config set data ${DATA}
   pulumi config set network ${NETWORK}
   pulumi update
no idea if it will work properly, but from what I see that's more or less how you'd have to do it.. would love to hear other opinions

glamorous-printer-66548

10/07/2018, 9:49 PM
@quiet-wolf-18467 yes, to create a new environment we have to checkout the repo and trigger a build of each of the smaller services (although some services live in the same stack, so it’s not like I have to trigger 15-20 builds, but rather 8 or so). Currently this is manual but in a few days / weeks I will probably create some script which does that in one shot. What @brave-angle-33257 suggested actually reminds me of terragrunt, which is a CLI wrapper around terraform and allows running operations in multiple terraform modules at once (e.g. terragrunt apply-all) and respects the dependencies between them for ordering: https://github.com/gruntwork-io/terragrunt#the-apply-all-destroy-all-output-all-and-plan-all-commands . I wonder if pulumi could introduce something similar @creamy-potato-29402 ? Should I open an issue? In general currently I minimize the need to pass around outputs from one stack to another by simply using predictable resource identifiers (which is easy in GCP), shared TS libs / modules / constants, and getting other data like the IP addresses of a GKE cluster by using pulumi’s get<Resource> functions (e.g. getCluster). Terraform btw also has the ability to query the state of another module directly https://www.terraform.io/docs/providers/terraform/d/remote_state.html#example-usage which is another way of sharing config between stacks. Maybe worth another pulumi feature request? 🙂
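(For reference: Pulumi later added StackReference, which reads another stack’s outputs directly and is the first-class answer to exactly this question. A minimal sketch; the stack path and output name are illustrative:)
import * as pulumi from "@pulumi/pulumi";

// Reference the stack that owns the shared infrastructure.
const infra = new pulumi.StackReference("myorg/infrastructure-shared/prod");

// Consume one of its outputs as an input to this program.
export const clusterName = infra.getOutput("clusterName");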

brave-angle-33257

10/08/2018, 2:07 AM
Haven't seen terragrunt before, looks interesting. In my opinion, a better way to arrange things might be to have the folder name be the project, then the next level would be a selectable "deployment" like dev/prod/qa.. (similar to how the stacks work now), and then the code inside would be stacks; a stack could be really any organizer structure to place resources, more like how cloudformation does things. I might make an "rds" stack or "rds-api" and "rds-app" stacks, and in there place one or more rds resources. That stack would be named under pulumi as project-env-rds-app. Something like:
$ pulumi select deployment prod
$ pulumi stacks
pulumi:
   mycompany-prod-network
   mycompany-prod-security-group
   mycompany-prod-rds-api
   mycompany-prod-rds-app
   mycompany-prod-asg-app
   mycompany-prod-lambda
$ pulumi stack detail rds-app
   aws:rds:postgres/AppTracking
   aws:rds:mysql/AppUser
$ pulumi stack detail lambda
   aws:lambda:function/Login
   aws:lambda:function/Logout
   aws:iam:role/LoginRole
   aws:iam:role/LogoutRole
Furthermore, I could maybe create a group, and using the upcoming RBAC control allow users to deploy that group as one unit, or any of the individual items inside, like an app-compute group, that might be ASG and Lambda.
$ pulumi select deployment prod
$ pulumi create group app-compute --stack asg-app --stack lambda
$ pulumi group detail app-compute
  mycompany-prod-asg-app
  mycompany-prod-lambda
$ pulumi group update app-compute
(updating)...
I realize this is pretty different from what they have currently, but to me it makes a lot of sense for how I've been arranging things in the past using some internal tools. I'm going to spend some time seeing how feasible my make approach would be.

glamorous-printer-66548

10/08/2018, 2:53 AM
idk, i’m not sure if I fully understand your proposed project structure (i’m also more familiar with GCP+K8s than raw AWS) but judging from your example I feel you’re over-engineering this a bit / creating too many “micro” stacks which have a lot of dependencies on other stacks. In general I’m also a fan of organizing / grouping things by their business function, rather than technology. In my project structure I’m aiming to manage the things that are specific to an app right along the application code and basically have one app-stack per environment. An app stack is for example named myawesomeapp-staging and can include various things like: container image, k8s Service, Deployment, IAM service accounts and the role bindings to that account, backing services like the db, redis cache (if not shared by other apps), a supporting cloud function (lambda), alerting rules etc. To create or remove a new app I just want to touch one stack, not change 5/6 stacks and then be super careful about the order in which I have to deploy them. This is just my personal opinion, your project structure may work very well too, but as stated above my goal is to give developers a lot of control and responsibility over their application AND the application-specific infrastructure, and not become the ops / devops gatekeeper. In fact I gave an internal pulumi demo on Wednesday in front of some team members incl. the CTO, and now regular developers are already starting to modify their app-specific pulumi code on their own, which signals to me that this works.
oh but as for using different folders for different environments: I generally like that idea in some cases too, that’s why I requested this feature: https://github.com/pulumi/pulumi/issues/2014
regarding the RBAC control: I think to really prevent people from making changes in some stack or a part of a stack you just need to limit their AWS IAM permissions. The RBAC control for pulumi just controls access to the pulumi checkpoint/state storage and not the AWS infrastructure.

brave-angle-33257

10/08/2018, 4:03 AM
Great, thanks for the info and the issue link. Yea, I was sort of thinking out loud; I think the project approach is still better. The separate projects could be complete repos that we could control access to, built off components/packages in other repos, and we'd give developers access to those, similar to my "group" suggestion. I'm from an AWS background, but actually working with Azure now. Was hoping to use Python, but seeing that the support there is lacking a bit, and based on 2.7 which is an older version. Also, no k8s (planning to use also) in python, so it's possible I'd have to go TS on everything. I appreciate the feedback, I think Pulumi is a very exciting tool, I'm just not sure yet it's mature enough for what I need. I opened an issue on pulumi-azure just now, I plan to keep on evaluating it for the time being. My issue tonight: https://github.com/pulumi/pulumi-azure/issues/131

glamorous-printer-66548

10/08/2018, 4:23 AM
Yeah I’m myself actually a huge TypeScript fan and have used it regularly for 2 years (and convinced teams in both companies I worked for in those 2 years to embrace it), so being able to write infra code in TypeScript was a huge pro for me. I would probably also prefer it over Python 3.7 for this task. I’m a big fan of static typing for non-trivial projects as it serves as implicit documentation and ensures correct usage of reuse libraries by other team members. While I’ve used the type annotation syntax in python 3.7 a bit, I’m under the impression that TypeScript’s type system is more powerful+mature and the tooling makes better use of the types. If you’re looking for a more mature pulumi alternative, I think you gotta stay with terraform, but writing HCL boilerplate code is not exactly exciting, and be warned that terraform’s kubernetes support is actually way behind pulumi’s. Frankly speaking the terraform kubernetes provider in its current state is useless in practice as it doesn’t support k8s Deployments and a few other essentials. This issue contains some details about this: https://github.com/terraform-providers/terraform-provider-kubernetes/issues/3#issuecomment-407118100 but the short gist is that HashiCorp apparently doesn’t have resources to maintain the k8s provider and they’re trying to hire someone to maintain it, but until then you can’t expect a lot of progress.

brave-angle-33257

10/08/2018, 4:31 AM
Ah, very good to know about that, thank you. Right now we have a huge house-of-cards TF infra that I've inherited. I was looking at that terragrunt also; it seems to have some solutions for making the TF more DRY. Right now we have separate .tf files for basically the same stuff in all regions. There are a couple modules in use, but the rest is 95% similar code everywhere with some global files that handle locals and shared config. It's unmanageable in this state, and very error-prone. I've used a tool in the past similar to Sceptre (but private in-house), which is python+troposphere+cloudformation with a stack/environment organization structure plus embedded deployer, diff/change auditing etc, so I know it's solid to write infra with a true programming language using modules and such. I'm trying to put together a POC for the team with the python (not super familiar with TS, but have done JS in the past), so TS might be a big ask for them, but I'm going to have our new k8s lead watch the Pulumi k8s videos and gauge the temperature of interest there. I don't think using the TS SDK would be a deal breaker.

glamorous-printer-66548

10/08/2018, 4:43 AM
ok sounds good. As for TS: people are always hesitant to learn a new language. The thing is you need to make yourself and the team aware that TS is not really a completely new language. It’s really just JavaScript with a few extras that you don’t even see in many TypeScript files. Many things you see in TypeScript files, like async functions, classes, module imports / exports via import .. from / export, arrow functions etc., are just official JavaScript (it's in the spec) that has been introduced in the JS language over the last couple of years and which is worth learning anyway if you’re dealing somewhere with JS. When I first pitched TS in my team some people were scared and argued that TypeScript is so different and “non-standard” compared to JS. I told and showed them then that almost all the code I’ve shown just uses standard modern JS syntax features that they should learn anyway unless they’re just targeting IE10 …

brave-angle-33257

10/08/2018, 4:46 AM
the developers are using NodeJS, so they're definitely more familiar with the fancy side of JS than I am.. it will mostly be me (infra) and our k8s engineer. But, I'm not that worried, tons of examples and help around the office 👍
👍 2
Hey @glamorous-printer-66548 if you're around tomorrow, would you mind if I asked you a couple quick questions on your TS configuration? I got my python POC working well, and I think we may be able to move forward, but I'm having the damndest time doing simple things with package installations in the TS version of the script.. so far I'm just trying to get in a yaml loader and a sprintf() type operator. In my VS Code they're working, but when i run pulumi update it's erroring with missing packages. I'd just like to make sure there's not something extra I need in my tsconfig.json or similar.

glamorous-printer-66548

10/09/2018, 12:55 AM
yeah sure. I’m usually just online in the afternoon and early evening though.
fyi, I’m not sure why you would wanna use a sprintf type operator. I’d just use JS template literals https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals and pulumi.log or console.log

brave-angle-33257

10/09/2018, 2:03 AM
cool, thanks, ill take a look.. basically i want to use that (but I'll check yours) and also like a yaml package.. I add
"dependencies": {
        "@pulumi/azure": "latest",
        "@pulumi/pulumi": "latest",
        "@types/js-yaml": "latest"
    }
to the package.json, do a npm install, and then in my IDE it shows the package as available, but when i run via pulumi update it says not found.. it's probably something stupid but so far kicking my butt

https://s3-us-west-2.amazonaws.com/billeci-screenshots/index.ts__Untitled_Workspace_2018-10-08_19-04-05.png

https://s3-us-west-2.amazonaws.com/billeci-screenshots/Development__Development__tmux_new_-s_pulumi__20567_2018-10-08_19-05-18.png


glamorous-printer-66548

10/09/2018, 2:20 AM
pro tip 1: don’t put latest in your package.json. Simply run npm install <package_name>, which will generate an entry in your package.json with the latest version. I usually also set npm config set save-exact true in my environment, which will generate strict semver entries instead of a range like ^<version>. Also make sure to check in the package-lock.json if one got generated.
pro tip 2: don’t use pulumi update. Just use pulumi up as seen in most examples. Using pulumi update you will just confuse people like me that have never seen this command before 😄
and for your specific issue, it’s a bit hard to debug just from your screenshot. Would need to know your rough project structure. Concrete questions:
1. Where exactly is your package.json?
2. In which working directory did you run npm install?
3. Where is your Pulumi.yaml?
4. What value is in the “main” field of your package.json?
5. What is the content of your Pulumi.yaml?
6. In which working directory did you execute pulumi up?

brave-angle-33257

10/09/2018, 2:29 AM
#1 in the main root of the project
#2 from that folder
#3 same folder
#4 don't see a "main" (below)
#5 (below)
#6 same
$ cat package.json 
{
    "name": "core-rg",
    "devDependencies": {
        "@types/node": "latest"
    },
    "dependencies": {
        "@pulumi/azure": "latest",
        "@pulumi/pulumi": "latest",
    }
}
$ cat Pulumi.yaml 
name: core-rg
runtime: nodejs
description: A minimal Azure TypeScript Pulumi program
template:
  description: A minimal Azure TypeScript Pulumi program
  config:
    azure:environment:
      description: The Azure environment to use (`public`, `usgovernment`, `german`,
        `china`)
      default: public
$ pwd
~/projects/pulumi/core-rg

$ ls -l
total 120
-rw-------   1 me  staff    309 Oct  8 16:06 Pulumi.yaml
-rw-------   1 me  staff   1467 Oct  8 19:21 index.ts
drwxr-xr-x  74 me  staff   2368 Oct  8 17:31 node_modules
-rw-r--r--   1 me  staff  42378 Oct  8 18:58 package-lock.json
-rw-------   1 me  staff    265 Oct  8 18:58 package.json
drwxr-xr-x   3 me  staff     96 Oct  8 16:52 src
-rw-------   1 me  staff    522 Oct  8 19:19 tsconfig.json

glamorous-printer-66548

10/09/2018, 2:31 AM
Generally speaking the module loading in node works quite differently compared to python. The short answer is that when you run node it looks in the current working directory for a folder named “node_modules” and attempts to find the requested module inside it. If it cannot find the particular module it looks for a “node_modules” directory in the parent folder, and so on recursively. The algorithm is described a bit more in detail here: https://nodejs.org/api/modules.html#modules_loading_from_node_modules_folders . Compared to python a big advantage is that you don’t normally need something like virtualenv to have project-specific dependencies.
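A quick way to see that lookup order from inside a pulumi program (a sketch; the printed paths are illustrative):
// module.paths lists the node_modules directories Node will search, nearest first.
console.log(module.paths);
// e.g. [ '/Users/me/projects/pulumi/core-rg/node_modules',
//        '/Users/me/projects/pulumi/node_modules', ... ]

// require.resolve shows which file a module actually resolves to,
// and throws "Cannot find module" if none of those directories contains it.
console.log(require.resolve("js-yaml"));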

brave-angle-33257

10/09/2018, 2:32 AM
right, yea i get that, and i see the module inside there

glamorous-printer-66548

10/09/2018, 2:32 AM
have you even added js-yaml as a dependency? i don’t see it in your package.json

brave-angle-33257

10/09/2018, 2:33 AM
yea, i did, i just did again as a test, it was gone in that one, here's what i just did

glamorous-printer-66548

10/09/2018, 2:33 AM
Note: @types/js-yaml !== js-yaml

brave-angle-33257

10/09/2018, 2:33 AM
$ cat package.json 
{
    "name": "core-rg",
    "devDependencies": {
        "@types/node": "latest"
    },
    "dependencies": {
        "@pulumi/azure": "latest",
        "@pulumi/pulumi": "latest",
        "@types/sprintf-js": "^1.1.0"
    }
}

glamorous-printer-66548

10/09/2018, 2:33 AM
the @types packages just contain typescript type definitions but not the actual module code.

brave-angle-33257

10/09/2018, 2:34 AM
ah
that may be a big bit i'm missing
ok, so if i want to use js-yaml in my code
i'd need to install @types/js-yaml and also js-yaml ?
that was it..
$ npm install js-yaml --save
$ npm install @types/js-yaml --save
import * as jsyaml from "js-yaml";

let doc = jsyaml.load('greeting: hello\nname: world');
console.log(doc)
Diagnostics:
  pulumi:pulumi:Stack: core-rg-core-rg-dev
    info: { greeting: 'hello', name: 'world' }
son of a gun
THANK YOU

glamorous-printer-66548

10/09/2018, 2:38 AM
Generally speaking in the nodejs / npm world it's like that:
- some packages come out of the box with typescript definitions (for example that’s the case with all pulumi packages; when you look inside the node_modules directory of the pulumi packages you will see a bunch of .d.ts files which contain the types).
- some packages (for example lodash) don’t come with typescript type definitions out of the box, often because the original author doesn’t use typescript himself. What then happens is that often the typescript community writes separate type definitions for those packages, which are published under the @types prefix, e.g. @types/lodash. So in the case of lodash the actual runtime code is in the package “lodash”, but if you want better type checking and code completion you CAN (you don’t have to, your code will run nevertheless) additionally include the community type definitions by adding @types/lodash to your package.json.
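To make that concrete with the lodash example (a sketch; only the "lodash" package ships runtime code, the @types package is purely compile-time):
import * as _ from "lodash";   // runtime code comes from "lodash"

// With @types/lodash installed this is fully typed (string[][]);
// without it the code still runs, you just lose checking and completion.
const pairs = _.chunk(["a", "b", "c", "d"], 2);
console.log(pairs);  // [ [ 'a', 'b' ], [ 'c', 'd' ] ]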

brave-angle-33257

10/09/2018, 2:41 AM
ok i came across this earlier trying to load packages, found someone's def of their own .d.ts file, realized that was a type definition
just one of those things, I assumed that the @types/ package was a revamped package for TS, not just the definition
really all i needed to get started on my Python->TS conversion was the ability to load the yaml and create some strings like I was doing in python
rgname = '{}-{}-{}'.format(
            resource_type,
            env_id,
            region['id']
        )
past that, everything else is similar enough to the TS examples they have online
(i think)

glamorous-printer-66548

10/09/2018, 2:43 AM
k

brave-angle-33257

10/09/2018, 2:43 AM
I got the full python POC working, and showed it off some today, but the fact that the k8s implementation is also desired, and TS only.. I figured better to convert over to TS now at the beginning than deal with both SDKs later on

glamorous-printer-66548

10/09/2018, 2:43 AM
your code in JS would be
rgname = `${resource_type}-${env_id}-${region.id}`;
b

brave-angle-33257

10/09/2018, 2:44 AM
yea with the ticks... lol @ slack
you don't use azure do you?

glamorous-printer-66548

10/09/2018, 2:45 AM
I find those python .format things always so ugly to see 😄 . Fortunately we use python 3.6 internally, which has support for f-strings, which are similar to JS template literals.

brave-angle-33257

10/09/2018, 2:46 AM
trying to find azure support is tough, ive posted on MSDN forum, reddit and azure-developer slack.. getting a weird issue trying to assign a user identity to a storage container, accounts and keyvaults work fine

glamorous-printer-66548

10/09/2018, 2:46 AM
no, we use GCP
many of the pulumi folks worked at Microsoft before, perhaps they can help you a bit 🙂

brave-angle-33257

10/09/2018, 2:47 AM
cool, well at least i have a workaround in place for that issue https://social.msdn.microsoft.com/Forums/en-US/b9592af3-85a1-4372-ab31-6d785153524e/programmatically-assignread-useridentity-to-storage-container?forum=windowsazuredata
is there a better channel to dump random dev questions in?

glamorous-printer-66548

10/09/2018, 2:48 AM
idk, I guess if it’s more of a azure question than pulumi question you should probably look for some azure slack or so.

brave-angle-33257

10/09/2018, 2:48 AM
yea was bummed that the pulumi python SDK is on python 2.7 😕 , 3+ doesn't work, they use "basestring" and other removed constructs in their SDK code
yea I joined one today and posted there, it's a ghost town
😄 1

glamorous-printer-66548

10/09/2018, 2:50 AM
For GCP we actually have an account manager and an architect we e-meet every two weeks, much more engaging than our past with AWS.

brave-angle-33257

10/09/2018, 2:51 AM
that's great.. yea our turnover with AWS reps at my old job was crazy, and they never knew answers to anything we asked, they just tell us to call enterprise support
we have some GCP infra up, but dont believe it's been via anything but console entry
i just got dumped into this TF mountain and asked to start making changes to things (but there are 37 scaleset files) and trying to push it to a better direction
absolute last thing i want to do is go edit 50 files manually, i feel anything at this point should be a refactor to something better
i looked at terragrunt, but it sure seems to be AWS centric, not even clear if it works for azure yet
and, the infra as code is so much more appealing

glamorous-printer-66548

10/09/2018, 2:53 AM
are you locked-in on Azure?

brave-angle-33257

10/09/2018, 2:53 AM
sorta, we have major incentives to stay there
outside of "hard to move" or infra requires it

glamorous-printer-66548

10/09/2018, 2:55 AM
Terragrunt definitely also works without AWS. We've used it for a couple of months now (and still use it), first with AWS and now with GCP. I guess it has some extra features for AWS but they don’t seem essential to me.
Concretely we use Terragrunt with the Terraform GCS backend (instead of S3 as in the terragrunt examples)

brave-angle-33257

10/09/2018, 2:58 AM
ok, yea i read some about how it doesnt have some magic to create new state files/storage accounts like AWS, but that it should work with other providers
I'm going to give pulumi a few more days.. I had to tell the boss that it relies on their cloud services also for deployment.. that didn't go over well. But I did mention the local state, which I hope he wouldn't require us to use
And, we don't use github, so I created a github org under my work email and used that to login to Pulumi.. I'll try to keep that under wraps for bit heh

glamorous-printer-66548

10/09/2018, 3:00 AM
yeah, terragrunt creates a bucket and dynamodb table for state locking automatically, with other backends like GCS you have to create those resources once manually, but not a big deal. With the GCS backend we don’t even need dynamodb or similar because unlike S3, GCS has strong consistency and hence can be used as lock by itself.

brave-angle-33257

10/09/2018, 3:01 AM
im not sure if azure has a locking resource, but if there is one, they haven't been using it from what i can tell; it's normally only one operator running any deploy at a time anyway

glamorous-printer-66548

10/09/2018, 3:02 AM
Yeah well I think in a couple of months pulumi will get support for blob storage like S3, GCS as remote backend (similar to terraform). There’s a couple of open issues for that, but it should just take a few brave community contributions to make that happen 🙂
as per terraform's docs, Azure storage has built-in locking capabilities like GCS: https://www.terraform.io/docs/backends/types/azurerm.html . Shame on S3 which doesn’t have it out of the box 😄

brave-angle-33257

10/09/2018, 3:04 AM
ah, good to know
alright, well i feel like i have a pretty good idea now of where i'm at with pulumi
probably should give terragrunt a bit more of a look also.. then pick one and get to it
one of the main reasons I started looking at pulumi is we're looking to implement a fairly dynamic user identity/role assignment type system based on a yaml configuration where we can specify machine roles, what they need access to etc
trying to do that in TF seemed daunting. I originally looked at building native TF out of python->json
the identities in azure (like iam accounts) are region-based, so we need to create roles for about 10 diff machine types, each with different keyvault/storage access, and then replicate those configs to ~10 regions
so that one region outage won't (hopefully) take down access to all machines looking to make oauth requests off those identities
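A sketch of what that yaml-driven fan-out could look like in Pulumi TypeScript (the yaml shape, the use of plain resource groups as a stand-in, and all names are assumptions for illustration, not a working design):
import * as fs from "fs";
import * as jsyaml from "js-yaml";
import * as azure from "@pulumi/azure";

// Assumed roles.yaml shape: { regions: [...], machineTypes: [{ name: ... }] }
interface RoleSpec {
    regions: string[];
    machineTypes: { name: string }[];
}
const spec: RoleSpec = jsyaml.safeLoad(fs.readFileSync("roles.yaml", "utf8")) as any;

// Replicate the per-machine-type setup into every region so a single
// region outage doesn't take out identity access everywhere.
for (const region of spec.regions) {
    for (const machine of spec.machineTypes) {
        // Stand-in resource: the real version would create user-assigned
        // identities plus the keyvault/storage role grants per machine type.
        new azure.core.ResourceGroup(`${machine.name}-${region}`, {
            location: region,
        });
    }
}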

glamorous-printer-66548

10/09/2018, 3:12 AM
I see. In GCP identities and lots of other things (e.g. VPC, storage buckets, load balancers) are global, so we don't have this problem fortunately 😂

brave-angle-33257

10/09/2018, 3:16 AM
yea coming from AWS, i've got about 6+ yrs exp there and a pro cert.. Azure has been.. interesting
having a machine come up and have credential-less access to a storage container is bleeding edge
they do a lot well, but their auth setup and identities compared to AWS IAM is not even close
thanks again for all your help, if I can ever help answer anything or if you need someone to bounce ideas off, it'd be my pleasure 👍
see ya later

glamorous-printer-66548

10/09/2018, 3:26 AM
hmm, I always found AWS a bit too complex (also wrt IAM), and the quality of their hundreds of services is a bit hit-and-miss (littered with gotchas and weird limitations when you actually start using them). GCP is refreshingly simpler in many ways.
That GCP by default just reuses gsuite identities for end users is really neat. Probably can be set up on AWS as well but I never had the time to do so.
see ya