happy-egg-47291 (10/05/2018, 5:51 PM):
aws.lambda.Function object with a cloud.API endpoint?

busy-umbrella-36067 (10/05/2018, 8:06 PM):
$ pulumi preview
Previewing update of stack 'XXXXXXXX/XXXXXXX-dev'
error: could not find plugin for provider 'XXXXXXXX-dev::XXXXXXX::pulumi:providers:kubernetes::default'
full-dress-10026 (10/05/2018, 8:56 PM):
pulumi update did not fail.

full-dress-10026 (10/05/2018, 8:59 PM):
If I give a name to a new aws.cloudformation.Stack and a CF stack with that name exists, will it use that CF stack?

dazzling-scientist-80826 (10/05/2018, 11:33 PM):
failed to verify snapshot: resource .... refers to unknown provider ...
- but as far as I can tell, the provider is defined in the snapshot

dazzling-scientist-80826 (10/05/2018, 11:49 PM):
I set "delete": true and got an assertion failure:

dazzling-scientist-80826 (10/05/2018, 11:54 PM):
pulumi update is finally successful again, but I’ll admit, I have zero confidence that this snapshot file matches reality 😛

glamorous-printer-66548 (10/07/2018, 12:40 AM):
1. We have an infrastructure-shared repository which contains a few pulumi programs that set up the base infrastructure shared by many apps. This includes, for example: GCP projects, GKE clusters, and cluster-wide addons (e.g. cert-manager, kube external-dns). This repo contains multiple pulumi programs for different GCP projects, each of them usually with just a single stack instance. They share a few parameters / constants which are defined as simple JS / TS files under a common directory at the project root.
2. We have a single pulumi-util repo which contains an internal reusable npm library (important: this is just a library, not a pulumi program itself) that codifies a couple of common pulumi patterns. E.g. it contains a common dockerfile to build a nodejs app, and a SolvvyApp class (Solvvy is the name of my company) which creates a docker Image, a Deployment and, depending on the configuration, also a Service or Ingress resource. This is basically a much enhanced version of https://github.com/pulumi/examples/blob/master/kubernetes-ts-guestbook/components/k8sjs.ts#L9 . It also contains a few utilities to deal with GKE in particular.
3. We have multiple application repositories (7 or so) which contain the application code for the aforementioned 15-20 services (some of those repos are mono-repos, so they contain multiple services). In each of those app repos we have a subdirectory infra which makes heavy use of our own pulumi-util library to build and deploy the app to the clusters that were set up beforehand via the infrastructure-shared repository. One important side note: we even set up some core infrastructure like redis and rabbitmq inside the app repositories when it is not shared by most of the apps (which is the case for redis and rabbitmq, which are really only used by one application each). Typically the code inside each app repository is very brief because it reuses abstractions from our pulumi-util library. E.g. this is the code in one of our apps' infra directory:
import { SolvvyApp } from '@solvvy/pulumi-util';
const app = new SolvvyApp({
buildContext: __dirname + '/../',
service: {
internalHttpPort: 1337,
expose: 'VPCInternal'
},
env: {
NODE_ENV: 'k8s'
}
});
export const url = app.url;
This little code builds a docker image, pushes it to gcr, creates a k8s service of type LoadBalancer (which creates a GCP internal TCP load balancer), assigns a dns entry via kube external-dns, etc. The application name is inferred by reading the package.json under buildContext (SolvvyApp contains that logic), and the environment name (which gets translated to a namespace) is inferred from the last segment of the stack name (the string after the last ‘-’ in the stack name).
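That stack-name convention is simple to sketch in plain TypeScript. This is a hypothetical helper for illustration, not the actual @solvvy/pulumi-util code:

```typescript
// Hypothetical sketch of the convention described above, not the
// actual @solvvy/pulumi-util implementation: the environment (and
// hence the k8s namespace) is the substring after the last '-'
// in the Pulumi stack name.
function environmentFromStackName(stackName: string): string {
  const idx = stackName.lastIndexOf("-");
  if (idx === -1) {
    throw new Error(`stack name "${stackName}" has no environment suffix`);
  }
  return stackName.slice(idx + 1);
}

// e.g. a stack named "accounts-api-staging" deploys into the "staging" namespace
```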
glamorous-printer-66548 (10/07/2018, 12:40 AM):
config:
  gcp:project: my-gcp-project
  '@solvvy/pulumi-util:cluster': my-gke-cluster
This information is used by the pulumi-util library to dynamically create a pulumi k8s provider onto which the application will be deployed. Concretely, the following code is used to create a k8s provider from the cluster name: https://gist.github.com/geekflyer/b78adab2667d8526a1dd593bc5c844bf#file-gke-ts
SolvvyApp under the hood simply calls getK8sProviderFromInferredCluster() (https://gist.github.com/geekflyer/b78adab2667d8526a1dd593bc5c844bf#file-gke-ts-L29) to get a k8s provider. Those gke utilities basically make use of pulumi's getCluster function (https://pulumi.io/reference/pkg/nodejs/@pulumi/gcp/container/#getCluster) to read in properties of existing cloud resources (that have been created by another program / stack).
In general pulumi has a lot of those get<Something> functions for reading in parameters of a cloud resource that has been defined elsewhere.
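The core of that pattern is rendering a kubeconfig from the looked-up cluster's endpoint and CA certificate and handing it to a k8s provider. A rough sketch of just the templating step, with field and function names that are illustrative rather than the gist's exact code:

```typescript
// Illustrative sketch: render a kubeconfig string from the fields that a
// GKE cluster lookup (e.g. gcp.container.getCluster) returns.
// The ClusterInfo shape and buildKubeconfig name are made up for this example.
interface ClusterInfo {
  name: string;
  endpoint: string;
  clusterCaCertificate: string; // base64-encoded CA certificate
}

function buildKubeconfig(c: ClusterInfo): string {
  return `apiVersion: v1
kind: Config
clusters:
- name: ${c.name}
  cluster:
    server: https://${c.endpoint}
    certificate-authority-data: ${c.clusterCaCertificate}
contexts:
- name: ${c.name}
  context:
    cluster: ${c.name}
    user: ${c.name}
current-context: ${c.name}
users:
- name: ${c.name}
  user:
    auth-provider:
      name: gcp
`;
}
```

The resulting string would then be passed as the kubeconfig argument when constructing the provider, e.g. new k8s.Provider(name, { kubeconfig }).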
I’m honestly not sure how one can use stack outputs from one pulumi program as inputs to another (without manual copy and paste), and I’d be curious to see an example of this too (@big-piano-35669?)
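For reference, a sketch of how this can look, assuming a Pulumi version that provides pulumi.StackReference; the stack path and output name below are made up for illustration:

```typescript
import * as pulumi from "@pulumi/pulumi";

// Hypothetical stack path and output name, for illustration only.
// StackReference reads the outputs of another, already-deployed stack.
const infra = new pulumi.StackReference("solvvy/infrastructure-shared/prod");

// getOutput returns an Output value that can be used as an input to
// resources in this program, without manual copy and paste.
const clusterName = infra.getOutput("clusterName");
```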
glamorous-printer-66548 (10/07/2018, 12:50 AM):
gcloud auth activate-service-account --project <project_of_service_account> --key-file=-

glamorous-printer-66548 (10/07/2018, 2:02 AM):
At Solvvy we use pulumi to deploy and manage our infrastructure E2E.
Generally speaking we can divide our infrastructure into two kinds of “Resources”:
1. Low-level and shared resources, e.g. VMs, Kubernetes clusters, cluster-wide services or addons (e.g. gitlab-runner, cert-manager, external-dns)
2. App resources, e.g. for a service like the accounts-api. Typically each app is comprised of resources like a docker image and runtime container, some networking config, a service account, possibly a DNS name, and optionally some app-specific backing services, e.g. rabbitmq or redis.
Each application may have unique requirements for infrastructure, use unique libraries / frameworks, and have an independent deployment lifecycle. Therefore we try to build a model where the app-specific infrastructure code (e.g. rabbitmq for app1, redis for app2) is held in the same location as the application, concretely in the repository of the application itself (e.g. the github repo of app1). Only resources which are very common or shared between apps by technical necessity are defined in a central infrastructure-only repository (e.g. a kubernetes cluster). One advantage of this code organization is that in order to introduce a new app / service (or fit the infrastructure to the new requirements of a drastically changed app), one only has to create a single Pull Request / Commit in the app repository, and not an additional dependent PR in the shared infra repo.
Philosophically speaking, the idea behind putting infrastructure code into the app repo is to empower developers to develop and run their application end-to-end, delivering on the original goals of a “DevOps” culture, vs. relying on a strict division of labor and inefficient dependencies between Dev ←→ Ops in the old world.
Pulumi and kubernetes are two very important ingredients to achieve this - they basically provide a common, approachable platform for consuming, building and sharing abstractions on top of low-level infrastructure. E.g. Pulumi uses a general-purpose language like TypeScript and npm libraries for defining infrastructure (vs. an infrastructure-only DSL like Terraform’s HashiCorp Configuration Language), and with kubernetes you don’t need to be a bash + tmux superhero just to see the logs of all your app instances. This makes creating and managing infrastructure much more approachable for the average developer.