#general

glamorous-printer-66548

10/07/2018, 12:40 AM
@gifted-island-55702 @quiet-wolf-18467 coming back to your question earlier: We use GCP and GKE in my company too. We run about 15-20 micro-services / apps on that stack, mostly written in Node.js, some in Python and Java. Just a small warning: this whole GCP / GKE / Pulumi stuff is currently not in production, but we're able to spin up entire dev environments and plan to bring this to production in the next months.

In general our Pulumi code structure is like this:

1. We have a single `infrastructure-shared` repository which contains a few Pulumi programs that set up the base infrastructure shared by many apps. This includes, for example: GCP projects, GKE clusters, and cluster-wide addons (e.g. cert-manager, kube external-dns). This repo contains multiple Pulumi programs for different GCP projects, and each of them usually has just a single stack instance. They share a few parameters / constants which are defined as simple JS / TS files under a `common` directory at the project root.

2. We have a single `pulumi-util` repo which contains an internal reusable npm library (important: this is just a library, not a Pulumi program itself) that codifies a couple of common Pulumi patterns. E.g. it contains a common Dockerfile to build a Node.js app, and a `SolvvyApp` class (Solvvy is the name of my company) which creates a Docker image, a Deployment, and, depending on the configuration, also a Service or Ingress resource. This is basically a much-enhanced version of https://github.com/pulumi/examples/blob/master/kubernetes-ts-guestbook/components/k8sjs.ts#L9 . It also contains a few utilities to deal with GKE in particular.

3. We have multiple application repositories (7 or so) which contain the application code for those aforementioned 15-20 services (some of those repos are mono-repos, so they contain multiple services). Each of those app repos has a subdirectory `infra` which makes heavy use of our own pulumi-util library to build and deploy the app to the clusters that were set up beforehand via the `infrastructure-shared` repository.

One important side note is that we even set up some core infrastructure like Redis and RabbitMQ inside the app repositories when it is not shared by most of the apps (which is the case for Redis and RabbitMQ, which are really only used by one application each).

Typically the code inside each app repository is very brief because it reuses abstractions from our pulumi-util library. E.g. this is how the code in the `infra` directory of one of our apps looks:
```typescript
import { SolvvyApp } from '@solvvy/pulumi-util';

const app = new SolvvyApp({
  buildContext: __dirname + '/../',
  service: {
    internalHttpPort: 1337,
    expose: 'VPCInternal'
  },
  env: {
    NODE_ENV: 'k8s'
  }
});

export const url = app.url;
```
This little bit of code builds a Docker image, pushes it to GCR, creates a k8s Service of type LoadBalancer (which results in a GCP internal TCP load balancer), assigns a DNS entry via kube external-dns, etc. The application name is inferred by reading the package.json under buildContext (`SolvvyApp` contains that logic), and the environment name (which gets translated to a namespace) is inferred from the last segment of the stack name (the string after the last '-' in the stack name).