# python
p
I have deployed Keycloak with Pulumi Python code using Helm charts on Azure AKS. The deployment was successful. Then I deployed a Spring Boot app using Pulumi Python on the same Azure AKS cluster, but this time using a Docker image from the Azure container registry. This deployment was also successful. Now the weird part is that the previous deployment has disappeared, and it happens every time. On top of that, it even happens if the namespaces are different for the two deployments. I am new to IaC. Can somebody help, please?
a
Hi @proud-portugal-76914. Are you able to post or link to your code? That would make it much easier to figure out what is going on.
p
The Pulumi code is invoked via a REST API. Here is a simplified version of the code. First, Keycloak is deployed by calling @app.route('/kc', methods=['POST']) in routes.py:
@app.route('/kc', methods=['POST'])
def deploy():
    # args come from request.json
    threading.Thread(target=deploy_kc.deploy, args=(
        keycloak_namespace, aks_cluster_name, resource_group_name,
        keycloak_yaml, project_name, stack_name)).start()
deploy_kc.py:
def deploy_keycloak():
    release = Chart(
        "keycloak",
        ChartOpts(
            chart="keycloakx",
            version="2.3.0",
            fetch_opts=FetchOpts(repo="https://codecentric.github.io/helm-charts"),
            values=yaml.safe_load(get_config('keycloak_yaml')),
            namespace=get_config('keycloak_namespace'),
        )
        # , opts=pulumi.ResourceOptions(depends_on=[mysql_chart])
    )

def deploy(keycloak_namespace, aks_cluster_name, resource_group_name,
           keycloak_yaml, project_name, stack_name):
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    subprocess.run(
        f"az aks get-credentials -n {aks_cluster_name} -g {resource_group_name} --overwrite-existing",
        shell=True, check=True)
    stack = auto.create_or_select_stack(
        stack_name=stack_name,
        project_name=project_name,
        program=deploy_keycloak,
    )
    stack.refresh()
    up_res = stack.up()
    print(f"update summary: {up_res.summary.resource_changes}")
    # output: update summary: {'create': 7, 'delete': 1, 'same': 1}
Then the Spring Boot app is deployed by calling @app.route('/deploy_image', methods=['POST']) in routes.py:
@app.route('/deploy_image', methods=['POST'])
def deploy_resources():
    data = request.json
    threading.Thread(target=deploy_docker_img.deploy, args=(data,)).start()
deploy_docker_img.py:
def deploy(request_data):
    try:
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        set_config("request_data", request_data)
        subprocess.run(
            f"az aks get-credentials -n {aks_cluster_name} -g {resource_group_name} --overwrite-existing",
            shell=True, check=True)
        stack = auto.create_or_select_stack(
            stack_name=request_data.get('stack_name'),
            project_name=request_data.get('project_name'),
            # the inline program is the deploy_docker_image function defined below
            program=deploy_docker_image,
        )
        stack.refresh()
        up_res = stack.up()
        print(f"update summary: {up_res.summary.resource_changes}")
        # output: update summary: {'create': 3, 'delete': 7, 'same': 1}
    except Exception as e:
        print(f"deployment failed: {e}")

def deploy_docker_image():
    request_data = get_config('request_data')
    app_labels = {"app": request_data.get('app_name')}
    config_map = create_config_map(request_data)

    # Deployment
    deployment = k8s.apps.v1.Deployment(
        request_data.get('deployment_name') or request_data.get('app_name') + "-deployment",
        metadata={
            "namespace": request_data.get('namespace'),
            "name": request_data.get('deployment_name') or request_data.get('app_name') + "-deployment",
        },
        spec={
            "selector": {"matchLabels": app_labels},
            "replicas": request_data.get('replica_count') or 1,
            "template": {
                "metadata": {"labels": app_labels},
                "spec": {
                    "containers": [{
                        "name": request_data.get('app_name'),
                        "image": request_data.get('image_name'),
                        "ports": [{"containerPort": request_data.get('port') or 9087}],
                        "volumeMounts": [{
                            "name": "config-volume",
                            "mountPath": "/config",
                        }],
                    }],
                    "volumes": [{
                        "name": "config-volume",
                        "configMap": {"name": config_map.metadata.name},
                    }],
                    "imagePullSecrets": [{"name": request_data.get('image_pull_secret')}],
                },
            },
        },
    )

    service = k8s.core.v1.Service(
        "kcadminapp-service",
        metadata={
            "namespace": request_data.get('namespace'),
            "name": "kcadminapp-service",
        },
        spec={
            "selector": app_labels,
            "ports": [{
                "protocol": "TCP",
                "port": 80,
                "targetPort": request_data.get('port') or 9087,
            }],
            "type": "LoadBalancer",
        },
    )

    pulumi.export('deployment_name', deployment.metadata.name)

def create_config_map(request_data):
    config_data = {
        "application.properties": f"""
server.servlet.context-path=/{request_data.get('app_name')}
....
keycloak.smtp.ssl=false
"""
    }
    config_map = k8s.core.v1.ConfigMap(
        request_data.get('config_map_name') or "kc-admin-config",
        metadata={
            "namespace": request_data.get('namespace'),
            "name": request_data.get('config_map_name') or "kc-admin-config",
        },
        data=config_data,
    )
    return config_map
@ancient-policeman-24615 did you get a chance to look into the issue?
a
Looking at the program, I don’t actually see where any Keycloak resources are deployed; I just see k8s. I would ask about that in #kubernetes.
In the future, it’s much easier to read if you post a gist instead of a big wall of unformatted text. This is your code: https://gist.github.com/iwahbe/73b4faf19ea3e0f03201f4c2d2112e68
I’m not super familiar with k8s, but my initial concern is that you are using two different programs with the same stack. Every time you run stack.up(), Pulumi will deploy the resources you described in the stack’s program and delete all resources not described. If you run stack.up() with Program A, then run stack.up() with Program B, all resources from Program A not mentioned in Program B will be destroyed.
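For example, giving each program its own stack keeps the two deployments independent, since each stack only tracks the resources its own program declares. A minimal sketch (the stack and project names here are hypothetical, not from your code):

from pulumi import automation as auto

# Each stack tracks only the resources its own program declares,
# so updating one stack never deletes the other's resources.
keycloak_stack = auto.create_or_select_stack(
    stack_name="keycloak",           # hypothetical stack name
    project_name="aks-deployments",  # hypothetical project name
    program=deploy_keycloak,
)
keycloak_stack.up()

app_stack = auto.create_or_select_stack(
    stack_name="springboot-app",     # hypothetical stack name
    project_name="aks-deployments",
    program=deploy_docker_image,
)
app_stack.up()  # leaves the Keycloak stack's resources untouched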
p
You are absolutely right, but this is a usual scenario: I want to add on top of the previous deployments. How can we avoid destroying the existing ones? I know changing the stack can do the job, or importing existing resources. I don’t understand why Pulumi is so naive. It’s really frustrating.
l
@proud-portugal-76914 are you deploying Keycloak and the Spring Boot app with separate Pulumi runs in the same stack? If it is the same stack, then this is perfectly explainable, because Pulumi is declarative: in your code you model your desired end state, and Pulumi makes sure the end result is what you modelled. Resources you "delete" from your code will actually be removed from the infrastructure at the next Pulumi run. For more info on this, see my blog article here: https://www.pulumi.com/blog/pulumi-is-imperative-declarative-imperative/
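If both workloads really must live in one stack, the stack’s program has to declare both of them on every run; whichever one is missing from the program gets deleted. A minimal sketch reusing the functions from the code above (stack and project names are hypothetical):

from pulumi import automation as auto

def deploy_everything():
    # Declare BOTH workloads so the desired state always contains both;
    # leaving one out of the program tells Pulumi to delete it.
    deploy_keycloak()      # the Helm chart from deploy_kc.py
    deploy_docker_image()  # the Deployment/Service/ConfigMap from deploy_docker_img.py

stack = auto.create_or_select_stack(
    stack_name="shared",             # hypothetical stack name
    project_name="aks-deployments",  # hypothetical project name
    program=deploy_everything,
)
stack.up()  # reconciles to the full desired state: Keycloak + the app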