# general
Hi all, I have the following use case. Here sharing an example.
import pulumi
import pulumi_azure_native as azure_native

# Create an Azure Resource Group
resource_group = azure_native.resources.ResourceGroup('resource_group')

# Create an Azure Container Registry
container_registry = azure_native.containerregistry.Registry(
    'registry',
    resource_group_name=resource_group.name,
    admin_user_enabled=True,
    sku=azure_native.containerregistry.SkuArgs(name='Basic'))  # Change to 'Standard' or 'Premium' for production workloads

# Registry admin credentials, used by the App Service to pull images
credentials = azure_native.containerregistry.list_registry_credentials_output(
    resource_group_name=resource_group.name,
    registry_name=container_registry.name)

# Create an App Service Plan with Linux
service_plan = azure_native.web.AppServicePlan(
    'service_plan',
    resource_group_name=resource_group.name,
    kind='Linux',
    reserved=True,  # This is required for Linux plan creation
    sku=azure_native.web.SkuDescriptionArgs(name='B1', tier='Basic'))  # Change as needed for scaling

# Create an App Service using an image from the Container Registry
app_service = azure_native.web.WebApp(
    'app_service',
    resource_group_name=resource_group.name,
    server_farm_id=service_plan.id,
    site_config=azure_native.web.SiteConfigArgs(
        linux_fx_version=container_registry.login_server.apply(lambda s: f'DOCKER|{s}/app:latest'),
        app_settings=[
            azure_native.web.NameValuePairArgs(
                name='DOCKER_REGISTRY_SERVER_URL',
                value=container_registry.login_server.apply(lambda s: f'https://{s}')),
            azure_native.web.NameValuePairArgs(
                name='DOCKER_REGISTRY_SERVER_USERNAME',
                value=credentials.username),
            azure_native.web.NameValuePairArgs(
                name='DOCKER_REGISTRY_SERVER_PASSWORD',
                value=pulumi.Output.secret(credentials.passwords[0].value))]))

# Export the Azure App Service endpoint
pulumi.export('app_service_endpoint', app_service.default_host_name.apply(lambda host: f'http://{host}'))
So essentially a pretty basic setup with a container-based app: create an "App Service Plan", an "App", and an "Azure Container Registry". This app is always going to use the latest Docker image. Here is what I want to do:
1. Use this example to create the new resources - this is a one-time operation
2. Use the relevant GitHub repository of a particular app and run the existing GitHub workflow for the first time, so that this new resource includes the latest app updates from the start
3. Generally, we have a simple pipeline from GitHub: whenever a new PR is approved and merged into the main branch, we execute the relevant workflow(s) to push the latest changes to the Azure Registry
Maybe I am not doing or thinking about this in the right way, so if someone can provide a better approach, I will be really grateful.
And what exactly isn't working in this setup for you? What does your GHA workflow look like? Assuming that you're not seeing your app service pick up newly built containers:
• Please note that App Service doesn't automatically pull & redeploy the newest manifest for a given container tag unless you restart the app service, which of course isn't optimal. For that you have two options:
◦ Use the Azure/webapps-deploy action to trigger an update. I recommend going with the az login with a service principal example instead of using a publish-profile.
◦ Container Registry webhooks – which I would advise against (IMO deployments should happen within the CI/CD workflow, where you see the deployment failing)
And some feedback on the example you provided:
• Disable registry admin credentials:
◦ Enable managed identity on the App Service
◦ Assign the AcrPull role on the App Service. See this gist for how to do that in Pulumi/Python.
◦ Remove the docker login related ENV vars
◦ Set the acr_use_managed_identity_creds option in site_config
• Use OIDC credentials in your workflow instead of client credentials. I used this bash script in the past before migrating the provisioning of this to Pulumi (can't share that right now). It will automatically create the service principal and store the credential as an environment secret in your GitHub repo if the gh CLI is present. Otherwise see the GitHub and Azure docs.
• Don't use latest tags for containers, as it's an anti-pattern and can cause various problems and confusion
◦ Instead use the docker-metadata-action to automatically tag your images with the branch name and the github sha. Pull request builds will get a pr-number tag. It can even support semver tags if you implement that.
◦ This makes it much clearer which version of your container is actually deployed
• For App Service production workloads which need zero downtime:
◦ Go with the Premium tier app services
◦ Create deployment slots
◦ Have your CI/CD workflow deploy release candidates to a staging slot and then swap that into production
◦ I've used az cli in a workflow for doing that like so:
az webapp deployment slot swap -s slot_name -n app_name -g resource_group
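A rough sketch of the admin-credentials-free setup described in the bullets above, assuming the resource_group, container_registry and service_plan objects from the earlier example; the role-definition GUID is Azure's built-in AcrPull role, and the image name app is a placeholder:

```python
import pulumi
import pulumi_azure_native as azure_native

# App Service with a system-assigned managed identity instead of
# registry admin credentials.
app_service = azure_native.web.WebApp(
    'app_service',
    resource_group_name=resource_group.name,
    server_farm_id=service_plan.id,
    identity=azure_native.web.ManagedServiceIdentityArgs(type='SystemAssigned'),
    site_config=azure_native.web.SiteConfigArgs(
        # Tell App Service to pull from ACR using the managed identity,
        # so no DOCKER_REGISTRY_SERVER_* app settings are needed.
        acr_use_managed_identity_creds=True,
        linux_fx_version=container_registry.login_server.apply(
            lambda s: f'DOCKER|{s}/app:latest')))

# Grant that identity the built-in AcrPull role on the registry.
# 7f951dda-4ed3-4680-a7ca-43fe172d538d is the AcrPull role definition ID.
acr_pull = azure_native.authorization.RoleAssignment(
    'acr_pull',
    principal_id=app_service.identity.apply(lambda i: i.principal_id),
    principal_type='ServicePrincipal',
    role_definition_id=pulumi.Output.concat(
        '/subscriptions/',
        azure_native.authorization.get_client_config_output().subscription_id,
        '/providers/Microsoft.Authorization/roleDefinitions/',
        '7f951dda-4ed3-4680-a7ca-43fe172d538d'),
    scope=container_registry.id)
```

This is a sketch under those assumptions, not a drop-in replacement; the gist linked above shows the pattern the reply is actually referring to.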
Thank you very much for the suggestions and for sharing the best practices, I will consider them. The idea behind sharing this was to start from an example rather than just asking a generic question. Here is an example of the workflow. Is there any way that I can execute this workflow after creating the resources via Pulumi? The problem is that I need to pass the right credentials and names of the Azure resources before executing it. Maybe I am doing it all wrong?
name: Build and deploy - Test

on:
  push:
    paths-ignore:
      - '**/README.md'
      - '**/.github/workflows/**.yml'
    branches:
      - develop

jobs:
  lint:
    runs-on: ubuntu-latest
    environment: Testing
    steps:
      - uses: actions/checkout@v2
      - uses: psf/black@stable

  build-and-deploy:
    runs-on: ubuntu-latest
    environment: Testing
    needs: lint
    steps:
    - uses: actions/checkout@v2

    - uses: azure/docker-login@v1
      with:
        login-server: ${{ secrets.ACR_DEV_SERVER }}
        username: ${{ secrets.ACR_DEV_USERNAME }}
        password: ${{ secrets.ACR_DEV_PASSWORD }}

    - run: |
        docker build -t ${{ secrets.ACR_DEV_SERVER }}/repo-name:${{ github.sha }} .
        docker push ${{ secrets.ACR_DEV_SERVER }}/repo-name:${{ github.sha }}

    - uses: azure/webapps-deploy@v2
      with:
        app-name: 'App Name'
        publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
        images: '${{ secrets.ACR_DEV_SERVER }}/repo-name:${{ github.sha }}'
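For reference, applying the earlier advice (OIDC login instead of docker-login credentials, metadata-based tags instead of a fixed tag) could reshape the deploy steps roughly like this - a sketch, where ACR_NAME, AZURE_CLIENT_ID, AZURE_TENANT_ID and AZURE_SUBSCRIPTION_ID are assumed secrets and repo-name is the placeholder from above:

```yaml
    steps:
    - uses: actions/checkout@v4

    # OIDC login; requires a federated credential on the service principal
    - uses: azure/login@v2
      with:
        client-id: ${{ secrets.AZURE_CLIENT_ID }}
        tenant-id: ${{ secrets.AZURE_TENANT_ID }}
        subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

    - run: az acr login --name ${{ secrets.ACR_NAME }}

    # Generate a sha-<full sha> tag instead of a fixed tag
    - id: meta
      uses: docker/metadata-action@v5
      with:
        images: ${{ secrets.ACR_NAME }}.azurecr.io/repo-name
        tags: type=sha,format=long

    - uses: docker/build-push-action@v6
      with:
        context: .
        push: true
        tags: ${{ steps.meta.outputs.tags }}

    - uses: azure/webapps-deploy@v2
      with:
        app-name: 'App Name'
        images: ${{ secrets.ACR_NAME }}.azurecr.io/repo-name:sha-${{ github.sha }}
```

With az login in place, the publish-profile secret is no longer needed for the deploy step.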
Think of it as using Pulumi for the first time to spin up new resources, and then using some GitHub repo and associated workflows to update the data in those resources automatically. Imagine I want to create a storage account and then put some data, say 'stable-data', in that resource. Currently, the way I see it is like this:
• Create a Pulumi script
• Execute it and create the relevant resource in Azure
• Get the credentials information of the resource
• Use those as secrets and variables in a GitHub workflow file [manually]
• Execute the workflow as usual, manually or automated in GitHub, e.g. when a PR is merged
Is it possible to automate the whole flow described above, even the manual part, via Pulumi?
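A minimal sketch of scripting the manual hand-off steps above, assuming the pulumi and gh CLIs are authenticated, the workflow environment is called Testing, and the stack exports an output named registry_server (all of these names are assumptions):

```shell
#!/usr/bin/env sh
# Read a value out of the freshly deployed stack...
ACR_SERVER=$(pulumi stack output registry_server)

# ...store it as an environment secret in the GitHub repo...
gh secret set ACR_DEV_SERVER --env Testing --body "$ACR_SERVER"

# ...and trigger the workflow for its first run.
gh workflow run "Build and deploy - Test" --ref develop
```

This only illustrates the glue; the reply below describes the cleaner approach of running Pulumi inside the workflow itself.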
Sure, you could automate all of that in a single workflow... For doing Pulumi deployments from within GHA you can use the Pulumi CLI action (pulumi/actions):
• You'll need to create a Pulumi access token if you want to use Pulumi Cloud to store the state
• Set work-dir to where your Pulumi code lives
• Set command: up | preview
If you export outputs from the stack, these can then be referenced in consecutive workflow steps - assuming your pulumi step has id: pulumi and your stack has an output named registry_endpoint, you can reference it as ${{ steps.pulumi.outputs.registry_endpoint }}
And if you install the Pulumi GitHub App you will get automatic comments on your pull requests with the results of a pulumi action.
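Putting those pieces together, a sketch of such a workflow step, with the dev stack name and the ./infra directory as assumptions:

```yaml
    steps:
    - uses: actions/checkout@v4

    - id: pulumi
      uses: pulumi/actions@v5
      with:
        command: up
        stack-name: dev
        work-dir: ./infra
      env:
        PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}

    # Stack outputs are exposed as step outputs for later steps
    - run: echo "Registry is ${{ steps.pulumi.outputs.registry_endpoint }}"
```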
Thanks a lot, I will try it and get back to you when required