Fabric | dbt – Docker dbt and Azure Container Apps (CI/CD)

For the cloud-based Warehouse built on top of MS Fabric, we already have a prepared Lakehouse and DWH environment and, among other things, a configured dbt project. Now comes an important DataOps phase: we need to think about two things:

  1. From which environment (ideally serverless) we will batch-run the dbt project in the future.
  2. How to implement a Continuous Integration and Continuous Delivery (CI/CD) process that ensures automatic and secure deployment of verified transformation code from the repository before each dbt execution.

Azure Container Apps Jobs (ACA Jobs) [1] provides an ideal serverless platform for running one-time or scheduled batch tasks, which form the foundation of ETL/ELT processes. This technical guide explains in detail how to properly containerize and run these critical DWH components.

Introduction to Azure Container Apps

Azure Container Apps (ACA) is a service for running containerized applications without the need to manage infrastructure (Kubernetes, VMs, etc.). It supports both long-running services (APIs, web apps) and short-lived jobs. The key features include scalability, integration with Azure Monitor, networking, and secure connectivity to other Azure services (e.g., Key Vault, Storage, Database, Event Grid).

Azure Container App Jobs

Since we will be operating a data solution built on dbt that runs in batch mode once per day, the most relevant service for us is Azure Container Apps Jobs. It is a special type of Container App designed for batch, scheduled, or event-driven jobs that do not run continuously. It is suitable for use cases such as:

  • periodic batch processing (cron),
  • CI/CD steps,
  • data transformations — e.g., executing dbt pipelines.

Types of jobs:

  • Manual – executed manually via CLI or API,
  • Scheduled – executed automatically based on a crontab schedule,
  • Event-driven – triggered by events from Azure Event Grid or Service Bus.

Each job runs as a short-lived container with a defined command (e.g., dbt run or dbt test) or without one (the command is defined inside the container).
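Although we will create the job through the Azure Portal later in this guide, the same scheduled job can also be sketched with the Azure CLI. All resource names, the cron expression, and the sizing below are illustrative assumptions:

```shell
# Sketch: create a scheduled Container Apps Job via the Azure CLI.
# Resource group, environment, job, and image names are placeholders.
az containerapp job create \
  --name dbt-daily-job \
  --resource-group rg-dwh \
  --environment aca-env-dwh \
  --trigger-type "Schedule" \
  --cron-expression "0 3 * * *" \
  --image containerregjanzednicek.azurecr.io/dbt:latest \
  --registry-server containerregjanzednicek.azurecr.io \
  --cpu 0.5 --memory 1.0Gi \
  --replica-timeout 7200 \
  --replica-retry-limit 1 \
  --parallelism 1 --replica-completion-count 1
```

The cron expression "0 3 * * *" fires once per day at 03:00 UTC, and the 7200-second replica timeout leaves headroom for roughly two hours of daily runtime.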

Cost: In the Basic tier (up to 10 GB), approximately 5 USD/month for ~2 hours of runtime per day.

Azure Container Registry (ACR)

To run any job, we first need a secure place to store our Docker image. ACR is a fully managed private registry for Docker images. The main benefits include:

  • secure image storage within an Azure subscription,
  • integration with Azure Container Apps, AKS, Azure DevOps, and GitHub Actions,
  • authentication using Managed Identity (no need to store passwords),
  • automatic builds or importing images from public registries (e.g., Docker Hub).

Cost: around 5 USD/month
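For completeness, provisioning a Basic-tier registry is a single CLI call; the registry and resource-group names below are placeholders:

```shell
# Sketch: create a Basic-tier Azure Container Registry (names are placeholders)
az acr create \
  --name containerregjanzednicek \
  --resource-group rg-dwh \
  --sku Basic \
  --admin-enabled false
```

Disabling the admin account nudges you toward Managed Identity authentication, in line with the benefit listed above.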

Containerizing the dbt Project and Storing It in Azure Container Registry

Now that we know where our container will run, the next step is to create it. To build the Docker image:

  1. Create a new file in the dbt project called Dockerfile – this defines what the container environment includes: Python, the dbt adapters, ODBC Driver 18, and the definition of the run script.
  2. Create another file in the dbt project named Dockerfilerun. This file defines what should be executed inside the Docker container.
  3. Next, build the image by running the following command in the terminal. Secrets (GIT PAT and client secret for Fabric connection) will be passed later in Azure — embedding them in Docker is a security risk.

docker buildx build --platform linux/amd64 \
  --secret id=github_pat,src=/Users/janzednicek/pat_git.txt \
  --no-cache \
  -t dbt:latest .

dbt-docker-image
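A minimal sketch of what such a Dockerfile might contain, assuming the dbt-fabric adapter and a Debian-based Python base image; package names, versions, and paths are assumptions, not the article's actual file:

```dockerfile
# Sketch of a possible Dockerfile; adapter, versions, and paths are assumptions.
FROM python:3.11-slim

# ODBC Driver 18 for SQL Server, needed by the Fabric adapter
RUN apt-get update && apt-get install -y curl gnupg2 \
    && curl -fsSL https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
    && curl -fsSL https://packages.microsoft.com/config/debian/12/prod.list \
         > /etc/apt/sources.list.d/mssql-release.list \
    && apt-get update && ACCEPT_EULA=Y apt-get install -y msodbcsql18

# dbt core plus the Fabric adapter
RUN pip install --no-cache-dir dbt-core dbt-fabric

WORKDIR /app
COPY . /app

# Dockerfilerun is the run script executed when the container starts
RUN chmod +x /app/Dockerfilerun
ENTRYPOINT ["/app/Dockerfilerun"]
```

Since the build command above mounts a github_pat secret, any build step that needs it would consume it via RUN --mount=type=secret,id=github_pat rather than baking it into a layer.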

  4. Test the Docker image locally using the docker run command, providing the required environment variables (secrets). If successful, the container should execute dbt snapshot and then dbt run, as defined in the Dockerfilerun file.
docker run --rm \
  -e DBT_SECRET="f7y8Q………" \
  -e GITHUB_PAT="github_pat_……." \
  dbt:latest
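The Dockerfilerun script is not reproduced in this text; a plausible sketch, assuming the repository URL and paths are placeholders and matching the snapshot-then-run order described above:

```shell
#!/bin/sh
# Sketch of a possible Dockerfilerun; repo URL and paths are placeholders.
set -e

# Fail fast if the secrets were not injected as environment variables
: "${DBT_SECRET:?DBT_SECRET is not set}"
: "${GITHUB_PAT:?GITHUB_PAT is not set}"

# Pull the latest verified transformation code before every run (the CI/CD step)
git clone "https://${GITHUB_PAT}@github.com/your-org/your-dbt-project.git" /tmp/project
cd /tmp/project

dbt deps       # install dbt package dependencies
dbt snapshot   # snapshots first...
dbt run        # ...then the models
```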

  5. If the image works correctly, push it to the Azure Container Registry created earlier:

az login
az acr login --name containerregjanzednicek
docker tag dbt:latest containerregjanzednicek.azurecr.io/dbt:latest
docker push containerregjanzednicek.azurecr.io/dbt:latest
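The tag used above follows the <registry-name>.azurecr.io/<repository>:<tag> convention that ACR expects; a tiny sketch of how the fully qualified reference is composed:

```shell
# Compose the fully qualified ACR image reference from its parts
ACR_NAME="containerregjanzednicek"
REPOSITORY="dbt"
TAG="latest"
TARGET="${ACR_NAME}.azurecr.io/${REPOSITORY}:${TAG}"
echo "$TARGET"   # containerregjanzednicek.azurecr.io/dbt:latest
```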

dbt-docker-push-container-reg

  6. After pushing the image, you can verify it in your Azure Portal resource. The final step is to execute it via Azure Container App Jobs.

dbt-docker-image-azure-portal

Running dbt through Azure Container App Jobs

  1. Create a new Container App Job resource.
  2. Configure the container as shown in the image — Container Apps will automatically detect the Docker image stored in ACR. Do not specify any command or arguments; everything is controlled by the run script inside the Docker image.

  3. Since we externalized secrets into variables, configure the Environment variables — we need two of them: DBT_SECRET for Fabric authentication and GITHUB_PAT for repository access before dbt runs. Each variable refers to a secret stored either in Container Secrets (Settings → Secrets) or Azure Key Vault. If using Key Vault, make sure proper access rights are configured.
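If the job is created with the CLI instead of the Portal, the same wiring can be expressed at creation time: the secrets live on the job, and each environment variable references one with the secretref: prefix. The fragment below is a sketch; job, secret, and value names are placeholders:

```shell
# Sketch: pass secrets and secret-referencing env vars when creating the job.
# These flags would accompany the rest of az containerapp job create; values are placeholders.
az containerapp job create \
  --name dbt-daily-job --resource-group rg-dwh \
  --secrets dbt-secret="<fabric-client-secret>" github-pat="<github-pat>" \
  --env-vars DBT_SECRET=secretref:dbt-secret GITHUB_PAT=secretref:github-pat
```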

container-app-jobs-secret

  4. Now we can test the functionality:
  • Create a new file hello_world.sql in the dbt project,
  • push it to the repository,
  • run the Container App Job.
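The hello_world.sql test model can be a single select; the models/ path and the materialization below are assumptions about the project layout:

```sql
-- models/hello_world.sql: a minimal model to verify the end-to-end pipeline
{{ config(materialized='table') }}

select 'hello world' as greeting
```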

container-app-job--monitor

  • Finally, verify that the table appears in Fabric.


Reference

  1. Microsoft documentation, Azure Container Apps [online]. [cit. 2025-10-25]. Available at: https://azure.microsoft.com/en-us/products/container-apps

About Ing. Jan Zedníček - Data Engineer & Controlling

My name is Jan Zedníček, and I have been working as a freelance Data Engineer for roughly 10 years. During this time, I have been publishing case studies and technical guides on this website for professionals, students, and enthusiasts interested in Data Engineering, particularly on Microsoft technologies, as well as in corporate finance and reporting solutions. 🔥 If you found this article helpful, please share it or mention me on your website or community forum.
