A production-ready PySpark project template with medallion architecture, Python packaging, unit tests, integration tests, test coverage reporting, CI/CD automation, Declarative Automation Bundles, and the DQX data quality framework.
This project template is designed to boost productivity and promote maintainability when developing ETL pipelines on Databricks. It aims to bring software engineering best practices, such as modular architecture, automated unit and integration testing, and CI/CD, into the world of data engineering. By combining a clean project structure with a robust development and deployment workflow, this template helps teams move faster with confidence.
You're encouraged to adapt the structure and tooling to suit your project's specific needs and environment.
Interested in bringing these principles into your own project? Let's connect on LinkedIn.
- Databricks Free Edition (Serverless)
- Databricks Runtime 18.0 LTS
- Databricks Unity Catalog
- Databricks Declarative Automation Bundles (formerly Databricks Asset Bundles)
- Databricks CLI
- Databricks Python SDK
- Databricks DQX
- PySpark 4.1
- Python 3.12+
- GitHub Actions
- Pytest
This project template demonstrates how to:
- structure PySpark code inside classes/packages instead of notebooks (see the sketch after this list).
- package and deploy code to different environments (dev, staging, prod).
- use a CI/CD pipeline with GitHub Actions.
- run unit tests on transformations with the pytest package; set up VS Code to run unit tests on your local machine.
- run integration tests by setting up the input data and validating the output data.
- isolate "dev" environments / catalogs to avoid concurrency issues between developer tests.
- show the developer name and branch as job tags to make issues easier to trace.
- utilize the coverage package to generate test coverage reports.
- utilize uv as a project/package manager.
- configure jobs to run tasks selectively.
- use the medallion architecture pattern.
- lint and format code with ruff and pre-commit.
- use a Makefile to automate repetitive tasks.
- utilize the argparse package to build a flexible command-line interface to start jobs.
- utilize Databricks Declarative Automation Bundles to package/deploy/run a Python wheel package on Databricks.
- configure jobs to run across multiple environments by generating environment-specific job definitions using the Databricks SDK.
- utilize Databricks DQX to define and enforce data quality rules, such as null checks, uniqueness, thresholds, and schema validation, and filter bad data into quarantine tables.
- utilize service principals to run production code.
- utilize the Databricks SDK for Python to manage workspaces and accounts and analyze costs. Refer to the scripts folder for some examples.
- utilize Databricks Unity Catalog and get data lineage for your tables and columns.
- utilize Databricks Lakeflow Jobs to execute a DAG, and use task parameters to share context information between tasks (see the Task Parameters section). Yes, you don't need Airflow to manage your DAGs here!
- utilize serverless job clusters on Databricks Free Edition to deploy your pipelines.
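To make the packaging idea concrete, here is a minimal sketch of what one task module could look like. The class, method, and table names (`GenerateOrders`, `transform`, `raw_order_events`) are illustrative assumptions, not the template's actual implementation; the real tasks build on `src/template/baseTask.py`.

```python
# Hypothetical sketch of a task such as src/template/job1/generate_orders.py.
# Names and table layout are assumptions for illustration only.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


class GenerateOrders:
    """Reads raw (bronze) order events and writes a cleaned silver table."""

    def __init__(self, spark: SparkSession, catalog: str, schema: str):
        self.spark = spark
        self.catalog = catalog
        self.schema = schema

    @staticmethod
    def transform(events: DataFrame) -> DataFrame:
        # Pure transformation: easy to unit-test with a local SparkSession.
        return (
            events.filter(F.col("status") == "completed")
            .withColumn("order_date", F.to_date("created_at"))
            .select("order_id", "customer_id", "order_date", "amount")
        )

    def run(self) -> None:
        source = self.spark.table(f"{self.catalog}.{self.schema}.raw_order_events")
        result = self.transform(source)
        result.write.mode("overwrite").saveAsTable(f"{self.catalog}.{self.schema}.orders")
```

Keeping the transformation a pure DataFrame-in, DataFrame-out method is what makes the unit and integration tests described above straightforward.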
For a debate on the use of notebooks vs. Python packaging, please refer to:
- The Rise of The Notebook Engineer
- Please don't make me use Databricks notebooks
- this LinkedIn thread by Daniel Beach
- this LinkedIn thread by Ryan Chynoweth
- this LinkedIn thread by Jaco van Gelder
Sessions on Databricks Declarative Automation Bundles, CI/CD, and Software Development Life Cycle at Data + AI Summit 2025:
- CI/CD for Databricks: Advanced Asset Bundles and GitHub Actions
- Deploying Databricks Asset Bundles (DABs) at Scale
- A Prescription for Success: Leveraging DABs for Faster Deployment and Better Patient Outcomes
Other:
- Goodbye Pip and Poetry. Why UV Might Be All You Need
- The Spark Revolution You Didn't See Coming: How Apache Spark 4.0 in Databricks Just Changed Everything
```
databricks-template/
│
├── .github/                          # CI/CD automation
│   └── workflows/
│       └── onpush.yml                # GitHub Actions pipeline
│
├── src/                              # Main source code
│   └── template/                     # Python package
│       ├── main.py                   # Entry point with CLI (argparse)
│       ├── config.py                 # Configuration management
│       ├── baseTask.py               # Base class for all tasks
│       ├── commonSchemas.py          # Shared PySpark schemas
│       ├── job1/                     # Job-specific tasks
│       │   ├── extract_source1.py
│       │   ├── extract_source2.py
│       │   ├── generate_orders.py
│       │   ├── generate_orders_agg.py
│       │   ├── integration_setup.py
│       │   └── integration_validate.py
│       └── job2/                     # Additional job tasks
│
├── tests/                            # Unit tests
│   ├── job1/
│   │   └── unit_test.py              # Pytest unit tests
│   └── job2/
│
├── resources/                        # Databricks workflow templates
│   └── jobs.yml                      # Generated job definition (auto-created)
│
├── scripts/                          # Helper scripts
│   ├── sdk_generate_template_job.py  # Job definition generator (Databricks SDK)
│   ├── sdk_init.py                   # Workspace initialization
│   ├── sdk_analyze_job_costs.py      # Cost analysis script
│   └── sdk_workspace_and_account.py  # Workspace and account management
│
├── docs/                             # Documentation assets
│   ├── dag.png
│   ├── task_output.png
│   ├── data_lineage.png
│   ├── data_quality.png
│   └── ci_cd.png
│
├── dist/                             # Build artifacts (Python wheel)
├── coverage_reports/                 # Test coverage reports
│
├── databricks.yml                    # Declarative Automation Bundle config
├── pyproject.toml                    # Python project configuration (uv)
├── Makefile                          # Build automation
├── .pre-commit-config.yaml           # Pre-commit hooks (ruff)
└── README.md                         # This file
```
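The scripts/sdk_generate_template_job.py helper produces the job definition stored in resources/jobs.yml. A rough sketch of that approach, using Databricks SDK dataclasses and dumping them to YAML, is shown below. The task keys, entry point, and exact bundle YAML layout are assumptions for illustration, not the template's actual generator.

```python
# Hypothetical sketch of generating resources/jobs.yml with the Databricks SDK.
# Task names, entry point, and the bundle YAML layout are assumptions.
import yaml
from databricks.sdk.service import jobs


def build_job(env: str) -> jobs.JobSettings:
    def wheel_task(key: str, depends_on: list[str] | None = None) -> jobs.Task:
        # Each task runs the same wheel entry point with task/env parameters.
        return jobs.Task(
            task_key=key,
            python_wheel_task=jobs.PythonWheelTask(
                package_name="template",
                entry_point="main",
                parameters=["--task", key, "--env", env],
            ),
            depends_on=[jobs.TaskDependency(task_key=d) for d in (depends_on or [])],
        )

    return jobs.JobSettings(
        name=f"template_job1_{env}",
        tasks=[
            wheel_task("extract_source1"),
            wheel_task("extract_source2"),
            wheel_task("generate_orders", depends_on=["extract_source1", "extract_source2"]),
        ],
        tags={"env": env},
    )


if __name__ == "__main__":
    job = build_job("dev")
    with open("resources/jobs.yml", "w") as f:
        yaml.safe_dump({"resources": {"jobs": {"template_job1": job.as_dict()}}}, f)
```

Generating the definition from code keeps the DAG, tags, and parameters consistent across environments instead of hand-editing YAML per environment.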
- Create a workspace. Use a Databricks Free Edition workspace.
- Install and configure the Databricks CLI on your local machine. Check the required version in databricks.yml. Follow the instructions here.
- Build the Python environment and execute the unit tests on your local machine: `make sync && make test`
- Create an external location in Databricks and update the `storage-root` parameter in the Makefile, then run `make init`. This step creates the catalogs, schemas, service principal, and the required grants (a rough sketch of what it does is shown below). For more details, see Overview of external locations.
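  The kind of setup that `make init` drives (via scripts/sdk_init.py) can be sketched with the Databricks SDK roughly as follows. The catalog and schema names, storage root, and service principal name are placeholder assumptions, not the template's actual values:

  ```python
  # Rough sketch of workspace initialization with the Databricks SDK.
  # Catalog/schema names, storage root, and principal name are placeholder assumptions.
  from databricks.sdk import WorkspaceClient

  w = WorkspaceClient()  # picks up credentials from your .databrickscfg profile

  storage_root = "s3://my-bucket/databricks-template"  # should match the Makefile's storage-root

  for catalog_name in ["dev_user1", "staging", "prod"]:
      w.catalogs.create(name=catalog_name, storage_root=storage_root)
      for schema_name in ["bronze", "silver", "gold"]:
          w.schemas.create(name=schema_name, catalog_name=catalog_name)

  # The real script also grants the service principal access
  # to the staging and prod catalogs.
  sp = w.service_principals.create(display_name="template-deployer")
  ```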
- Generate a secret for the service principal. In Databricks, go to: Workspace -> Settings -> Identity and access -> Service principals -> Secrets. Generate a new secret for your service principal and update the corresponding profiles in your `.databrickscfg` file. Your configuration should look similar to this:

  ```ini
  [dev]
  host = https://xxxx.cloud.databricks.com/
  token = bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb

  [staging]
  host = https://xxxx.cloud.databricks.com/
  client_id = yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
  client_secret = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa

  [prod]
  host = https://xxxx.cloud.databricks.com/
  client_id = yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
  client_secret = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
  ```
- Deploy and execute on the dev workspace: `make deploy env=dev`
- Configure CI/CD automation. Configure the GitHub Actions repository secrets (DATABRICKS_HOST, DATABRICKS_PRINCIPAL_ID, DATABRICKS_SECRET).
- You can also execute unit tests from your preferred IDE. Here's a screenshot from VS Code with Microsoft's Python extension installed.
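Unit tests run against a plain local SparkSession, with no Databricks connectivity required. The sketch below exercises the hypothetical transformation shown earlier; the module path and test names are illustrative assumptions, not the template's actual tests.

```python
# Hypothetical sketch of a test like tests/job1/unit_test.py; names are illustrative.
import pytest
from pyspark.sql import SparkSession

from template.job1.generate_orders import GenerateOrders  # assumed module path


@pytest.fixture(scope="session")
def spark():
    # A small local SparkSession is enough for testing pure transformations.
    return SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()


def test_transform_keeps_only_completed_orders(spark):
    events = spark.createDataFrame(
        [
            ("o1", "c1", "2024-01-01", 10.0, "completed"),
            ("o2", "c2", "2024-01-02", 20.0, "cancelled"),
        ],
        ["order_id", "customer_id", "created_at", "amount", "status"],
    )

    result = GenerateOrders.transform(events)

    assert result.count() == 1
    assert result.first()["order_id"] == "o1"
```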
- task (required) - determines the current task to be executed.
- env (required) - determines the AWS account where the job is running. This parameter also defines the default catalog for the task.
- user (required) - determines the name of the catalog when env is "dev".
- schema (optional) - determines the default schema to read/store tables.
- skip (optional) - determines if the current task should be skipped.
- debug (optional) - determines whether the current task should run its debug code path.
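A minimal sketch of how main.py could wire these parameters together with argparse is shown below; the dispatch logic and defaults are assumptions for illustration, not the template's actual entry point.

```python
# Hypothetical sketch of the argparse CLI in src/template/main.py.
# Dispatch logic and defaults are assumptions for illustration.
import argparse


def parse_args(argv=None) -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Run a single pipeline task.")
    parser.add_argument("--task", required=True, help="Task to execute (e.g. generate_orders)")
    parser.add_argument("--env", required=True, choices=["dev", "staging", "prod"])
    parser.add_argument("--user", required=True, help="Catalog name to use when env is 'dev'")
    parser.add_argument("--schema", default="default", help="Default schema for reads/writes")
    parser.add_argument("--skip", action="store_true", help="Skip the current task")
    parser.add_argument("--debug", action="store_true", help="Enable the debug code path")
    return parser.parse_args(argv)


def main(argv=None) -> None:
    args = parse_args(argv)
    if args.skip:
        print(f"Skipping task {args.task}")
        return
    catalog = args.user if args.env == "dev" else args.env
    print(f"Running task {args.task} on catalog {catalog}, schema {args.schema}")
    # The real entry point would look up the task class here and call its run() method.


if __name__ == "__main__":
    main()
```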





