
Data Engineering Best Practices

This page is still a work in progress and exists solely for my own reference.

Avoid Multiple SparkSessions and SparkContexts

Creating multiple SparkSessions and SparkContexts in the same application can cause problems. Best practice is to use the SparkSession.builder.getOrCreate() method, which returns the existing SparkSession if one is already running and creates a new one otherwise.

# Import SparkSession from pyspark.sql
from pyspark.sql import SparkSession

# Create spark_session
spark_session = SparkSession.builder.getOrCreate()

# Print spark_session
print(spark_session)

📌 BigQuery Best Practices

Cost reduction

  • Avoid SELECT *
  • Price your queries before running them (a dry-run sketch follows this list)
  • Use clustered or partitioned tables
  • Use streaming inserts with caution
  • Materialise query results in stages
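
To price a query before running it, do a dry run. A minimal sketch using the google-cloud-bigquery Python client; the project, dataset, and table names are placeholders:

# Estimate a query's cost with a dry run (no bytes are billed)
from google.cloud import bigquery

client = bigquery.Client()

# dry_run=True validates the query and reports the bytes it would scan;
# use_query_cache=False stops a cached result from skewing the estimate
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

query = "SELECT order_id FROM `my-project.shop.orders`"  # placeholder table
job = client.query(query, job_config=job_config)

# On-demand pricing bills per byte scanned, so this number drives the cost
print(f"This query would process {job.total_bytes_processed} bytes.")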

Query performance

  • Filter on partitioned columns
  • Denormalise data
  • Use nested or repeated columns
  • Use external data sources appropriately (avoid them when query performance is the priority, since querying external data is slower than querying native tables)
  • Reduce data before using a JOIN
  • Do not treat WITH clauses as prepared statements
  • Avoid oversharding tables
  • Avoid JavaScript user-defined functions
  • Use approximate aggregation functions (HyperLogLog++)
  • Order last: apply ORDER BY and other complex operations in the outermost query to maximise performance
  • Optimise your join patterns: place the table with the largest number of rows first, followed by the table with the fewest rows, then the remaining tables in decreasing order of size (a combined sketch follows this list)
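
Several of these tips can be combined in one query. A hedged sketch, again via the Python client; the tables, the event_date partition column, and the join keys are all hypothetical:

from google.cloud import bigquery

client = bigquery.Client()

# Filter on the partition column, pre-aggregate before the JOIN, use the
# HyperLogLog++-based APPROX_COUNT_DISTINCT, and order only in the outermost query
query = """
WITH daily_sessions AS (
  SELECT user_id, APPROX_COUNT_DISTINCT(session_id) AS sessions
  FROM `my-project.analytics.events`
  WHERE event_date = '2024-01-01'  -- partition filter first
  GROUP BY user_id
)
SELECT u.country, SUM(d.sessions) AS total_sessions
FROM `my-project.analytics.users` AS u
JOIN daily_sessions AS d USING (user_id)
GROUP BY u.country
ORDER BY total_sessions DESC  -- order last
"""

for row in client.query(query).result():
    print(row.country, row.total_sessions)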

📌 dbt Best Practices Checklist

🟢 Model & Query Structure

✅ Replace hard-coded table names with source() or ref() → Ensures modularity, dynamic references, and better lineage (see the staging sketch after this checklist).
✅ Refactor queries based on a style guide (e.g., imports, CTEs, formatting) → Improves readability and maintainability.
✅ Move any source-centric transformations to the staging layer → Prevents duplication, maintains modularity.
✅ Break out staging models with joins into separate 1:1 models → Improves DAG readability and modular design.
✅ Ensure each model has a clear purpose → Staging models should clean data; marts should aggregate for reporting.
✅ Review the DAG to prevent circular dependencies in marts → Avoids difficult-to-debug pipelines.
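
As a sketch of the first two points, a minimal 1:1 staging model built on source(); the jaffle_shop source and the column names are illustrative:

-- models/staging/stg_customers.sql
-- One source table in, one cleaned model out (1:1)
with source as (
    select * from {{ source('jaffle_shop', 'customers') }}
),

renamed as (
    select
        id as customer_id,
        first_name,
        last_name
    from source
)

select * from renamed

Downstream marts then select from {{ ref('stg_customers') }} instead of a hard-coded table name, which lets dbt build the DAG and lineage automatically.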

๐Ÿ” Debugging & Issue Resolution

✅ Check the target/ folder when models fail → View compiled SQL for easier debugging (commands after this checklist).
✅ Debug macros by isolating logic, printing logs, or running snippets → Helps pinpoint issues efficiently.
✅ Use dbt debug before running models → Ensures database connection and configuration are correct.
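
For reference, the corresponding commands; the project and model paths are placeholders:

# Verify credentials, connection, and project configuration
dbt debug

# Compile without running, then inspect the SQL dbt actually generates
dbt compile
cat target/compiled/my_project/models/marts/fct_orders.sql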

🚀 Performance Optimisation (For Big Data & PostgreSQL)

✅ Use is_incremental() for large tables instead of full refreshes → Reduces load, optimises compute costs (a model sketch follows this checklist).
✅ Partition large tables (if supported by the warehouse) → Improves query performance.
✅ Leverage indexes on primary keys & frequently queried columns → PostgreSQL benefits from well-defined indexes.
✅ Prefer table materialisation over view for expensive transformations → Reduces computation overhead.
✅ Ensure proper clustering & ordering of data for faster retrieval → Especially for frequently queried columns.
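
A minimal is_incremental() sketch; the model, column names, and unique key are hypothetical:

-- models/marts/fct_events.sql
{{ config(materialized='incremental', unique_key='event_id') }}

select
    event_id,
    user_id,
    created_at
from {{ ref('stg_events') }}

{% if is_incremental() %}
-- on incremental runs, only pick up rows newer than what the table already holds
where created_at > (select max(created_at) from {{ this }})
{% endif %}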

🧪 Data Quality & Testing (Great Expectations & dbt Tests)

✅ Define unique and not_null tests in schema.yml → Ensures data integrity and quality (sketch after this checklist).
✅ Use dbt test to validate relationships & referential integrity → Avoids orphaned or duplicate records.
✅ Add column-level constraints where applicable (e.g., data types, ranges) → Prevents unexpected data issues.
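
A schema.yml sketch covering all three points; the model and column names are examples:

# models/staging/schema.yml
version: 2

models:
  - name: stg_orders
    columns:
      - name: order_id
        tests:
          - unique
          - not_null
      - name: customer_id
        tests:
          - not_null
          # referential integrity: every order must point at a known customer
          - relationships:
              to: ref('stg_customers')
              field: customer_id
      - name: status
        tests:
          - accepted_values:
              values: ['placed', 'shipped', 'returned']

Run dbt test to execute all of these against the warehouse.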

📌 dbt Configuration & Best Practices

✅ Move model-specific configs from dbt_project.yml to individual models (if needed) → Reduces project-level clutter.
✅ Use config() blocks for clear, inline model settings → Makes it easier to track per-model configurations (sketch after this checklist).
✅ Define meaningful descriptions in schema.yml for documentation → Helps maintain clarity on model usage.
✅ Use meta fields for additional metadata (e.g., owner, SLA) → Useful for governance and documentation.
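
An inline config() block and the matching schema.yml metadata, as a sketch; the names are hypothetical:

-- models/marts/fct_payments.sql
{{ config(
    materialized='table',
    tags=['finance']
) }}

select * from {{ ref('stg_payments') }}

# models/marts/schema.yml
models:
  - name: fct_payments
    description: "One row per successful payment; grain = payment_id."
    meta:
      owner: data-engineering
      sla: daily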

📊 Monitoring & Logging

✅ Review dbt's log files (logs/dbt.log) to track run issues → Helps diagnose failures faster.
✅ Regularly check dbt source freshness for data latency issues → Ensures upstream sources are updated as expected (config sketch after this checklist).
✅ Automate dbt runs with scheduling (e.g., Airflow, dbt Cloud) → Ensures timely updates and monitoring.
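
Freshness is declared on the source definition and checked with dbt source freshness; the thresholds and names below are illustrative:

# models/staging/sources.yml
version: 2

sources:
  - name: jaffle_shop
    loaded_at_field: _etl_loaded_at
    freshness:
      warn_after: {count: 12, period: hour}
      error_after: {count: 24, period: hour}
    tables:
      - name: orders

dbt compares loaded_at_field against these thresholds and warns or errors when a source goes stale.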


✅ Why These Are dbt Best Practices

These practices follow dbt's core philosophy of:

  • Modularity → Keeping transformations in the right layers (staging, marts, etc.).
  • Maintainability → Using style guides, removing hard-coded values, and structuring queries well.
  • Performance Optimisation → Leveraging is_incremental(), indexing, materialisation strategies.
  • Data Quality → Using dbt test, schema validation, and referential integrity.
  • Debugging & Monitoring → Checking target/, logging, DAG review, dbt source freshness.

These are widely recommended in the dbt community and by teams managing large-scale production dbt pipelines.
