Delta Lake
Bronze / Raw Layer: This is the landing layer where upstream data sources ingest raw data.
The bronze layer stores files or records exactly as they arrive and serves as the source for the downstream silver layer.
Silver / Aggregate Layer: This is the central layer where ETL and aggregations take place.
The silver layer is where we perform data quality checks, governance, joins, and aggregations.
Gold / Business Layer: This is the final layer where business-level use-case-specific aggregations take place.
The gold layer is the source for dashboards, analytics, and BI tasks.
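As a rough illustration of how data might flow through these layers, here is a minimal PySpark sketch, assuming a Databricks-style environment where spark is already defined; the table names (bronze_events, silver_events, gold_daily_metrics) and columns are placeholders, not part of the original text:

from pyspark.sql import functions as F

# Bronze: raw data as ingested from the source, no cleanup applied.
bronze = spark.read.table("bronze_events")

# Silver: apply quality checks, deduplicate, and conform the data.
silver = (bronze
    .filter(F.col("event_id").isNotNull())
    .dropDuplicates(["event_id"]))
silver.write.format("delta").mode("overwrite").saveAsTable("silver_events")

# Gold: business-level aggregation consumed by dashboards and BI.
gold = (silver
    .groupBy(F.to_date("event_ts").alias("event_date"))
    .agg(F.count("*").alias("event_count")))
gold.write.format("delta").mode("overwrite").saveAsTable("gold_daily_metrics")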
Delta Lake supports checkpointing on top of a write-ahead log (journal) to prevent data loss and provide robust recovery
mechanisms and guarantees.
Write-ahead logging is the technique databases and file systems use to achieve durability for an operation; Delta Lake
applies the same idea through its transaction log and periodic checkpoints.
Version format: each commit is written to the _delta_log directory as a zero-padded, versioned JSON file (e.g., 00000000000000000010.json), with Parquet checkpoint files written periodically so readers do not have to replay every commit.
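As a minimal sketch of inspecting these versions from PySpark, assuming a placeholder Delta table named test_table and an existing spark session:

from delta.tables import DeltaTable

# Each committed operation shows up as a new version in the table history;
# these versions correspond 1:1 to the JSON commit files under _delta_log/.
history_df = DeltaTable.forName(spark, "test_table").history()
history_df.select("version", "timestamp", "operation").show(truncate=False)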
Auto Loader is Databricks' recommended way to handle streaming ingestion efficiently and with ease. Auto Loader
incrementally reads new data from cloud object storage as it arrives, which keeps REST calls against the storage account
limited and adds security benefits.
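A minimal Auto Loader sketch in PySpark; the source path, schema location, checkpoint location, and target table name below are all placeholders:

# Incrementally pick up new files from cloud storage using Auto Loader (cloudFiles).
raw_stream = (spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/landing/_schemas/events")
    .load("/mnt/landing/events/"))

# Append the raw records into a bronze Delta table.
(raw_stream.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/bronze/_checkpoints/events")
    .trigger(availableNow=True)
    .toTable("bronze_events"))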
How do you tackle the small file problem and skewness in Delta Lake?
Delta Lake is packed with optimizations to handle many known issues that we encounter with big data.
Most of the time, partitioning our data is how we make sure cluster resources are utilized well and our aggregations
execute optimally.
However, Databricks recommends avoiding explicit partitioning until it is truly necessary, because Delta Lake manages
file layout for us, taking column cardinality into account, through features such as auto-optimize.
With these features enabled, Databricks scans partitions after writes to check whether the data can be compacted further,
collating small files into larger ones and reducing skew in the data.
delta.autoOptimize.autoCompact = true
delta.autoOptimize.optimizeWrite = true
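A minimal sketch of enabling these two properties on an existing table from PySpark (test_table is a placeholder name); they can also be set as table properties at creation time:

# Enable optimized writes and auto compaction on an existing Delta table.
spark.sql("""
    ALTER TABLE test_table SET TBLPROPERTIES (
        'delta.autoOptimize.optimizeWrite' = 'true',
        'delta.autoOptimize.autoCompact'   = 'true'
    )
""")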
We can also run optimization manually and use ZORDER to index our files, merging small files into fewer, well-sized
files.
OPTIMIZE test_table ZORDER BY (id);
1. What is VACUUM?
Continuous data upserts get logged in the _delta_log folder as new versions, leaving behind data files that are no longer referenced by the current table state.
-- Allow VACUUM to use a retention interval shorter than the default safety threshold.
SET spark.databricks.delta.retentionDurationCheck.enabled = false;
-- Record VACUUM operations in the Delta transaction log.
SET spark.databricks.delta.vacuum.logging.enabled = true;
-- Remove all data files no longer referenced by the current table version.
VACUUM test_table RETAIN 0 HOURS;
VACUUM is useful for removing files from the table directory that are no longer referenced once the retention period has passed.
The default VACUUM retention period is 7 days. Override this carefully in production to avoid data loss.
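A minimal sketch of a safer VACUUM from PySpark, using the same placeholder table test_table; DRY RUN previews which files would be deleted without removing anything:

from delta.tables import DeltaTable

# Preview the files VACUUM would delete, without deleting them.
spark.sql("VACUUM test_table DRY RUN").show(truncate=False)

# Delete unreferenced files older than the default 7-day (168-hour) retention window.
DeltaTable.forName(spark, "test_table").vacuum(168)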
2. What is ZORDER?
ZORDER is a mechanism used by Delta Lake's data-skipping algorithms: it co-locates related column values in the same set
of files so that scans can skip irrelevant files, and it compacts small files as part of OPTIMIZE.
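As a sketch, the same OPTIMIZE ... ZORDER BY shown earlier can also be triggered from PySpark with Delta Lake 2.0+; test_table and id are the placeholder table and column from the examples above:

from delta.tables import DeltaTable

# Compact small files and cluster the data files by the id column.
DeltaTable.forName(spark, "test_table").optimize().executeZOrderBy("id")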