Documentation - Parquet
1: Overview
1.1: Motivation
2: Concepts
3: File Format
3.1: Configurations
3.2: Extensibility
3.3: Metadata
3.4: Types
3.4.1: Logical Types
3.5: Nested Encoding
3.6: Bloom Filter
3.7: Data Pages
3.7.1: Compression
3.7.2: Encodings
3.7.3: Parquet Modular Encryption
3.7.4: Checksumming
3.7.5: Column Chunks
3.7.6: Error Recovery
3.8: Nulls
3.9: Page Index
3.10: Implementation status
4: Developer Guide
4.1: Sub-Projects
4.2: Building Parquet
4.3: Contributing to Parquet-Java
4.4: Releasing Parquet
5: Resources
5.1: Blog Posts
5.2: Presentations
5.2.1: Spark Summit 2020
5.2.2: Hadoop Summit 2014
Welcome to the documentation for Apache Parquet. Here, you can find information about
the Parquet File Format, including specifications and developer resources.
1 - Overview
All about Parquet.
Apache Parquet is an open source, column-oriented data file format designed for efficient
data storage and retrieval. It provides high performance compression and encoding schemes
to handle complex data in bulk and is supported in many programming languages and
analytics tools.
This documentation contains information about both the parquet-java and parquet-format
repositories.
parquet-format
The parquet-format repository hosts the official specification of the Apache Parquet file
format, defining how data is structured and stored. This specification, along with the Thrift
metadata definitions and other crucial components, is what developers need in order to
read and write Parquet files correctly.
parquet-java
The parquet-java (formerly named ‘parquet-mr’) repository is part of the Apache Parquet
project and specifically focuses on providing Java tools for handling the Parquet file format.
Essentially, this repository includes all the necessary Java libraries and modules that allow
developers to read and write Apache Parquet files.
Included in parquet-java:
Java Implementation: It contains the core Java implementation of the Apache Parquet
format, making it possible to use Parquet files in Java applications.
Utilities and APIs: It provides various utilities and APIs for working with Apache Parquet
files, including tools for data import/export, schema management, and data conversion.
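As an illustration, here is a minimal write-and-read round trip with parquet-java. This is a
sketch, not canonical usage: the schema, file name, and use of the bundled example Group
API are assumptions for demonstration.

import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.example.data.simple.SimpleGroupFactory;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.example.ExampleParquetWriter;
import org.apache.parquet.hadoop.example.GroupReadSupport;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

public class ParquetRoundTrip {
    public static void main(String[] args) throws Exception {
        MessageType schema = MessageTypeParser.parseMessageType(
            "message example { required int64 id; required binary name (UTF8); }");
        Path file = new Path("example.parquet");

        // Write one record using the example Group API bundled with parquet-java.
        try (ParquetWriter<Group> writer =
                 ExampleParquetWriter.builder(file).withType(schema).build()) {
            Group g = new SimpleGroupFactory(schema).newGroup();
            g.add("id", 1L);
            g.add("name", "parquet");
            writer.write(g);
        }

        // Read it back.
        try (ParquetReader<Group> reader =
                 ParquetReader.builder(new GroupReadSupport(), file).build()) {
            Group g = reader.read();
            System.out.println(g.getLong("id", 0) + " " + g.getString("name", 0));
        }
    }
}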
Implementations of the Parquet format include:

parquet-java
Parquet C++, a subproject of Arrow C++ (documentation)
Parquet Go, a subproject of Arrow Go (documentation)
Parquet Rust
cuDF
Apache Impala
DuckDB
fastparquet, a Python implementation of the Apache Parquet format
1.1 - Motivation
We created Parquet to make the advantages of compressed, efficient columnar data
representation available to any project in the Hadoop ecosystem.
Parquet is built from the ground up with complex nested data structures in mind, and uses
the record shredding and assembly algorithm described in the Dremel paper. We believe this
approach is superior to simple flattening of nested name spaces.
Parquet is built to support very efficient compression and encoding schemes. Multiple
projects have demonstrated the performance impact of applying the right compression and
encoding scheme to the data. Parquet allows compression schemes to be specified on a per-
column level, and is future-proofed to allow adding more encodings as they are invented and
implemented.
Parquet is built to be used by anyone. The Hadoop ecosystem is rich with data processing
frameworks, and we are not interested in playing favorites. We believe that an efficient, well-
implemented columnar storage substrate should be useful to all frameworks without the
cost of extensive and difficult-to-set-up dependencies.
2 - Concepts
Glossary of relevant terminology.
Block (HDFS block): This means a block in HDFS and the meaning is unchanged for
describing this file format. The file format is designed to work well on top of HDFS.
File: An HDFS file that must include the metadata for the file. It does not need to actually
contain the data.
Row group: A logical horizontal partitioning of the data into rows. There is no physical
structure that is guaranteed for a row group. A row group consists of a column chunk for
each column in the dataset.
Column chunk: A chunk of the data for a particular column. They live in a particular row
group and are guaranteed to be contiguous in the file.
Page: Column chunks are divided up into pages. A page is conceptually an indivisible unit
(in terms of compression and encoding). There can be multiple page types which are
interleaved in a column chunk.
Hierarchically, a file consists of one or more row groups. A row group contains exactly one
column chunk per column. Column chunks contain one or more pages.
Unit of parallelization
MapReduce - File/Row Group
IO - Column chunk
Encoding/Compression - Page
3 - File Format
Documentation about the Parquet File Format.
This file and the thrift definition should be read together to understand the format.
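For orientation, the format specification illustrates the overall layout roughly as follows (a
sketch, with N columns split into M row groups):

4-byte magic number "PAR1"
<Column 1 Chunk 1>
<Column 2 Chunk 1>
...
<Column N Chunk 1>
<Column 1 Chunk 2>
...
<Column N Chunk M>
File Metadata
4-byte length in bytes of file metadata (little endian)
4-byte magic number "PAR1"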
In the above example, there are N columns in this table, split into M row groups. The file
metadata contains the locations of all the column chunk start locations. More details on what
is contained in the metadata can be found in the Thrift definition.
File metadata is written after the data to allow for single pass writing.
Readers are expected to first read the file metadata to find all the column chunks they are
interested in. The column chunks should then be read sequentially.
The format is explicitly designed to separate the metadata from the data. This allows splitting
columns into multiple files, as well as having a single metadata file reference multiple
parquet files.
3.1 - Configurations
Row Group Size
Larger row groups allow for larger column chunks which makes it possible to do larger
sequential IO. Larger groups also require more buffering in the write path (or a two pass
write). We recommend large row groups (512MB - 1GB). Since an entire row group might
need to be read, we want it to completely fit on one HDFS block. Therefore, HDFS block sizes
should also be set to be larger. An optimized read setup would be: 1GB row groups, 1GB
HDFS block size, 1 HDFS block per HDFS file.
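As a sketch, with parquet-java this recommendation can be applied through the writer
builder; the file path and schema below are placeholders, and the long overload of
withRowGroupSize is assumed (recent parquet-java versions):

import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.example.ExampleParquetWriter;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

public class RowGroupSizing {
    public static void main(String[] args) throws Exception {
        MessageType schema = MessageTypeParser.parseMessageType(
            "message example { required int64 id; }");
        long oneGiB = 1024L * 1024 * 1024;
        try (ParquetWriter<Group> writer = ExampleParquetWriter
                .builder(new Path("hdfs:///data/example.parquet"))
                .withType(schema)
                .withRowGroupSize(oneGiB) // matches the 1 GB row group guidance above
                .build()) {
            // write records here; the writer starts a new row group
            // roughly every 1 GB of buffered data
        }
    }
}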
Data Page Size

Data pages should be considered indivisible, so smaller data pages allow for more fine-
grained reading (e.g. single row lookup). Larger page sizes incur less space overhead (fewer
page headers) and potentially less parsing overhead (fewer pages to process). Note: for
sequential scans, it is not expected to read a page at a time; this is not the IO chunk. We
recommend 8KB for page sizes.
3.2 - Extensibility
There are many places in the format for compatible extensions:
File Version: The file metadata contains a version.
Encodings: Encodings are specified by enum and more can be added in the future.
Page types: Additional page types can be added and safely skipped.
3.3 - Metadata
There are two types of metadata: file metadata, and page header metadata. In the diagram
below, file metadata is described by the FileMetaData structure. This file metadata provides
offset and size information useful when navigating the Parquet file. Page header metadata
( PageHeader and children in the diagram) is stored in-line with the page data, and is used in
the reading and decoding of said data.
All thrift structures are serialized using the TCompactProtocol. The full definition of these
structures is given in the Parquet Thrift definition.
3.4 - Types
The types supported by the file format are intended to be as minimal as possible, with a
focus on how the types affect on-disk storage. For example, 16-bit ints are not explicitly
supported in the storage format since they are covered by 32-bit ints with an efficient
encoding. This reduces the complexity of implementing readers and writers for the format.
The types are:
BOOLEAN: 1 bit boolean
INT32: 32 bit signed ints
INT64: 64 bit signed ints
INT96: 96 bit signed ints
FLOAT: IEEE 32-bit floating point values
DOUBLE: IEEE 64-bit floating point values
BYTE_ARRAY: arbitrarily long byte arrays
FIXED_LEN_BYTE_ARRAY: fixed length byte arrays

3.4.1 - Logical Types

Logical types are used to extend the types that Parquet can store, by specifying how the
primitive types should be interpreted. This keeps the set of primitive types to a minimum
and reuses Parquet's efficient encodings. For example, strings are stored as byte arrays
(binary) with a UTF8 annotation. These annotations define how to further decode and
interpret the data. Annotations are stored in the file metadata and are documented in
LogicalTypes.md.

3.5 - Nested Encoding

To encode nested columns, Parquet uses the Dremel encoding with definition and repetition
levels. Definition levels specify how many optional fields in the path for the column are
defined. Repetition levels specify at what repeated field in the path the value is repeated.
The max definition and repetition levels can be computed from the schema (i.e. how much
nesting there is). This defines the maximum number of bits required to store the levels
(levels are defined for all values in the column).
Two encodings for the levels are supported: BIT_PACKED and RLE. Only RLE is now used, as
it supersedes BIT_PACKED.
3.6 - Bloom Filter
A Bloom filter is a compact data structure that overapproximates a set. It can respond to
membership queries with either “definitely no” or “probably yes”, where the probability of
false positives is configured when the filter is initialized. Bloom filters do not have false
negatives.
Because Bloom filters are small compared to dictionaries, they can be used for predicate
pushdown even in columns with high cardinality and when space is at a premium.
Goal
Enable predicate pushdown for high-cardinality columns while using less space than
dictionaries.
Induce no additional I/O overhead when executing queries on columns without Bloom
filters attached or when executing non-selective queries.
Technical Approach
This section describes split block Bloom filters, which is the first (and, at the time of writing,
only) Bloom filter representation supported in Parquet.
First we will describe a “block”. This is the main component split block Bloom filters are
composed of.
Each block is 256 bits, broken up into eight contiguous “words”, each consisting of 32 bits.
Each word is thought of as an array of bits; each bit is either “set” or “not set”.
When initialized, a block is “empty”, which means each of the eight component words has no
bits set. In addition to initialization, a block supports two other operations: block_insert and
block_check . Both take a single unsigned 32-bit integer as input; block_insert returns no
value, but modifies the block, while block_check returns a boolean. The semantics of
block_check are that it must return true if block_insert was previously called on the block
with the same argument, and otherwise it returns false with high probability. For more
details of the probability, see below.
The operations block_insert and block_check depend on some auxiliary artifacts. First,
there is a sequence of eight odd unsigned 32-bit integer constants called the salt . Second,
there is a method called mask that takes as its argument a single unsigned 32-bit integer and
returns a block in which each word has exactly one bit set.
Since there are eight words in the block and eight integers in the salt, there is a
correspondence between them. To set a bit in the nth word of the block, mask first multiplies
its argument by the nth integer in the salt , keeping only the least significant 32 bits of the
64-bit product, then divides that 32-bit unsigned integer by 2 to the 27th power, denoted
above using the C language’s right shift operator “ >> ”. The resulting integer is between 0 and
31, inclusive. That integer is the bit that gets set in the word in the block.
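A minimal Java sketch of mask; the salt constants below are the ones used by Parquet
implementations, but treat the code as illustrative rather than normative. Java's int
multiplication keeps the least significant 32 bits of the product, and the unsigned right shift
by 27 implements the division by 2 to the 27th power:

public class SbbfMask {
    private static final int[] SALT = {
        0x47b6137b, 0x44974d91, 0x8824ad5b, 0xa2b7289d,
        0x705495c7, 0x2df1424b, 0x9efc4947, 0x5c6bfb31
    };

    /** Returns a block (eight 32-bit words), each word with exactly one bit set. */
    static int[] mask(int x) {
        int[] block = new int[8];
        for (int i = 0; i < 8; i++) {
            int bitIndex = (x * SALT[i]) >>> 27; // value in [0, 31]
            block[i] = 1 << bitIndex;
        }
        return block;
    }
}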
From the mask operation, block_insert is defined as setting every bit in the block that was
also set in the result from mask. Similarly, block_check returns true when every bit that is
set in the result of mask is also set in the block.
In pseudocode:

block_insert(Block b, unsigned int32 x) : void {
  Block masked = mask(x)
  for i in [0..7] {
    for j in [0..31] {
      if (masked.getWord(i).isSet(j)) {
        b.getWord(i).setBit(j)
      }
    }
  }
}

block_check(Block b, unsigned int32 x) : boolean {
  Block masked = mask(x)
  for i in [0..7] {
    for j in [0..31] {
      if (masked.getWord(i).isSet(j)) {
        if (not b.getWord(i).isSet(j)) {
          return false
        }
      }
    }
  }
  return true
}
The reader will note that a block, as defined here, is actually a special kind of Bloom filter.
Specifically it is a “split” Bloom filter, as described in section 2.1 of Network Applications of
Bloom Filters: A Survey. The use of multiplication by an odd constant and then shifting right is
a method of hashing integers as described in section 2.2 of Dietzfelbinger et al.’s A reliable
randomized algorithm for the closest-pair problem.
Now that a block is defined, we can describe Parquet’s split block Bloom filters. A split block
Bloom filter (henceforth “SBBF”) is composed of z blocks, where z is greater than or equal
to one and less than 2 to the 31st power. When an SBBF is initialized, each block in it is
initialized, which means each bit in each word in each block in the SBBF is unset.
The filter_insert operation first uses the most significant 32 bits of its argument to select a
block to operate on. Call the argument “ h ”, and recall the use of “ z ” to mean the number of
blocks. Then a block number i between 0 and z-1 (inclusive) to operate on is chosen as
follows:
unsigned int64 h_top_bits = h >> 32;
unsigned int64 z_as_64_bit = z;
unsigned int32 i = (h_top_bits * z_as_64_bit) >> 32;
The first line extracts the most significant 32 bits from h and assigns them to a 64-bit
unsigned integer. The second line is simpler: it just sets an unsigned 64-bit value to the same
value as the 32-bit unsigned value z . The purpose of having both h_top_bits and
z_as_64_bit be 64-bit values is so that their product is a 64-bit value. That product is taken in
the third line, and then the most significant 32 bits are extracted into the value i , which is
the index of the block that will be operated on.
After this process to select i , filter_insert uses the least significant 32 bits of h as the
argument to block_insert called on block i .
The technique for converting the most significant 32 bits to an integer between 0 and z-1
(inclusive) avoids using the modulo operation, which is often very slow. This trick can be
found in Kenneth A. Ross's 2006 IBM research report, "Efficient Hash Probes on Modern
Processors".
The filter_check operation uses the same method as filter_insert to select a block to
operate on, then uses the least significant 32 bits of its argument as an argument to
block_check called on that block, returning the result.
In the pseudocode below, the " >> " operator is used to denote the conversion of an unsigned
64-bit integer to an unsigned 32-bit integer containing only the most significant 32 bits, and
C's cast operator " (unsigned int32) " is used to denote the conversion of an unsigned 64-bit
integer to an unsigned 32-bit integer containing only the least significant 32 bits.
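A sketch of the two operations, consistent with the description above:

filter_insert(SBBF filter, unsigned int64 h) : void {
  unsigned int32 i = ((h >> 32) * filter.numberOfBlocks()) >> 32;
  block_insert(filter.getBlock(i), (unsigned int32)h)
}

filter_check(SBBF filter, unsigned int64 h) : boolean {
  unsigned int32 i = ((h >> 32) * filter.numberOfBlocks()) >> 32;
  return block_check(filter.getBlock(i), (unsigned int32)h)
}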
The use of blocks is from Putze et al.’s Cache-, Hash- and Space-Efficient Bloom filters
To use an SBBF for values of arbitrary Parquet types, we apply a hash function to that value -
at the time of writing, xxHash, using the function XXH64 with a seed of 0 and following the
specification version 0.1.1.
Sizing an SBBF
The check operation in SBBFs can return true for an argument that was never inserted into
the SBBF. These are called "false positives". The "false positive probability" is the probability
that any given hash value that was never inserted into the SBBF will cause check to return
true (a false positive). There is not a simple closed-form calculation of this probability, but
here is an example:
A filter that uses 1024 blocks and has had 26,214 hash values inserted will have a false
positive probability of around 1.26%. Each of those 1024 blocks occupies 256 bits of space, so
the total space usage is 262,144 bits. That means that the ratio of bits of space to hash values
is 10-to-1. Adding more hash values increases the denominator and lowers the ratio, which
increases the false positive probability. For instance, inserting twice as many hash values
(52,428) decreases the ratio of bits of space per hash value inserted to 5-to-1 and increases
the false positive probability to 18%. Inserting half as many hash values (13,107) increases
the ratio of bits of space per hash value inserted to 20-to-1 and decreases the false positive
probability to 0.04%.
Here are some sample values of the ratios needed to achieve certain false positive rates:
+-------------------------------+----------------------------+
| Bits of space per hash value  | False positive probability |
+-------------------------------+----------------------------+
| 6.0                           | 10 %                       |
| 10.5                          | 1 %                        |
| 16.9                          | 0.1 %                      |
| 26.4                          | 0.01 %                     |
| 41                            | 0.001 %                    |
+-------------------------------+----------------------------+
File Format
Each multi-block Bloom filter is required to work for only one column chunk. The data of a
multi-block Bloom filter consists of the Bloom filter header followed by the Bloom filter
bitset. The Bloom filter header encodes the size of the Bloom filter bitset in bytes, which is
used to read the bitset.
/** Block-based algorithm type annotation. **/
struct SplitBlockAlgorithm {}

/** The algorithm for Bloom filter. **/
union BloomFilterAlgorithm {
  /** Block-based Bloom filter. **/
  1: SplitBlockAlgorithm BLOCK;
}

/** Hash strategy type annotation. xxHash is an extremely fast non-cryptographic hash
 * algorithm. It uses 64 bits version of xxHash.
 **/
struct XxHash {}
/**
* The hash function used in Bloom filter. This function takes the hash of a column value
* using plain encoding.
**/
union BloomFilterHash {
/** xxHash Strategy. **/
1: XxHash XXHASH;
}
/**
* The compression used in the Bloom filter.
**/
struct Uncompressed {}
union BloomFilterCompression {
1: Uncompressed UNCOMPRESSED;
}
/**
* Bloom filter header is stored at beginning of Bloom filter data of each column
* and followed by its bitset.
**/
struct BloomFilterPageHeader {
/** The size of bitset in bytes **/
1: required i32 numBytes;
/** The algorithm for setting bits. **/
2: required BloomFilterAlgorithm algorithm;
/** The hash function used for Bloom filter. **/
3: required BloomFilterHash hash;
/** The compression used in the Bloom filter **/
4: required BloomFilterCompression compression;
}
struct ColumnMetaData {
...
/** Byte offset from beginning of file to Bloom filter data. **/
14: optional i64 bloom_filter_offset;
}
The Bloom filters are grouped by row group, with data for each column in the same order
as the file schema. The Bloom filter data can be stored before the page indexes, after all
row groups, or between the row groups.
Encryption
In the case of columns with sensitive data, the Bloom filter exposes a subset of sensitive
information, such as the presence of a value. Therefore the Bloom filter of a column with
sensitive data should be encrypted with the column key, while the Bloom filters of other
(non-sensitive) columns do not need to be encrypted.
Bloom filters have two serializable modules - the PageHeader thrift structure (with its internal
fields, including the BloomFilterPageHeader bloom_filter_page_header ), and the Bitset. The
header structure is serialized by Thrift and written to the file output stream; it is followed by
the serialized Bitset.
For Bloom filters in sensitive columns, each of the two modules will be encrypted after
serialization, and then written to the file. The encryption will be performed using the AES
GCM cipher, with the same column key, but with different AAD module types - “BloomFilter
Header” (8) and “BloomFilter Bitset” (9). The length of the encrypted buffer is written before
the buffer, as described in the Parquet encryption specification.
3.7 - Data Pages

For data pages, the 3 pieces of information are encoded back to back, after the page header.
In order, we have: repetition levels data, definition levels data, and encoded values.
The value of uncompressed_page_size specified in the header is for all the 3 pieces combined.
The encoded values for the data page are always required. The definition and repetition levels
are optional, based on the schema definition. If the column is not nested (i.e. the path to the
column has length 1), we do not encode the repetition levels (they would always have the
value 0). For data that is required, the definition levels are skipped (if encoded, they will
always have the value of the max definition level).
For example, in the case where the column is non-nested and required, the data in the page
is only the encoded values.
3.7.1 - Compression
Overview
Parquet allows the data block inside dictionary pages and data pages to be compressed for
better space efficiency. The Parquet format supports several compression codecs covering
different areas of the compression ratio / processing cost spectrum.
For all compression codecs except the deprecated LZ4 codec, the raw data of a (data or
dictionary) page is fed as-is to the underlying compression library, without any additional
framing or padding. The information required for precise allocation of compressed and
decompressed buffers is written in the PageHeader struct.
Codecs
UNCOMPRESSED
No-op codec. Data is left uncompressed.
SNAPPY
A codec based on the Snappy compression format. If any ambiguity arises when
implementing this format, the implementation provided by Google Snappy library is
authoritative.
GZIP
A codec based on the GZIP format (not the closely-related “zlib” or “deflate” formats) defined
by RFC 1952. If any ambiguity arises when implementing this format, the implementation
provided by the zlib compression library is authoritative.
Readers should support reading pages containing multiple GZIP members, however, as this
has historically not been supported by all implementations, it is recommended that writers
refrain from creating such pages by default for better interoperability.
LZO
A codec based on or interoperable with the LZO compression library.
BROTLI
A codec based on the Brotli format defined by RFC 7932. If any ambiguity arises when
implementing this format, the implementation provided by the Brotli compression library is
authoritative.
LZ4
A deprecated codec loosely based on the LZ4 compression algorithm, but with an additional
undocumented framing scheme. The framing is part of the original Hadoop compression
library and was historically copied first in parquet-mr, then emulated with mixed results by
parquet-cpp.
ZSTD
A codec based on the Zstandard format defined by RFC 8478. If any ambiguity arises when
implementing this format, the implementation provided by the Zstandard compression
library is authoritative.
LZ4_RAW
A codec based on the LZ4 block format. If any ambiguity arises when implementing this
format, the implementation provided by the LZ4 compression library is authoritative.
3.7.2 - Encodings
Plain: (PLAIN = 0)
Supported Types: all
This is the plain encoding that must be supported for all types. It is intended to be the
simplest encoding. Values are encoded back to back.
The plain encoding is used whenever a more efficient encoding can not be used. It stores the
data in the following format:
For native types, this outputs the data as little endian. Floating point types are encoded in
IEEE.
For the byte array type, it encodes the length as a 4 byte little endian, followed by the bytes.
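As a small illustrative sketch (not parquet-java's internal encoder), PLAIN encoding of
BYTE_ARRAY values in Java:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

public class PlainByteArrayEncoder {
    // Each value: 4-byte little-endian length, then the raw bytes, back to back.
    static byte[] encode(byte[][] values) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] v : values) {
            out.write(ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN)
                                .putInt(v.length).array());
            out.write(v);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] encoded = encode(new byte[][] {
            "parquet".getBytes(StandardCharsets.UTF_8),
            "format".getBytes(StandardCharsets.UTF_8)
        });
        System.out.println(encoded.length); // 4 + 7 + 4 + 6 = 21 bytes
    }
}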
Dictionary Encoding (PLAIN_DICTIONARY = 2 and RLE_DICTIONARY = 8)

The dictionary encoding builds a dictionary of values encountered in a given column. The
dictionary is stored in a dictionary page per column chunk.

Dictionary page format: the entries in the dictionary using the plain encoding.

Data page format: the bit width used to encode the entry ids stored as 1 byte (max bit width
= 32), followed by the values encoded using the RLE/Bit-Packing Hybrid encoding described
below (with the given bit width).
Using the PLAIN_DICTIONARY enum value is deprecated in the Parquet 2.0 specification.
Prefer using RLE_DICTIONARY in a data page and PLAIN in a dictionary page for Parquet 2.0+
files.
Run Length Encoding / Bit-Packing Hybrid (RLE = 3)

This encoding uses a combination of bit-packing and run length encoding to more efficiently
store repeated values. The grammar for this encoding looks like this, given a fixed bit-width
known in advance:

rle-bit-packed-hybrid: <length> <encoded-data>
length := length of the <encoded-data> in bytes stored as 4 bytes little endian (unsigned int32)
encoded-data := <run>*
run := <bit-packed-run> | <rle-run>
bit-packed-run := <bit-packed-header> <bit-packed-values>
bit-packed-header := varint-encode(<bit-pack-scaled-run-len> << 1 | 1)
// we always bit-pack a multiple of 8 values at a time, so we only store the number of values / 8
bit-pack-scaled-run-len := (bit-packed-run-len) / 8
bit-packed-run-len := *see 3 below*
bit-packed-values := *see 1 below*
rle-run := <rle-header> <repeated-value>
rle-header := varint-encode( (rle-run-len) << 1)
rle-run-len := *see 3 below*
repeated-value := value that is repeated, using a fixed-width of round-up-to-next-byte(bit-width)
1. The bit-packing here is done in a different order than the one in the deprecated bit-
packing encoding. The values are packed from the least significant bit of each byte to the
most significant bit, though the order of the bits in each value remains in the usual order
of most significant to least significant. For example, to pack the same values as the
example in the deprecated encoding above:
dec value: 0 1 2 3 4 5 6 7
bit value: 000 001 010 011 100 101 110 111
bit label: ABC DEF GHI JKL MNO PQR STU VWX
would be encoded like this where spaces mark byte boundaries (3 bytes):

bit value: 10001000 11000110 11111010
bit label: HIDEFABC RMNOJKLG VWXSTUPQ
The reason for this packing order is to have fewer word-boundaries on little-endian
hardware when deserializing more than one byte at a time. This is because 4 bytes can
be read into a 32 bit register (or 8 bytes into a 64 bit register) and values can be
unpacked just by shifting and ORing with a mask. (To make this optimization work on a
big-endian machine, you would have to use the ordering used in the deprecated bit-
packing encoding.) A minimal unpacking sketch is shown after these notes.
2. varint-encode() is ULEB-128 encoding, see https://en.wikipedia.org/wiki/LEB128
3. bit-packed-run-len and rle-run-len must be in the range [1, 2^31 - 1]. This means that a
Parquet implementation can always store the run length in a signed 32-bit integer. This
length restriction was not part of the Parquet 2.5.0 and earlier specifications, but longer
runs were not readable by the most common Parquet implementations so, in practice,
were not safe for Parquet writers to emit.
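To make note 1 concrete, here is a minimal Java sketch (not parquet-java's actual decoder)
that unpacks fixed-width values packed from least to most significant bit:

public class LsbBitUnpacker {
    static int[] unpack(byte[] packed, int bitWidth, int count) {
        int[] out = new int[count];
        long buffer = 0;          // bits accumulated so far, LSB first
        int bitsInBuffer = 0;
        int byteIndex = 0;
        long mask = (1L << bitWidth) - 1;
        for (int i = 0; i < count; i++) {
            while (bitsInBuffer < bitWidth) {
                buffer |= (long) (packed[byteIndex++] & 0xFF) << bitsInBuffer;
                bitsInBuffer += 8;
            }
            out[i] = (int) (buffer & mask); // the low bits hold the next value
            buffer >>>= bitWidth;
            bitsInBuffer -= bitWidth;
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] packed = {(byte) 0b10001000, (byte) 0b11000110, (byte) 0b11111010};
        for (int v : unpack(packed, 3, 8)) {
            System.out.print(v + " "); // prints: 0 1 2 3 4 5 6 7
        }
    }
}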
Note that the RLE encoding method is only supported for the following types of data:

Repetition and definition levels
Dictionary indices
Boolean values in data pages, as an alternative to PLAIN encoding

Whether the four-byte length is prepended to the encoded data is summarized in the table
below:
+--------------+------------------------+-----------------+
| Page kind | RLE-encoded data kind | Prepend length? |
+--------------+------------------------+-----------------+
| Data page v1 | Definition levels | Y |
| | Repetition levels | Y |
| | Dictionary indices | N |
| | Boolean values | Y |
+--------------+------------------------+-----------------+
| Data page v2 | Definition levels | N |
| | Repetition levels | N |
| | Dictionary indices | N |
| | Boolean values | Y |
+--------------+------------------------+-----------------+
Bit-packed (Deprecated) (BIT_PACKED = 4)

This is a bit-packed only encoding, which is deprecated and has been replaced by the
RLE/bit-packing hybrid encoding. Each value is encoded back to back using a fixed width,
with no padding between values (except for the last byte, which is padded with 0s). The
values are packed from the most significant bit of each byte to the least significant bit. For
example, with a bit width of 3:
dec value: 0 1 2 3 4 5 6 7
bit value: 000 001 010 011 100 101 110 111
bit label: ABC DEF GHI JKL MNO PQR STU VWX
would be encoded like this where spaces mark byte boundaries (3 bytes):

bit value: 00000101 00111001 01110111
bit label: ABCDEFGH IJKLMNOP QRSTUVWX
Note that the BIT_PACKED encoding method is only supported for encoding repetition and
definition levels.
Delta Encoding (DELTA_BINARY_PACKED = 5)

Supported Types: INT32, INT64

This encoding is adapted from the Binary packing described in "Decoding billions of integers
per second through vectorization" by D. Lemire and L. Boytsov.
In delta encoding we make use of variable length integers for storing various numbers (not
the deltas themselves). For unsigned values, we use ULEB128, which is the unsigned version
of LEB128 (https://en.wikipedia.org/wiki/LEB128#Unsigned_LEB128). For signed values, we
use zigzag encoding (https://developers.google.com/protocol-buffers/docs/encoding#signed-
integers) to map negative values to positive ones and apply ULEB128 on the result.
Delta encoding consists of a header followed by blocks of delta encoded values binary
packed. Each block is made of miniblocks, each of them binary packed with its own bit width.
The header is defined as follows:

<block size in values> <number of miniblocks in a block> <total value count> <first value>

the block size is a multiple of 128; it is stored as a ULEB128 int
the miniblock count per block is a divisor of the block size such that their quotient, the
number of values in a miniblock, is a multiple of 32; it is stored as a ULEB128 int
the total value count is stored as a ULEB128 int
the first value is stored as a zigzag ULEB128 int

Each block contains:

<min delta> <bitwidths of miniblocks> <miniblocks>
the min delta is a zigzag ULEB128 int (we compute a minimum as we need positive
integers for bit packing)
the bitwidth of each miniblock is stored as a byte
each miniblock is a list of bit packed ints according to the bit width stored at the
beginning of the block

To encode a block, we will:
1. Compute the differences between consecutive elements. For the first element in the
block, use the last element in the previous block or, in the case of the first block, use the
first value of the whole sequence, stored in the header.
2. Compute the frame of reference (the minimum of the deltas in the block). Subtract this
min delta from all deltas in the block. This guarantees that all values are non-negative.
3. Encode the frame of reference (min delta) as a zigzag ULEB128 int followed by the bit
widths of the miniblocks and the delta values (minus the min delta) bit-packed per
miniblock.
Having multiple blocks allows us to adapt to changes in the data by changing the frame of
reference (the min delta) which can result in smaller values after the subtraction which,
again, means we can store them with a lower bit width.
If there are not enough values to fill the last miniblock, we pad the miniblock so that its
length is always the number of values in a full miniblock multiplied by the bit width. The
values of the padding bits should be zero, but readers must accept paddings consisting of
arbitrary bits as well.
If, in the last block, less than <number of miniblocks in a block> miniblocks are needed to
store the values, the bytes storing the bit widths of the unneeded miniblocks are still present,
their value should be zero, but readers must accept arbitrary values as well. There are no
additional padding bytes for the miniblock bodies though, as if their bit widths were 0
(regardless of the actual byte values). The reader knows when to stop reading by keeping
track of the number of values read.
Subtractions in steps 1) and 2) may incur signed arithmetic overflow, and so will the
corresponding additions when decoding. Overflow should be allowed and handled as
wrapping around in 2’s complement notation so that the original values are correctly
restituted. This may require explicit care in some programming languages (for example by
doing all arithmetic in the unsigned domain).
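A sketch of steps 1) and 2) for a single block of INT32 values; Java's int arithmetic wraps in
two's complement, which matches the overflow note above. This is illustrative, not
parquet-java's encoder:

public class DeltaBlockSketch {
    static int[] relativeDeltas(int[] values, int previous) {
        int[] deltas = new int[values.length];
        for (int i = 0; i < values.length; i++) {
            deltas[i] = values[i] - previous; // step 1: consecutive differences (may wrap)
            previous = values[i];
        }
        int minDelta = deltas[0];
        for (int d : deltas) minDelta = Math.min(minDelta, d);
        for (int i = 0; i < deltas.length; i++) {
            deltas[i] -= minDelta; // step 2: subtract the frame of reference
        }
        return deltas; // non-negative values, ready for per-miniblock bit-packing
    }

    public static void main(String[] args) {
        // Example 2 below: first value 7, remaining values 5, 3, 1, 2, 3, 4, 5
        int[] rel = relativeDeltas(new int[] {5, 3, 1, 2, 3, 4, 5}, 7);
        for (int v : rel) System.out.print(v + " "); // prints: 0 0 0 3 3 3 3
    }
}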
The following examples use 8 as the block size to keep the examples short, but in real cases it
would be invalid.
Example 1

1, 2, 3, 4, 5

After step 1), the deltas would be

1, 1, 1, 1

The minimum delta is 1 and after step 2, the relative deltas become:

0, 0, 0, 0
Example 2

7, 5, 3, 1, 2, 3, 4, 5, the deltas would be

-2, -2, -2, 1, 1, 1, 1

The minimum is -2, so the relative deltas are:

0, 0, 0, 3, 3, 3, 3
Characteristics
This encoding is similar to the RLE/bit-packing encoding. However the RLE/bit-packing
encoding is specifically used when the range of ints is small over the entire page, as is true of
repetition and definition levels. It uses a single bit width for the whole page. The delta
encoding algorithm described above stores a bit width per miniblock and is less sensitive to
variations in the size of encoded integers. It also captures some of the benefit of RLE: a block
containing all the same values will be bit-packed to a zero bit width, thus being only a header.
Delta-length byte array (DELTA_LENGTH_BYTE_ARRAY = 6)

Supported Types: BYTE_ARRAY

This encoding is always preferred over PLAIN for byte array columns.
For this encoding, we will take all the byte array lengths and encode them using delta
encoding (DELTA_BINARY_PACKED). The byte array data follows all of the length data just
concatenated back to back. The expected savings is from the cost of encoding the lengths
and possibly better compression in the data (it is no longer interleaved with the lengths).
Delta Strings (DELTA_BYTE_ARRAY = 7)

Supported Types: BYTE_ARRAY

This is also known as incremental encoding or front compression: for each element in a
sequence of strings, store the prefix length of the previous entry plus the suffix. For example,
after "axis", the string "axle" can be stored as a prefix length of 2 plus the suffix "le".
Note that, even for FIXED_LEN_BYTE_ARRAY, all lengths are encoded despite the redundancy.
Byte Stream Split (BYTE_STREAM_SPLIT = 9)

Supported Types: FLOAT, DOUBLE

This encoding does not reduce the size of the data, but can lead to a significantly better
compression ratio and speed when a compression algorithm is used afterwards.
This encoding creates K byte-streams of length N where K is the size in bytes of the data type
and N is the number of elements in the data sequence. Specifically, K is 4 for FLOAT type and
8 for DOUBLE type. The bytes of each value are scattered to the corresponding streams. The
0-th byte goes to the 0-th stream, the 1-st byte goes to the 1-st stream and so on. The
streams are concatenated in the following order: 0-th stream, 1-st stream, etc. The total
length of encoded streams is K * N bytes. Because it does not have any metadata to indicate
the total length, the end of the streams is also the end of data page. No padding is allowed
inside the data page.
Example: Original data is three 32-bit floats; for simplicity we look at their raw
representation.

Element 0: AA BB CC DD
Element 1: 00 11 22 33
Element 2: A3 B4 C5 D6

After applying the transformation, the data has the following representation:

Bytes: AA 00 A3 BB 11 B4 CC 22 C5 DD 33 D6
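A sketch of the transform for FLOAT (K = 4) in Java; illustrative only:

public class ByteStreamSplitSketch {
    static byte[] split(float[] values) {
        int n = values.length;
        byte[] out = new byte[4 * n];
        for (int i = 0; i < n; i++) {
            int bits = Float.floatToIntBits(values[i]);
            for (int j = 0; j < 4; j++) {
                // the j-th little-endian byte of value i goes to stream j
                out[j * n + i] = (byte) (bits >>> (8 * j));
            }
        }
        return out;
    }
}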
3.7.3 - Parquet Modular Encryption
1 Problem Statement
Existing data protection solutions (such as flat encryption of files, in-storage encryption, or
use of an encrypting storage client) can be applied to Parquet files, but have various security
or performance issues. An encryption mechanism, integrated in the Parquet format, allows
for an optimal combination of data security, processing speed and encryption granularity.
2 Goals
1. Protect Parquet data and metadata by encryption, while enabling selective reads
(columnar projection, predicate push-down).
2. Implement “client-side” encryption/decryption (storage client). The storage server must
not see plaintext data, metadata or encryption keys.
3. Leverage authenticated encryption that allows clients to check integrity of the retrieved
data - making sure the file (or file parts) have not been replaced with a wrong version, or
tampered with otherwise.
4. Enable different encryption keys for different columns and for the footer.
5. Allow for partial encryption - encrypt only column(s) with sensitive data.
6. Work with all compression and encoding mechanisms supported in Parquet.
7. Support multiple encryption algorithms, to account for different security and
performance requirements.
8. Enable two modes for metadata protection -
full protection of file metadata
partial protection of file metadata that allows legacy readers to access unencrypted
columns in an encrypted file.
9. Minimize overhead of encryption - in terms of size of encrypted files, and throughput of
write/read operations.
3 Technical Approach
Parquet files are comprised of separately serialized components: pages, page headers,
column indexes, offset indexes, Bloom filter headers and bitsets, and the footer. The Parquet
encryption mechanism denotes these as "modules" and encrypts each module separately,
making it possible to fetch and decrypt the footer, find the offset of required pages, fetch
the pages and decrypt the data. In this document, the term "footer" always refers to the
regular Parquet footer - the FileMetaData structure, and its nested fields (row groups /
column chunks).
File encryption is flexible - each column and the footer can be encrypted with the same key,
with a different key, or not encrypted at all.
The results of compression of column pages are encrypted before being written to the
output stream. A new Thrift structure, with column crypto metadata, is added to column
chunks of the encrypted columns. This metadata provides information about the column
encryption keys.
The results of serialization of Thrift structures are encrypted, before being written to the
output stream.
The file footer can be either encrypted or left as a plaintext. In an encrypted footer mode, a
new Thrift structure with file crypto metadata is added to the file. This metadata provides
information about the file encryption algorithm and the footer encryption key.
In a plaintext footer mode, the contents of the footer structure is visible and signed in order
to verify its integrity. New footer fields keep an information about the file encryption
algorithm and the footer signing key.
For encrypted columns, the following modules are always encrypted, with the same column
key: pages and page headers (both dictionary and data), column indexes, offset indexes,
bloom filter headers and bitsets. If the column key is different from the footer encryption
key, the column metadata is serialized separately and encrypted with the column key. In this
case, the column metadata is also considered to be a module.
4 Encryption Algorithms and Keys

Initially, two algorithms have been implemented: one based on the GCM mode of AES, and
the other on a combination of the GCM and CTR modes.
Parquet encryption uses the RBG-based (random bit generator) nonce construction as
defined in section 8.2.2 of the NIST SP 800-38D document. For each encrypted module,
Parquet generates a unique nonce with a length of 12 bytes (96 bits). Note: the NIST
specification uses the term "IV" for what is called "nonce" in the Parquet encryption design.
4.2.1 AES_GCM_V1
This Parquet algorithm encrypts all modules by the GCM cipher, without padding. The AES
GCM cipher must be implemented by a cryptographic provider according to the NIST SP 800-
38D specification.
In Parquet, an input to the GCM cipher is an encryption key, a 12-byte nonce, a plaintext and
an AAD. The output is a ciphertext with the length equal to that of plaintext, and a 16-byte
authentication tag used to verify the ciphertext and AAD integrity.
4.2.2 AES_GCM_CTR_V1
In this Parquet algorithm, all modules except pages are encrypted with the GCM cipher, as
described above. The pages are encrypted with the CTR cipher without padding. This allows
the bulk of the data to be encrypted/decrypted faster, while still verifying the metadata
integrity and making sure the file has not been replaced with a wrong version. However,
tampering with the page data might go unnoticed. The AES CTR cipher must be implemented
by a cryptographic provider according to the NIST SP 800-38A specification.
In Parquet, an input to the CTR cipher is an encryption key, a 16-byte IV and a plaintext. IVs
are comprised of a 12-byte nonce and a 4-byte initial counter field. The first 31 bits of the
initial counter field are set to 0, the last bit is set to 1. The output is a ciphertext with the
length equal to that of plaintext.
4.3 Key Metadata

Parquet is not coupled to any particular key management service (KMS): the key metadata is
a free-form byte array that enables a reader to retrieve the relevant encryption key. For
example, the key metadata can keep:

String ID of a Data key. This enables direct retrieval of the Data key from a KMS.
Encrypted Data key, and string ID of a Master key. The Data key is generated randomly
and encrypted with a Master key either remotely in a KMS, or locally after retrieving the
Master key from a KMS. Master key rotation requires modification of the data file footer.
Short ID (counter) of a Data key inside the Parquet data file. The Data key is encrypted
with a Master key using one of the options described above - but the resulting key
material is stored separately, outside the data file, and will be retrieved using the
counter and file path. Master key rotation doesn't require modification of the data file.
Key metadata can also be empty - in case the encryption keys are fully managed by the
caller code and passed explicitly to Parquet readers for the file footer and each encrypted
column.
4.4 Additional Authenticated Data

Parquet constructs a module AAD from two components: an optional AAD prefix - a string
provided by the user for the file, and an AAD suffix, built internally for each GCM-encrypted
module inside the file. The AAD prefix should reflect the target identity that helps to detect
file swapping (a simple example - table name with a date and partition, e.g.
“employees_23May2018.part0”). The AAD suffix reflects the internal identity of modules
inside the file, which for example prevents replacement of column pages in row group 0 by
pages from the same column in row group 1. The module AAD is a direct concatenation of
the prefix and suffix parts.
The protection against swapping full files is optional. It is not enabled by default because it
requires the writers to generate and pass an AAD prefix.
A reader of a file created with an AAD prefix should be able to verify the prefix (file identity)
by comparing it with e.g. the target table name, using a convention accepted in the
organization. Readers of data sets, comprised of multiple partition files, can verify data set
integrity by checking the number of files and the AAD prefix of each file. For example, a
reader that needs to process the employee table, a May 23 version, knows (via the
convention) that the AAD prefix must be “employees_23May2018.partN” in each
corresponding table file. If a file AAD prefix is “employees_23May2018.part0”, the reader will
know it is fine, but if the prefix is “employees_23May2016.part0” or
“contractors_23May2018.part0” - the file is wrong. The reader should also know the number
of table partitions and verify availability of all partition files (prefixes) from 0 to N-1.
Unlike AAD prefix, a suffix is built internally by Parquet, by direct concatenation of the
following parts:
1. [All modules] internal file identifier - a random byte array generated for each file
(implementation-defined length)
2. [All modules] module type (1 byte)
3. [All modules except footer] row group ordinal (2 byte short, little endian)
4. [All modules except footer] column ordinal (2 byte short, little endian)
5. [Data page and header only] page ordinal (2 byte short, little endian)
The module types are:

Footer (0)
ColumnMetaData (1)
Data Page (2)
Dictionary Page (3)
Data PageHeader (4)
Dictionary PageHeader (5)
ColumnIndex (6)
OffsetIndex (7)
BloomFilter Header (8)
BloomFilter Bitset (9)
5 File Format
5.1 Encrypted module serialization
All modules, except column pages, are encrypted with the GCM cipher. In the AES_GCM_V1
algorithm, the column pages are also encrypted with AES GCM. For each module, the GCM
encryption buffer is comprised of a nonce, ciphertext and tag, described in the Algorithms
section. The length of the encryption buffer (a 4-byte little endian) is written to the output
stream, followed by the buffer itself.
length (4 bytes) nonce (12 bytes) ciphertext (length-28 bytes) tag (16 bytes)
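As a sketch of this serialization using the JDK's javax.crypto (not parquet-java's internal
classes; key and AAD handling are simplified):

import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class GcmModuleWriter {
    // Produces: 4-byte little-endian buffer length, then nonce || ciphertext || tag.
    // (javax.crypto appends the 16-byte GCM tag to the ciphertext it returns.)
    static byte[] serializeModule(byte[] plaintext, byte[] key, byte[] aad) throws Exception {
        byte[] nonce = new byte[12];
        new SecureRandom().nextBytes(nonce);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
                    new GCMParameterSpec(128, nonce));
        cipher.updateAAD(aad);
        byte[] ciphertextAndTag = cipher.doFinal(plaintext);

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN)
                            .putInt(nonce.length + ciphertextAndTag.length).array(), 0, 4);
        out.write(nonce, 0, nonce.length);
        out.write(ciphertextAndTag, 0, ciphertextAndTag.length);
        return out.toByteArray();
    }
}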
In the AES_GCM_CTR_V1 algorithm, the column pages are encrypted with AES CTR. For each
page, the CTR encryption buffer is comprised of a nonce and ciphertext, described in the
Algorithms section. The length of the encryption buffer (a 4-byte little endian) is written to the
output stream, followed by the buffer itself.
length (4 bytes) nonce (12 bytes) ciphertext (length-16 bytes)
struct AesGcmV1 {
  /** AAD prefix **/
  1: optional binary aad_prefix

  /** Unique file identifier part of AAD suffix **/
  2: optional binary aad_file_unique

  /** In files encrypted with AAD prefix without storing it,
   * readers must supply the prefix **/
  3: optional bool supply_aad_prefix
}

struct AesGcmCtrV1 {
  /** AAD prefix **/
  1: optional binary aad_prefix

  /** Unique file identifier part of AAD suffix **/
  2: optional binary aad_file_unique

  /** In files encrypted with AAD prefix without storing it,
   * readers must supply the prefix **/
  3: optional bool supply_aad_prefix
}

union EncryptionAlgorithm {
  1: AesGcmV1 AES_GCM_V1
  2: AesGcmCtrV1 AES_GCM_CTR_V1
}
If a writer provides an AAD prefix, it will be used for enciphering the file and stored in the
aad_prefix field. However, the writer can request Parquet not to store the prefix in the file.
In this case, the aad_prefix field will not be set, and the supply_aad_prefix field will be set to
true to inform readers they must supply the AAD prefix for this file in order to be able to
decrypt it.
The row group ordinal, required for AAD suffix calculation, is set in the RowGroup structure:
struct RowGroup {
...
/** Row group ordinal in the file **/
7: optional i16 ordinal
}
struct EncryptionWithFooterKey {
}
struct EncryptionWithColumnKey {
  /** Column path in schema **/
  1: required list<string> path_in_schema

  /** Retrieval metadata of column encryption key **/
  2: optional binary key_metadata
}

union ColumnCryptoMetaData {
  1: EncryptionWithFooterKey ENCRYPTION_WITH_FOOTER_KEY
  2: EncryptionWithColumnKey ENCRYPTION_WITH_COLUMN_KEY
}
struct ColumnChunk {
...
/** Crypto metadata of encrypted columns **/
8: optional ColumnCryptoMetaData crypto_metadata
}
struct ColumnChunk {
  ...
  /** Encrypted column metadata for this chunk **/
  9: optional binary encrypted_column_metadata
}
The columns encrypted with the same key as the footer must leave the column metadata at
the original location, optional ColumnMetaData meta_data in the ColumnChunk structure. This
field is not set for columns encrypted with a column-specific key - instead, the
ColumnMetaData is Thrift-serialized, encrypted with the column key and written to the
encrypted_column_metadata field in the ColumnChunk structure, as described in the section 5.3.
/** Crypto metadata for files with encrypted footer **/
struct FileCryptoMetaData {
/**
* Encryption algorithm. This field is only used for files
* with encrypted footer. Files with plaintext footer store algorithm id
* inside footer (FileMetaData structure).
*/
1: required EncryptionAlgorithm encryption_algorithm
The plaintext footer mode can be useful during a transitional period in organizations where
some frameworks can’t be upgraded to a new Parquet library for a while. Data writers will
upgrade and run with a new Parquet version, producing encrypted files in this mode. Data
readers working with sensitive data will also upgrade to a new Parquet library. But other
readers that don’t need the sensitive columns, can continue working with an older Parquet
version. They will be able to access plaintext columns in encrypted files. A legacy reader
trying to access sensitive column data in an encrypted file with a plaintext footer will get an
exception - more specifically, a Thrift parsing exception on an encrypted page header
structure. Again, using legacy Parquet readers for encrypted files is a temporary solution.
In the plaintext footer mode, the optional ColumnMetaData meta_data is set in the ColumnChunk
structure for all columns, but is stripped of the statistics for the sensitive (encrypted)
columns. These statistics are available for new readers with the column key - they decrypt
the encrypted_column_metadata field, described in the section 5.3, and parse it to get statistics
and all other column metadata values. The legacy readers are not aware of the encrypted
metadata field; they parse the regular (plaintext) field as usual. While they can’t read the data
of encrypted columns, they read their metadata to extract the offset and size of encrypted
column data, required for column chunk vectorization.
The plaintext footer is signed in order to prevent tampering with the FileMetaData contents.
The footer signing is done by encrypting the serialized FileMetaData structure with the AES
GCM algorithm - using a footer signing key, and an AAD constructed according to the
instructions of the section 4.4. Only the nonce and GCM tag are stored in the file – as a 28-
byte fixed-length array, written right after the footer itself. The ciphertext is not stored,
because it is not required for footer integrity verification by readers.
The plaintext footer mode sets the following fields in the FileMetaData structure:
struct FileMetaData {
...
/**
* Encryption algorithm. This field is set only in encrypted files
* with plaintext footer. Files with encrypted footer store algorithm id
* in FileCryptoMetaData structure.
*/
8: optional EncryptionAlgorithm encryption_algorithm
/**
* Retrieval metadata of key used for signing the footer.
* Used only in encrypted files with plaintext footer.
*/
9: optional binary footer_signing_key_metadata
}
The FileMetaData structure is Thrift-serialized and written to the output stream. The 28-byte
footer signature is written after the plaintext footer, followed by a 4-byte little endian integer
that contains the combined length of the footer and its signature. A final magic string, “PAR1”,
is written at the end of the file. The same magic string is written at the beginning of the file
(offset 0). The magic bytes for plaintext footer mode are ‘PAR1’ to allow legacy readers to
read projections of the file that do not include encrypted columns.
6. Encryption Overhead
The size overhead of Parquet modular encryption is negligible, since most of the encryption
operations are performed on pages (the minimal unit of Parquet data storage and
compression). The overhead is on the order of 1 byte per ~30,000 bytes of original data -
calculated by comparing the page encryption overhead (nonce + tag + length = 32 bytes) to
the default page size (1 MB). This is a rough estimate, and can change with the encryption
algorithm (no 16-byte tag in AES_GCM_CTR_V1) and with page configuration or data
encoding/compression.
3.7.4 - Checksumming
Pages of all kinds can be individually checksummed. This allows disabling of checksums at
the HDFS file level, to better support single row lookups. Checksums are calculated using the
standard CRC32 algorithm - as used in e.g. GZip - on the serialized binary representation of a
page (not including the page header itself).
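A sketch of the computation with the JDK's CRC32 (the result would be stored in the page
header's optional crc field; illustrative, not parquet-java's code):

import java.util.zip.CRC32;

public class PageChecksum {
    // CRC32 over the page's serialized binary representation,
    // excluding the page header itself.
    static int pageCrc(byte[] serializedPageBody) {
        CRC32 crc = new CRC32();
        crc.update(serializedPageBody, 0, serializedPageBody.length);
        return (int) crc.getValue();
    }
}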
3.7.5 - Column Chunks
A column chunk might be partly or completely dictionary encoded. This means that dictionary
indexes are saved in the data pages instead of the actual values. The actual values are stored
in the dictionary page. See details in Encodings.md. The dictionary page must be placed at
the first position of the column chunk. At most one dictionary page can be placed in a column
chunk.
Additionally, files can contain an optional column index to allow readers to skip pages more
efficiently. See PageIndex.md for details and the reasoning behind adding these to the
format.
3.7.6 - Error Recovery

If the file metadata is corrupt, the file is lost. If the column metadata is corrupt, that column
chunk is lost (but column chunks for this column in other row groups are okay). If a page
header is corrupt, the remaining pages in that chunk are lost. If the data within a page is
corrupt, that page is lost. The file will be more resilient to corruption with smaller row
groups.
Potential extension: With smaller row groups, the biggest issue is placing the file metadata at
the end. If an error happens while writing the file metadata, all the data written will be
unreadable. This can be fixed by writing the file metadata every Nth row group. Each file
metadata would be cumulative and include all the row groups written so far. Combining this
with the strategy used for rc or avro files using sync markers, a reader could recover partially
written files.
3.8 - Nulls
Nullity is encoded in the definition levels (which are run-length encoded). NULL values are not
encoded in the data. For example, in a non-nested schema, a column with 1000 NULLs would
be encoded with run-length encoding (0, 1000 times) for the definition levels and nothing
else.
3.9 - Page Index
Problem Statement
In previous versions of the format, Statistics are stored for ColumnChunks in
ColumnMetaData and for individual pages inside DataPageHeader structs. When reading
pages, a reader had to process the page header to determine whether the page could be
skipped based on the statistics. This means the reader had to access all pages in a column,
thus likely reading most of the column data from disk.
Goals
1. Make both range scans and point lookups I/O efficient by allowing direct access to pages
based on their min and max values. In particular:
A single-row lookup in a row group based on the sort column of that row group will
only read one data page per retrieved column.
Range scans on the sort column will only need to read the exact data pages that
contain relevant data.
Make other selective scans I/O efficient: if we have a very selective predicate on a
non-sorting column, for the other retrieved columns we should only need to access
data pages that contain matching rows.
2. No additional decoding effort for scans without selective predicates, e.g., full-row group
scans. If a reader determines that it does not need to read the index data, it does not
incur any overhead.
3. Index pages for sorted columns use minimal storage by storing only the boundary
elements between pages.
Non-Goals
Support for the equivalent of secondary indices, i.e., an index structure sorted on the key
values over non-sorted data.
Technical Approach
Two new data structures are added to the format:
ColumnIndex: this allows navigation to the pages of a column based on column values
and is used to locate data pages that contain matching values for a scan predicate
OffsetIndex: this allows navigation by row index and is used to retrieve values for rows
identified as matches via the ColumnIndex. Once rows of a column are skipped, the
corresponding rows in the other columns have to be skipped. Hence the OffsetIndexes
for each column in a RowGroup are stored together.
The new index structures are stored separately from RowGroup, near the footer.
This is done so that a reader does not have to pay the I/O and deserialization cost for reading
them if it is not doing selective scans. The index structures' location and length are stored in
ColumnChunk.
Some observations:
We don’t need to record the lower bound for the first page and the upper bound for the
last page, because the row group Statistics can provide that. We still include those for the
sake of uniformity, and the overhead should be negligible.
We store lower and upper bounds for the values of each page. These may be the actual
minimum and maximum values found on a page, but can also be (more compact) values
that do not exist on a page. For example, instead of storing "Blart Versenwald III", a
writer may set min_values[i]="B" , max_values[i]="C" . This allows writers to truncate
large values, and writers should use this to enforce some reasonable bound on the size
of the index structures.
Readers that support ColumnIndex should not also use page statistics. The only reason
to write page-level statistics when writing ColumnIndex structs is to support older
readers (not recommended).
For ordered columns, this allows a reader to find matching pages by performing a binary
search in min_values and max_values . For unordered columns, a reader can find matching
pages by sequentially reading min_values and max_values .
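A sketch of page selection for an equality predicate using per-page bounds; int bounds are
used for brevity (real ColumnIndex bounds are encoded byte arrays), and for a sorted
column the loop could be replaced by binary search:

import java.util.ArrayList;
import java.util.List;

public class PageSelection {
    static List<Integer> candidatePages(int[] minValues, int[] maxValues, int key) {
        List<Integer> pages = new ArrayList<>();
        for (int i = 0; i < minValues.length; i++) {
            if (key >= minValues[i] && key <= maxValues[i]) {
                pages.add(i); // page i may contain the key; all other pages can be skipped
            }
        }
        return pages;
    }
}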
For range scans, this approach can be extended to return ranges of rows, page indices, and
page offsets to scan in each column. The reader can then initialize a scanner for each column
and fast forward them to the start row of the scan.
The min_values and max_values are calculated based on the column_orders field in the
FileMetaData struct of the footer.
3.10 - Implementation status
Note: This is a work in progress and we would welcome help expanding its scope.
Legend
The value in each box means:
✅: supported
❌: not supported
(blank) no data
Implementations:
C++ : parquet-cpp
Java : parquet-java
Go : parquet-go
Rust : parquet-rs
Physical types
Data type C++ Java Go Rust
BOOLEAN
INT32
INT64
INT96 (1)
FLOAT
DOUBLE
BYTE_ARRAY
FIXED_LEN_BYTE_ARRAY
(1) This type is deprecated, but as of 2024 it’s common in currently produced parquet
files
Logical types
STRING
ENUM
UUID
DECIMAL (INT32)
DECIMAL (INT64)
DECIMAL (BYTE_ARRAY)
DECIMAL (FIXED_LEN_BYTE_ARRAY)
DATE
TIME (INT32)
TIME (INT64)
TIMESTAMP (INT64)
INTERVAL
JSON
BSON
LIST
MAP
FLOAT16
Encodings
Encoding C++ Java Go Rust
PLAIN
PLAIN_DICTIONARY
RLE_DICTIONARY
RLE
BIT_PACKED (deprecated)
DELTA_BINARY_PACKED
DELTA_LENGTH_BYTE_ARRAY
DELTA_BYTE_ARRAY
BYTE_STREAM_SPLIT
Compressions
Compression C++ Java Go Rust
UNCOMPRESSED
BROTLI
GZIP
LZ4 (deprecated)
LZ4_RAW
LZO
SNAPPY
ZSTD
Features

Feature C++ Java Go Rust
Page index
Modular encryption
4 - Developer Guide
All developer resources related to Parquet.
4.1 - Sub-Projects
The parquet-format project contains format specifications and Thrift definitions of metadata
required to properly read Parquet files.
The parquet-java project contains multiple sub-modules, which implement the core
components of reading and writing a nested, column-oriented data stream, map this core
onto the parquet format, and provide Hadoop Input/Output Formats, Pig loaders, and other
Java-based utilities for interacting with Parquet.
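For a flavor of the Java API, here is a minimal sketch that reads records as untyped Group
objects via the example module (for illustration only; real applications usually bind
records through the Avro, Protobuf, or Thrift sub-modules):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.parquet.hadoop.example.GroupReadSupport;

public class ReadExample {
  public static void main(String[] args) throws Exception {
    // Iterate over all records of a Parquet file and print them.
    try (ParquetReader<Group> reader = ParquetReader
        .builder(new GroupReadSupport(), new Path(args[0]))
        .withConf(new Configuration())
        .build()) {
      for (Group record = reader.read(); record != null; record = reader.read()) {
        System.out.println(record);
      }
    }
  }
}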
The parquet-cpp project is a C++ library to read and write Parquet files. It is part of the Apache
Arrow C++ implementation, with bindings to Python, R, Ruby and C/GLib.
The parquet-compatibility project (deprecated) contains compatibility tests that can be used
to verify that implementations in different languages can read and write each other’s files. As
of January 2022 compatibility tests only exist up to version 1.2.0.
4.2 - Building Parquet
Java resources can be built using mvn package . The current stable version should
always be available from Maven Central.
4.3 - Contributing to Parquet-Java
Pull Requests
We prefer to receive contributions in the form of GitHub pull requests. Please send pull
requests against the github.com/apache/parquet-java repository. If you’ve previously forked
Parquet from its old location, you will need to add a remote or update your origin remote to
https://github.com/apache/parquet-java.git. Here are a few tips to get your contribution in:
1. Break your work into small, single-purpose patches if possible. It’s much harder to merge
in a large change with a lot of disjoint features.
2. Create an issue in the Parquet-Java issue tracker.
3. Submit the patch as a GitHub pull request against the master branch. For a tutorial, see
the GitHub guides on forking a repo and sending a pull request. Prefix the pull request
title with the issue number, for example GH-2935: (see
https://github.com/apache/parquet-java/pull/2951).
4. Make sure that your code passes the unit tests. You can run the tests with mvn test in
the root directory.
5. Add new unit tests for your code.
6. All Pull Requests are tested automatically on GitHub Actions.
If you’d like to report a bug but don’t have time to fix it, you can still raise an issue, or email
the mailing list ([email protected]).
Committers
Merging a Pull Request
Merging a pull request requires being a committer on the project and approval of the PR by a
committer who is not the author.
A pull request can be merged through the GitHub UI. By default, only squash and merge is
enabled on the project.
When the PR solves an existing issue, ensure that it references the issue in the pull-request
template, e.g. Closes #1234 . This way the issue is linked to the PR, and GitHub will
automatically close it when the PR is merged.
When a PR fixes a bug, or implements a feature that should go into a certain version,
make sure to attach a milestone. This way other committers can track what is still pending
for each version. For information on the actual release, please check the release
page.
Maintenance branches
Once a PR has been merged to master, the commit may need to be backported to
maintenance branches (ex: 1.14.x). The easiest way is to do this locally:
git remote add github-apache [email protected]:apache/parquet-java.git
git fetch --all
# sync the local maintenance branch with upstream before cherry-picking
git checkout parquet-1.14.x
git reset --hard github-apache/parquet-1.14.x
git cherry-pick <hash-from-the-commit>
git push github-apache parquet-1.14.x
Website
Release Documentation
To create documentation for a new release of parquet-format create a new .md file under
content/en/blog/parquet-format . Please see existing files in that directory as an example.
To create documentation for a new release of parquet-java create a new .md file under
content/en/blog/parquet-java . Please see existing files in that directory as an example.
Staging
To make a change to the staging version of the website:
1. Open a pull request with your change against the website repository.
2. Once the PR is merged, the Build and Deploy Parquet Site job in the deployment
workflow will be run, populating the asf-staging branch on this repo with the necessary
files.
Production
To make a change to the production version of the website:
4.4 - Releasing Parquet
Setup
Make sure you have permission to deploy Parquet artifacts to Nexus. You can verify this by
pushing a snapshot:
mvn deploy
Release process
Parquet uses the maven-release-plugin to tag a release and push binary artifacts to staging in
Nexus. Once maven completes the release, the official source tarball is built from the tag.
1. Verify that the release is ready (no planned issues/PRs are pending on the milestone)
2. Build and test the project
3. Create a new branch for the release if this is a new minor version. For example, if the
new minor version is 1.14.0, create a new branch parquet-1.14.x
./dev/prepare-release.sh <version> <rc-number>
This runs maven’s release prepare with a consistent tag name. After this step, the release tag
will exist in the git repository.
If this step fails, you can roll back the changes by running these commands.
mvn release:perform
1. Go to Nexus.
2. In the menu on the left, choose “Staging Repositories”.
3. Select the Parquet repository.
4. At the top, click “Close” and follow the instructions. For the comment use “Apache
Parquet [Format] ”.
dev/source-release.sh <version> <rc-number>
This script builds the source tarball from the release tag’s SHA1, signs it, and uploads the
necessary files with SVN.
The last message from the script is the release commit’s SHA1 hash and the URL for the
VOTE e-mail.
When drafting the release notes, consider cleaning up the generated changelog by removing
uninteresting changes (whitespace, test-only changes) and sorting them to make them easier
to digest. Make sure to check the Set as pre-release checkbox as this is a release candidate.
Hi everyone,
This release includes important changes that I should have summarized here, but I'm lazy
./dev/finalize-release <release-version> <rc-num> <new-development-version-without-SNAPSHOT>
This adds the final release tag on top of the RC tag and sets the new development version in
the POM files. If everything is fine, push the changes and the new tag to GitHub: git push --
follow-tags
1. Go to Nexus.
2. In the menu on the left, choose “Staging Repositories”.
3. Select the Parquet repository.
4. At the top, click Release and follow the instructions. For the comment use “Apache
Parquet [Format] ”.
svn mv https://dist.apache.org/repos/dist/dev/parquet/apache-parquet-<VERSION>-rcN/ https://dist.apache.org/repos/dist/release/parquet/apache-parquet-<VERSION>/
4. Update parquet.apache.org
Update the downloads page on parquet.apache.org. Instructions for updating the site are on
the contribution page.
5 - Resources
Various resources to learn about the Parquet File Format.
5.2 - Presentations
Presentations with content about the Parquet File Format.
5.2.1 - Spark Summit 2020
Slides
5.2.2 - Hadoop Summit 2014
Slides
6.1 - License
6.2 - Security
6.3 - Sponsor
6.4 - Donate
6.5 - Events