[Parquet]: GH-563: Make path_in_schema optional #9678

Open
etseidl wants to merge 4 commits into apache:main from etseidl:deprecate_path_in_schema

Conversation

@etseidl
Contributor

@etseidl etseidl commented Apr 8, 2026

Which issue does this PR close?

none

Rationale for this change

This is a proof of concept implementation for apache/parquet-format#563

What changes are included in this PR?

Since version 57.0.0, this crate has been tolerant of a missing path_in_schema when reading. This PR adds an option to stop writing the field as well; the option defaults to continuing to write the field.

See related discussion on parquet mailing list: https://lists.apache.org/thread/czm2bk45wwtkhhpqxqvmx9dk5wkwk1kt

Are these changes tested?

Yes

Are there any user-facing changes?

No; this only adds an opt-in behavior change whose default preserves the current behavior.

Related PRs

@alamb
Contributor

alamb commented Apr 8, 2026

less bloat for the win!

@jhorstmann
Contributor

A lot of other parquet implementations require this field, due to their generated thrift parser, even if they do not actually use the field for anything. I would be totally in favor of deprecating and skipping the field, but maybe a more compatible alternative would be to write the field as an empty list instead.

@etseidl
Contributor Author

etseidl commented Apr 10, 2026

> A lot of other parquet implementations require this field, due to their generated thrift parser, even if they do not actually use the field for anything. I would be totally in favor of deprecating and skipping the field, but maybe a more compatible alternative would be to write the field as an empty list instead.

Well, parquet-java actually uses the field to populate its version of ColumnDescriptor, so an empty list will be just as damaging to an old version. The whole idea is to change the field to optional in the thrift definition, and give the ecosystem a few years for that change to percolate. After some reasonable time has passed we can default to not writing the field. But in the meantime, users who have data sensitive to metadata bloat and know they have up-to-date tooling can help themselves earlier.
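The format-side change etseidl describes amounts to loosening the field's requiredness in `parquet.thrift`. A sketch of the intended shape follows (the field id and neighbors are shown as in the published `ColumnMetaData` definition; `optional` here is the proposed state, not the current one — verify against the spec):

```thrift
struct ColumnMetaData {
  /** Type of this column */
  1: required Type type

  /** Set of all encodings used for this column */
  2: required list<Encoding> encodings

  /** Proposed: was `required`. Writers could eventually omit the path,
      and readers would derive it from the schema tree instead. */
  3: optional list<string> path_in_schema

  ...
}
```

In Thrift, changing a field from `required` to `optional` is wire-compatible for readers generated from the new definition, which is why the plan is to let the relaxed definition percolate through the ecosystem before writers stop emitting the field.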

@alamb
Contributor

alamb commented Apr 10, 2026

> A lot of other parquet implementations require this field, due to their generated thrift parser, even if they do not actually use the field for anything. I would be totally in favor of deprecating and skipping the field, but maybe a more compatible alternative would be to write the field as an empty list instead.
>
> Well, parquet-java actually uses the field to populate its version of ColumnDescriptor, so an empty list will be just as damaging to an old version. The whole idea is to change the field to optional in the thrift definition, and give the ecosystem a few years for that change to percolate. After some reasonable time has passed we can default to not writing the field. But in the meantime, users who have data sensitive to metadata bloat and know they have up-to-date tooling can help themselves earlier.

See also related mailing list discussion:

Contributor

@alamb alamb left a comment


TLDR: I think this is a great idea. I also think it would be OK to merge this into arrow-rs even if there is no broader consensus on the mailing list that we should do it in the format itself.

My thinking is that some use cases basically use parquet with the same reader/writer, where compatibility with older Java-based implementations is not important. The same is true of some of the newer encodings.

Letting users choose between "better/faster/stronger" and "more compatible" is, I think, very much a good idea.

///
/// The `path_in_schema` field in the Thrift metadata is redundant and wastes a great
/// deal of space. Parquet file footers can be made much smaller by omitting this field.
/// Because the field was originally a mandatory field, this property defaults to `true`
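The redundancy the doc comment refers to is that every column's `path_in_schema` is derivable from the `SchemaElement` list the footer already stores in depth-first order. A self-contained sketch of that reconstruction follows (the `SchemaElement` struct below is a simplified stand-in, not the parquet crate's type):

```rust
// Simplified stand-in for the Thrift SchemaElement: a name plus a child
// count. Parquet footers store the schema as a depth-first flattened list.
struct SchemaElement {
    name: String,
    num_children: usize, // 0 for leaf (primitive) columns
}

// Reconstruct the dotted path of every leaf column by walking the
// flattened list depth-first, carrying ancestor names on a stack.
fn leaf_paths(schema: &[SchemaElement]) -> Vec<String> {
    fn walk(
        schema: &[SchemaElement],
        idx: &mut usize,
        prefix: &mut Vec<String>,
        out: &mut Vec<String>,
    ) {
        let elem = &schema[*idx];
        *idx += 1;
        prefix.push(elem.name.clone());
        if elem.num_children == 0 {
            // Leaf column: its path is the names on the stack, dot-joined.
            out.push(prefix.join("."));
        } else {
            for _ in 0..elem.num_children {
                walk(schema, idx, prefix, out);
            }
        }
        prefix.pop();
    }

    // The root element's name is not part of column paths, so descend
    // straight into its children.
    let mut idx = 0;
    let root_children = schema[idx].num_children;
    idx += 1;
    let mut prefix = Vec::new();
    let mut out = Vec::new();
    for _ in 0..root_children {
        walk(schema, &mut idx, &mut prefix, &mut out);
    }
    out
}
```

For a schema flattened as `[root(2 children), a(leaf), b(1 child), c(leaf)]` this yields `["a", "b.c"]`, which is exactly what `path_in_schema` would have stored for each column chunk.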
Contributor


If we choose to go this way I think it would help to give some more context here on what types of readers would be affected (basically all the parquet-java based readers prior to late 2026)

We could also perhaps provide a link to the discussion: https://lists.apache.org/thread/czm2bk45wwtkhhpqxqvmx9dk5wkwk1kt

Comment on lines +854 to +855
/// Should the writer emit the `path_in_schema` element of the
/// `ColumnMetaData` Thrift struct.
Contributor


I think it is worth making it more apparent that this will cause incompatibilities with some older readers:

Suggested change
- /// Should the writer emit the `path_in_schema` element of the
- /// `ColumnMetaData` Thrift struct.
+ /// Should the writer emit the `path_in_schema` element of the
+ /// `ColumnMetaData` Thrift struct. WARNING: this will make the parquet
+ /// file unreadable by some older parquet readers. See LINK HERE for details

@alamb
Contributor

alamb commented Apr 10, 2026

@alamb
Contributor

alamb commented May 1, 2026

I wonder if we should just merge this into the Rust implementation of Parquet (with a caveat that this will make metadata smaller / more efficient at the cost of incompatibility with older / Java-based readers)?

@etseidl
Contributor Author

etseidl commented May 1, 2026

I'm game. I don't see a reason not to, since it's opt-in.

@etseidl etseidl marked this pull request as ready for review May 1, 2026 13:46
Contributor

@alamb alamb left a comment


Thanks @etseidl -- I will try and review this more carefully shortly. I need to work on FSB stuff for a while first

{
    let buf = TrackedWrite::new(&mut buffer);
-   let writer = ParquetMetaDataWriter::new_with_tracked(buf, &metadata);
+   let writer = ParquetMetaDataWriter::new_with_tracked(buf, &metadata)
Contributor


Should we perhaps add a new benchmark so we can evaluate the performance both in default mode and without this part of the metadata?

Contributor Author


Yes, that's probably worthwhile. I had made this modification for the purposes of the parquet-format submission.

@etseidl etseidl force-pushed the deprecate_path_in_schema branch from ac323f0 to 1d4eb04 Compare May 1, 2026 18:29
@etseidl
Contributor Author

etseidl commented May 1, 2026

run benchmark metadata

@adriangbot

🤖 Arrow criterion benchmark running (GKE) | trigger
Instance: c4a-highcpu-48 (12 vCPU / 65 GiB) | Linux bench-c4361872261-1967-zrqkp 6.12.55+ #1 SMP Sun Feb 1 08:59:41 UTC 2026 aarch64 GNU/Linux

CPU Details (lscpu)
Architecture:                            aarch64
CPU op-mode(s):                          64-bit
Byte Order:                              Little Endian
CPU(s):                                  48
On-line CPU(s) list:                     0-47
Vendor ID:                               ARM
Model name:                              Neoverse-V2
Model:                                   1
Thread(s) per core:                      1
Core(s) per cluster:                     48
Socket(s):                               -
Cluster(s):                              1
Stepping:                                r0p1
BogoMIPS:                                2000.00
Flags:                                   fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 svesm4 flagm2 frint svei8mm svebf16 i8mm bf16 dgh rng bti
L1d cache:                               3 MiB (48 instances)
L1i cache:                               3 MiB (48 instances)
L2 cache:                                96 MiB (48 instances)
L3 cache:                                80 MiB (1 instance)
NUMA node(s):                            1
NUMA node0 CPU(s):                       0-47
Vulnerability Gather data sampling:      Not affected
Vulnerability Indirect target selection: Not affected
Vulnerability Itlb multihit:             Not affected
Vulnerability L1tf:                      Not affected
Vulnerability Mds:                       Not affected
Vulnerability Meltdown:                  Not affected
Vulnerability Mmio stale data:           Not affected
Vulnerability Reg file data sampling:    Not affected
Vulnerability Retbleed:                  Not affected
Vulnerability Spec rstack overflow:      Not affected
Vulnerability Spec store bypass:         Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:                Mitigation; __user pointer sanitization
Vulnerability Spectre v2:                Mitigation; CSV2, BHB
Vulnerability Srbds:                     Not affected
Vulnerability Tsa:                       Not affected
Vulnerability Tsx async abort:           Not affected
Vulnerability Vmscape:                   Not affected

Comparing deprecate_path_in_schema (feaae15) to f725bc9 (merge-base) diff
BENCH_NAME=metadata
BENCH_COMMAND=cargo bench --features=arrow,async,test_common,experimental,object_store --bench metadata
BENCH_FILTER=
Results will be posted here when complete


File an issue against this benchmark runner

@adriangbot

🤖 Arrow criterion benchmark completed (GKE) | trigger

Instance: c4a-highcpu-48 (12 vCPU / 65 GiB)

CPU Details (lscpu): identical to the block in the previous comment.
Details

group                                               deprecate_path_in_schema               main
-----                                               ------------------------               ----
decode metadata (wide) with schema                  1.01     26.8±0.40ms        ? ?/sec    1.00     26.5±0.42ms        ? ?/sec
decode metadata (wide) with skip PES                1.00     26.3±0.37ms        ? ?/sec    1.00     26.2±0.43ms        ? ?/sec
decode metadata (wide) with skip all stats          1.00     28.2±0.31ms        ? ?/sec    1.00     28.3±0.29ms        ? ?/sec
decode metadata (wide) with skip column stats       1.01     27.6±0.38ms        ? ?/sec    1.00     27.3±0.36ms        ? ?/sec
decode metadata (wide) with skip size stats         1.00     30.4±0.37ms        ? ?/sec    1.00     30.4±0.37ms        ? ?/sec
decode metadata (wide) with stats mask              1.01     26.4±0.38ms        ? ?/sec    1.00     26.3±0.32ms        ? ?/sec
decode metadata with schema                         1.00      4.1±0.01µs        ? ?/sec    1.01      4.1±0.01µs        ? ?/sec
decode metadata with skip PES                       1.00      6.9±0.05µs        ? ?/sec    1.01      7.0±0.04µs        ? ?/sec
decode metadata with skip column stats              1.00      6.7±0.02µs        ? ?/sec    1.00      6.8±0.03µs        ? ?/sec
decode metadata with stats mask                     1.00      6.9±0.02µs        ? ?/sec    1.01      7.0±0.04µs        ? ?/sec
decode parquet metadata                             1.00      7.1±0.01µs        ? ?/sec    1.02      7.3±0.04µs        ? ?/sec
decode parquet metadata (wide)                      1.00     28.8±0.37ms        ? ?/sec    1.00     28.7±0.42ms        ? ?/sec
decode parquet metadata no path_in_schema (wide)    1.00     28.1±0.41ms        ? ?/sec  
decode parquet metadata w/ size stats (wide)        1.00     35.7±0.43ms        ? ?/sec    1.00     35.6±0.47ms        ? ?/sec
open(default)                                       1.00      7.7±0.02µs        ? ?/sec    1.02      7.8±0.02µs        ? ?/sec
open(page index)                                    1.00    132.7±0.35µs        ? ?/sec    1.00    132.7±0.37µs        ? ?/sec

Resource Usage

base (merge-base)

Metric Value
Wall time 145.0s
Peak memory 4.3 GiB
Avg memory 4.2 GiB
CPU user 141.8s
CPU sys 0.8s
Peak spill 0 B

branch

Metric Value
Wall time 155.0s
Peak memory 4.3 GiB
Avg memory 4.2 GiB
CPU user 150.9s
CPU sys 0.2s
Peak spill 0 B


Contributor

@alamb alamb left a comment


I think this PR is ready except for:

  1. Don't change the existing writer benchmark (otherwise we are not benchmarking with default settings)
  2. Update the docs a bit more to explain the implications of not writing path_in_schema


Labels

parquet Changes to the parquet crate

4 participants