Structure query engine caches around DefId being a CrateNum & intra-crate DefIndex. #45275

Closed
eddyb opened this issue Oct 14, 2017 · 1 comment · Fixed by #119977
Labels
A-query-system — Area: The rustc query system (https://rustc-dev-guide.rust-lang.org/query.html)
C-enhancement — Category: An issue proposing an enhancement or a PR with one.
T-compiler — Relevant to the compiler team, which will review and decide on the PR/issue.

Comments

eddyb (Member) commented Oct 14, 2017

Per-crate cache maps keyed on `DefIndex` may be more efficient and/or allow us to use `Vec` for queries keyed on `DefId` instead of `HashMap`.
We can use specialization (e.g. for splitting `(DefId, T)` into `CrateNum` and `(DefIndex, T)`), I think.

cc @nikomatsakis @michaelwoerister
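
As a rough illustration of the per-crate split being proposed here (using simplified stand-in types rather than rustc's actual `CrateNum`/`DefIndex`/`DefId`, and plain `HashMap`/`Vec` instead of the compiler's own data structures):

```rust
use std::collections::HashMap;

// Simplified stand-ins for the compiler's CrateNum/DefIndex/DefId (the real
// ones are newtyped u32 indices in rustc_span); used only for illustration.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct CrateNum(u32);

#[derive(Clone, Copy)]
struct DefIndex(u32);

#[derive(Clone, Copy)]
struct DefId {
    krate: CrateNum,
    index: DefIndex,
}

/// A DefId-keyed cache split into per-crate tables keyed by DefIndex.
/// A hit costs one hash lookup on the (small) CrateNum plus one array
/// index, instead of hashing the whole DefId.
struct PerCrateCache<V> {
    crates: HashMap<CrateNum, Vec<Option<V>>>,
}

impl<V> PerCrateCache<V> {
    fn new() -> Self {
        PerCrateCache { crates: HashMap::new() }
    }

    fn get(&self, key: DefId) -> Option<&V> {
        self.crates.get(&key.krate)?.get(key.index.0 as usize)?.as_ref()
    }

    fn insert(&mut self, key: DefId, value: V) {
        let table = self.crates.entry(key.krate).or_default();
        let idx = key.index.0 as usize;
        if table.len() <= idx {
            // Grow the per-crate table on demand; empty slots stay None.
            table.resize_with(idx + 1, || None);
        }
        table[idx] = Some(value);
    }
}

fn main() {
    let mut cache = PerCrateCache::new();
    let id = DefId { krate: CrateNum(0), index: DefIndex(7) };
    cache.insert(id, "cached query result");
    assert_eq!(cache.get(id), Some(&"cached query result"));
}
```

The trade-off is the one raised in the next comment: the per-crate `Vec` only pays off if the `DefIndex` space ends up densely populated.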

michaelwoerister (Member) commented:

Sounds like a good idea if the lookup array ends up being densely populated enough. In general, we should provide a way to implement customized in-memory caching for specific queries.
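
A minimal sketch of what such per-query cache customization could look like, assuming a hypothetical `QueryCache` trait (the real rustc query system defines its own cache abstraction with different names and details):

```rust
use std::collections::HashMap;
use std::hash::Hash;

/// Hypothetical per-query cache trait: each query picks its own in-memory
/// storage strategy. Illustrative only; not rustc's actual trait.
trait QueryCache {
    type Key;
    type Value;
    fn lookup(&self, key: &Self::Key) -> Option<&Self::Value>;
    fn complete(&mut self, key: Self::Key, value: Self::Value);
}

/// General-purpose cache: works for any hashable key, pays for hashing.
struct HashCache<K, V>(HashMap<K, V>);

impl<K: Eq + Hash, V> QueryCache for HashCache<K, V> {
    type Key = K;
    type Value = V;

    fn lookup(&self, key: &K) -> Option<&V> {
        self.0.get(key)
    }

    fn complete(&mut self, key: K, value: V) {
        self.0.insert(key, value);
    }
}

/// Dense cache for queries keyed by a small integer index: no hashing,
/// but memory proportional to the largest index seen, so it only pays off
/// when the key space is densely populated.
struct VecCache<V>(Vec<Option<V>>);

impl<V> QueryCache for VecCache<V> {
    type Key = usize;
    type Value = V;

    fn lookup(&self, key: &usize) -> Option<&V> {
        self.0.get(*key)?.as_ref()
    }

    fn complete(&mut self, key: usize, value: V) {
        if self.0.len() <= key {
            self.0.resize_with(key + 1, || None);
        }
        self.0[key] = Some(value);
    }
}

fn main() {
    let mut dense = VecCache(Vec::new());
    dense.complete(3, "dense hit");
    assert_eq!(dense.lookup(&3), Some(&"dense hit"));

    let mut sparse = HashCache(HashMap::new());
    sparse.complete(1_000_000_u64, "sparse hit");
    assert_eq!(sparse.lookup(&1_000_000), Some(&"sparse hit"));
}
```

Under this scheme a query with a dense integer key space would opt into something like `VecCache`, while everything else keeps a hash-map-backed cache.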

@kennytm added the C-enhancement and T-compiler labels on Oct 16, 2017
@Enselic added the A-query-system label on Sep 26, 2023
bors added a commit to rust-lang-ci/rust that referenced this issue Jan 14, 2024
Cache DefId-keyed queries without hashing

Not yet ready for review:

* My guess is that this will be a significant memory-footprint hit for sparser queries and will require some more logic.
* Likely merits some further consideration for parallel rustc, though as noted in a separate comment, the existing IndexVec sharding looks useless to me (likely always selecting the same shard today in 99% of cases).

cc rust-lang#45275

r? `@ghost`
bors added a commit to rust-lang-ci/rust that referenced this issue Jan 15, 2024
Cache DefId-keyed queries without hashing

Not yet ready for review:

* My guess is that this will be a significant memory-footprint hit for sparser queries and will require some more logic.
* Likely merits some further consideration for parallel rustc, though as noted in a separate comment, the existing IndexVec sharding looks useless to me (likely always selecting the same shard today in 99% of cases).

Perf notes:

* rust-lang#119977 (comment) evaluated an `IndexVec<CrateNum, IndexVec<DefIndex, Option<(V, DepNodeIndex)>>>` scheme. This showed poor performance in incremental scenarios, as the `iter()` callbacks are slower when walking the sparse vecs. In `full` scenarios it was a win for many primary benchmarks (~1-6% instructions, ~1-10% cycles), but showed significant memory overhead (+50% on many benchmarks). The next attempt will (a) skip hashing for local storage (expected to be denser) while retaining hashing for foreign storage (expected to be sparse), and (b) keep a present Vec to speed up `iter()` callbacks (illustrated in the sketch after this commit message).

cc rust-lang#45275

r? `@ghost`
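
The "present Vec" mentioned in the perf notes addresses the cost of iterating a mostly empty table: without it, `iter()` has to scan every slot, including the empty ones. A hypothetical sketch of that idea (not the data structure that actually landed):

```rust
/// Vec-backed cache that also records which slots are occupied, so that
/// iteration only visits present entries instead of scanning every slot.
/// Hypothetical sketch of the "present Vec" idea.
struct VecCacheWithPresent<V> {
    slots: Vec<Option<V>>,
    present: Vec<usize>, // indices of occupied slots, in insertion order
}

impl<V> VecCacheWithPresent<V> {
    fn new() -> Self {
        VecCacheWithPresent { slots: Vec::new(), present: Vec::new() }
    }

    fn insert(&mut self, index: usize, value: V) {
        if self.slots.len() <= index {
            self.slots.resize_with(index + 1, || None);
        }
        if self.slots[index].is_none() {
            self.present.push(index);
        }
        self.slots[index] = Some(value);
    }

    fn get(&self, index: usize) -> Option<&V> {
        self.slots.get(index)?.as_ref()
    }

    /// Cost is proportional to the number of cached values, not to the
    /// size of the (possibly very sparse) index space.
    fn iter(&self) -> impl Iterator<Item = (usize, &V)> + '_ {
        self.present.iter().map(move |&i| (i, self.slots[i].as_ref().unwrap()))
    }
}

fn main() {
    let mut cache = VecCacheWithPresent::new();
    cache.insert(2, "a");
    cache.insert(10_000, "b");
    assert_eq!(cache.get(2), Some(&"a"));
    assert_eq!(cache.iter().count(), 2); // two entries despite 10_001 slots
}
```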
@bors closed this as completed in 098d4fd on Jan 17, 2024
lnicola pushed a commit to lnicola/rust-analyzer that referenced this issue Apr 7, 2024
Cache local DefId-keyed queries without hashing

This caches local DefId-keyed queries using just an IndexVec. This costs at most ~5% extra max-rss but brings a significant runtime improvement, up to 13% fewer cycles (mean: 4%) on primary benchmarks. Further tweaks could likely reduce the memory overhead, particularly by eliminating the present set in non-incremental builds or storing it inline (skip list?) with the main data, but this win seems worth landing despite the increased memory.

We tried applying this scheme to all keys in the [first perf run] but found that it carried a significant memory hit (50%). Instruction/cycle counts were also much more mixed, though that may have been due to the lack of the present-set optimization (needed for fast iter() calls in incremental scenarios).

Closes rust-lang/rust#45275

[first perf run]: https://perf.rust-lang.org/compare.html?start=30dfb9e046aeb878db04332c74de76e52fb7db10&end=6235575300d8e6e2cc6f449cb9048722ef43f9c7&stat=instructions:u
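
A rough sketch of the shape described in this commit message: local DefIds, whose DefIndex space is dense, go into a flat Vec, while the sparse set of foreign DefIds stays behind a hash map. The types below are simplified stand-ins (std `HashMap` instead of the compiler's maps, a made-up `DefIdCache` name), not rustc's actual implementation:

```rust
use std::collections::HashMap;

// Simplified stand-ins for rustc_span's types; by convention the local
// crate is crate number 0. Illustrative only.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct CrateNum(u32);

const LOCAL_CRATE: CrateNum = CrateNum(0);

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct DefId {
    krate: CrateNum,
    index: u32,
}

/// DefId-keyed cache: a flat Vec for local definitions (dense index space,
/// no hashing on the hot path) and a hash map for the sparse set of
/// foreign definitions.
struct DefIdCache<V> {
    local: Vec<Option<V>>,
    foreign: HashMap<DefId, V>,
}

impl<V> DefIdCache<V> {
    fn new() -> Self {
        DefIdCache { local: Vec::new(), foreign: HashMap::new() }
    }

    fn get(&self, key: DefId) -> Option<&V> {
        if key.krate == LOCAL_CRATE {
            self.local.get(key.index as usize)?.as_ref()
        } else {
            self.foreign.get(&key)
        }
    }

    fn insert(&mut self, key: DefId, value: V) {
        if key.krate == LOCAL_CRATE {
            let idx = key.index as usize;
            if self.local.len() <= idx {
                self.local.resize_with(idx + 1, || None);
            }
            self.local[idx] = Some(value);
        } else {
            self.foreign.insert(key, value);
        }
    }
}

fn main() {
    let mut cache = DefIdCache::new();
    cache.insert(DefId { krate: LOCAL_CRATE, index: 5 }, "local result");
    cache.insert(DefId { krate: CrateNum(3), index: 9 }, "foreign result");
    assert_eq!(cache.get(DefId { krate: LOCAL_CRATE, index: 5 }), Some(&"local result"));
    assert_eq!(cache.get(DefId { krate: CrateNum(3), index: 9 }), Some(&"foreign result"));
}
```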
RalfJung pushed a commit to RalfJung/rust-analyzer that referenced this issue Apr 27, 2024
Cache local DefId-keyed queries without hashing (same commit message as above)