Add post-mono MIR optimizations #131650


Open · wants to merge 10 commits into master

Conversation

@saethlin (Member) commented Oct 13, 2024

Before this PR, all MIR passes had to operate on polymorphic MIR. Thus any MIR transform may be unable to determine the type of an argument or local (because it's still generic), or it may be unable to determine which function a Call terminator is calling (because the callee is still generic).
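For a concrete illustration (my own example, not code from this PR): in the polymorphic MIR for a generic function, a `Call` terminator's callee may still be a trait method on a type parameter, so no pre-mono pass can tell which body it refers to; only after substitution does the target become a known instance.

```rust
trait Draw {
    fn draw(&self);
}

struct Circle;
impl Draw for Circle {
    fn draw(&self) {
        println!("circle");
    }
}

// In the polymorphic MIR for `render`, the Call terminator invokes
// `<T as Draw>::draw` with `T` still generic, so a MIR pass cannot
// resolve it to a concrete body.
fn render<T: Draw>(shape: &T) {
    shape.draw();
}

fn main() {
    // Monomorphization produces `render::<Circle>`, whose MIR calls the
    // known instance `<Circle as Draw>::draw`; a post-mono pass can see
    // and exploit that.
    render(&Circle);
}
```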

MIR transforms are a highly maintainable solution to a number of compiler problems, but this polymorphic limitation means that they cannot solve some of the problems we'd like them to; the most recent examples that come to mind are #134082, which has extra limitations because of the polymorphic inliner, and #139088, which is explicitly waiting for post-mono MIR passes to exist.

In addition, the lack of post-mono MIR optimizations means that the MIR optimizer simply misses profitable opportunities, which are valuable enough that we've added kludges like #121421 (a MIR traversal that you had better only run at mono time).

In addition, rustc_codegen_ssa is riddled with on-the-fly monomorphization and optimization; in my experience, the logic for these tricks we do during codegen is hard to maintain, and I would much rather have them implemented as MIR transforms.

So this PR adds a new query, `codegen_mir` (the MIR used for codegen; not that I like the name). I've then replaced some of the kludges in rustc_codegen_ssa with `PostMono` variants of existing MIR transforms.

I've also un-querified `check_mono_item` and put it at the end of the post-mono pass list. Those checks should become post-mono passes too, but I've tried to keep this PR to a reviewable size. It's easy to imagine lots of other places to use post-mono MIR opts, and I want the usefulness of this to be clear while the diff remains manageable.
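To make the overall shape concrete, here is a minimal, self-contained sketch of the idea (hypothetical types and pass names, not the actual `codegen_mir` implementation from this PR): a per-instance entry point that clones the optimized body, monomorphizes it, runs a post-mono pass list, and finishes with the mono-item checks.

```rust
// Hypothetical stand-ins for compiler types; the real query works on
// ty::Instance and mir::Body inside the compiler.
#[derive(Clone)]
struct Instance {
    name: String,
}

#[derive(Clone)]
struct Body {
    blocks: Vec<String>,
}

trait PostMonoPass {
    fn run(&self, instance: &Instance, body: &mut Body);
}

struct ExamplePostMonoOpt;
impl PostMonoPass for ExamplePostMonoOpt {
    fn run(&self, _instance: &Instance, _body: &mut Body) {
        // With all types concrete, a pass here can resolve call targets and
        // layouts that a polymorphic pass would have to give up on.
    }
}

struct CheckMonoItem;
impl PostMonoPass for CheckMonoItem {
    fn run(&self, _instance: &Instance, _body: &mut Body) {
        // The former `check_mono_item` logic, now just the last pass.
    }
}

// Rough shape of what a `codegen_mir`-style entry point does per instance.
fn codegen_mir(instance: &Instance, optimized_mir: &Body) -> Body {
    // Clone the polymorphic body and substitute the instance's generic
    // arguments (monomorphization), then run the post-mono pass list.
    let mut body = optimized_mir.clone();
    let passes: Vec<Box<dyn PostMonoPass>> =
        vec![Box::new(ExamplePostMonoOpt), Box::new(CheckMonoItem)];
    for pass in &passes {
        pass.run(instance, &mut body);
    }
    body
}

fn main() {
    let instance = Instance { name: "render::<Circle>".to_string() };
    let optimized = Body { blocks: vec!["bb0".to_string()] };
    let post_mono = codegen_mir(&instance, &optimized);
    println!("{}: {} blocks after post-mono passes", instance.name, post_mono.blocks.len());
}
```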


This PR has a perf regression. I've hammered on the perf in a number of ways to get it down to where it is. Incremental-full (incr-full) builds suffer the most because they need to clone, intern, and cache a monomorphized copy of every MIR body. Results are mixed for every other build scenario. In almost all cases, binary sizes improve.

@rustbot rustbot added S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. labels Oct 13, 2024

@rustbot rustbot added the PG-exploit-mitigations Project group: Exploit mitigations label Oct 13, 2024

@saethlin (Member, author)

@bors try @rust-timer queue


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Oct 13, 2024
@bors (Collaborator) commented Oct 13, 2024

⌛ Trying commit a211812 with merge b141564...

bors added a commit to rust-lang-ci/rust that referenced this pull request Oct 13, 2024
Add post-mono MIR passes to make mono-reachable analysis more accurate

r? ghost
@bors (Collaborator) commented Oct 13, 2024

☀️ Try build successful - checks-actions
Build commit: b141564 (b1415647cdfcdd1b8dc5ed5f9a5aba87ade0b225)


@rust-timer (Collaborator)

Finished benchmarking commit (b141564): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 12.2% | [0.2%, 93.7%] | 163 |
| Regressions ❌ (secondary) | 6.9% | [0.2%, 266.3%] | 119 |
| Improvements ✅ (primary) | -0.7% | [-3.0%, -0.2%] | 6 |
| Improvements ✅ (secondary) | -11.1% | [-33.8%, -0.2%] | 12 |
| All ❌✅ (primary) | 11.7% | [-3.0%, 93.7%] | 169 |

Max RSS (memory usage)

Results (primary 14.5%, secondary 1.7%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 14.5% | [0.7%, 56.9%] | 108 |
| Regressions ❌ (secondary) | 4.5% | [0.6%, 12.8%] | 34 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -22.2% | [-24.2%, -19.2%] | 4 |
| All ❌✅ (primary) | 14.5% | [0.7%, 56.9%] | 108 |

Cycles

Results (primary 22.8%, secondary 13.8%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 23.0% | [0.8%, 108.5%] | 111 |
| Regressions ❌ (secondary) | 19.4% | [1.0%, 223.4%] | 42 |
| Improvements ✅ (primary) | -3.0% | [-3.0%, -3.0%] | 1 |
| Improvements ✅ (secondary) | -33.2% | [-42.8%, -1.3%] | 5 |
| All ❌✅ (primary) | 22.8% | [-3.0%, 108.5%] | 112 |

Binary size

Results (primary -0.3%, secondary -2.3%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 0.8% | [0.0%, 2.3%] | 7 |
| Regressions ❌ (secondary) | 0.1% | [0.1%, 0.1%] | 1 |
| Improvements ✅ (primary) | -0.4% | [-1.7%, -0.0%] | 76 |
| Improvements ✅ (secondary) | -2.4% | [-25.8%, -0.0%] | 65 |
| All ❌✅ (primary) | -0.3% | [-1.7%, 2.3%] | 83 |

Bootstrap: 781.427s -> 807.023s (3.28%)
Artifact size: 331.96 MiB -> 332.21 MiB (0.08%)

@rustbot rustbot added perf-regression Performance regression. and removed S-waiting-on-perf Status: Waiting on a perf run to be completed. labels Oct 13, 2024
@saethlin (Member, author)

@bors try @rust-timer queue


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Oct 14, 2024
@bors (Collaborator) commented Oct 14, 2024

⌛ Trying commit 6f6737a with merge 9233d9f...

bors added a commit to rust-lang-ci/rust that referenced this pull request Oct 14, 2024
Add post-mono MIR passes to make mono-reachable analysis more accurate

r? ghost
@bors (Collaborator) commented Oct 14, 2024

☀️ Try build successful - checks-actions
Build commit: 9233d9f (9233d9f83ca672be3b2cfa697806fdb7c8970490)


@rust-timer (Collaborator)

Finished benchmarking commit (9233d9f): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 7.6% | [0.1%, 59.9%] | 151 |
| Regressions ❌ (secondary) | 2.9% | [0.2%, 18.7%] | 107 |
| Improvements ✅ (primary) | -3.0% | [-3.0%, -3.0%] | 1 |
| Improvements ✅ (secondary) | -6.6% | [-64.0%, -0.3%] | 11 |
| All ❌✅ (primary) | 7.5% | [-3.0%, 59.9%] | 152 |

Max RSS (memory usage)

Results (primary 11.3%, secondary 2.4%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 12.9% | [1.3%, 52.1%] | 93 |
| Regressions ❌ (secondary) | 3.6% | [2.2%, 5.9%] | 10 |
| Improvements ✅ (primary) | -2.7% | [-4.3%, -0.8%] | 10 |
| Improvements ✅ (secondary) | -3.4% | [-3.5%, -3.4%] | 2 |
| All ❌✅ (primary) | 11.3% | [-4.3%, 52.1%] | 103 |

Cycles

Results (primary 10.6%, secondary 3.2%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 10.7% | [1.0%, 50.1%] | 94 |
| Regressions ❌ (secondary) | 5.4% | [1.7%, 18.4%] | 37 |
| Improvements ✅ (primary) | -3.1% | [-3.1%, -3.1%] | 1 |
| Improvements ✅ (secondary) | -17.2% | [-62.3%, -1.6%] | 4 |
| All ❌✅ (primary) | 10.6% | [-3.1%, 50.1%] | 95 |

Binary size

Results (primary -0.1%, secondary -0.3%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 0.7% | [0.0%, 2.4%] | 9 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -0.2% | [-0.8%, -0.0%] | 69 |
| Improvements ✅ (secondary) | -0.3% | [-0.8%, -0.0%] | 51 |
| All ❌✅ (primary) | -0.1% | [-0.8%, 2.4%] | 78 |

Bootstrap: 782.104s -> 806.252s (3.09%)
Artifact size: 332.57 MiB -> 332.81 MiB (0.07%)

@rustbot rustbot removed the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Oct 14, 2024
@saethlin (Member, author)

@bors try @rust-timer queue


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Oct 24, 2024
bors added a commit to rust-lang-ci/rust that referenced this pull request Oct 24, 2024
Add post-mono MIR passes to make mono-reachable analysis more accurate

As of rust-lang#131650 (comment) I believe most of the incr overhead comes from re-computing, re-encoding, and loading a lot more MIR when all we're actually doing is traversing through it. I think that can be addressed by caching a query that looks up the mentioned/used items for an Instance.

I think the full-build regressions are pretty much just the expense of cloning, then monomorphizing, then caching the MIR.
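A rough sketch of that idea (hypothetical names, not part of this PR): cache just the per-instance list of mentioned/used items, so the mono-reachability walk doesn't have to re-monomorphize and re-load whole bodies.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for ty::Instance; the real version would be a
// proper incremental query keyed on the instance.
#[derive(Clone, PartialEq, Eq, Hash)]
struct Instance(String);

#[derive(Default)]
struct MentionedItemsCache {
    cache: HashMap<Instance, Vec<Instance>>,
}

impl MentionedItemsCache {
    fn mentioned_items(&mut self, instance: &Instance) -> &[Instance] {
        self.cache.entry(instance.clone()).or_insert_with(|| {
            // Done once per instance: walk the monomorphized body and record
            // which instances it calls or mentions, instead of keeping the
            // whole monomorphized MIR around just for traversal.
            Vec::new()
        })
    }
}
```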
@bors (Collaborator) commented Oct 24, 2024

⌛ Trying commit 4ae3542 with merge 174810c...

@saethlin (Member, author) commented May 1, 2025

@bors try @rust-timer queue


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label May 1, 2025
bors added a commit to rust-lang-ci/rust that referenced this pull request May 1, 2025
Add post-mono MIR optimizations

@bors (Collaborator) commented May 1, 2025

⌛ Trying commit 7c9a410 with merge 55dc6112586ae887a7811648b0a8a05e1ac90162...

@bors (Collaborator) commented May 1, 2025

☀️ Try build successful - checks-actions
Build commit: 55dc611 (55dc6112586ae887a7811648b0a8a05e1ac90162)


@rust-timer (Collaborator)

Finished benchmarking commit (55dc611): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 0.7% | [0.1%, 2.0%] | 81 |
| Regressions ❌ (secondary) | 1.2% | [0.1%, 19.1%] | 35 |
| Improvements ✅ (primary) | -0.3% | [-0.5%, -0.2%] | 12 |
| Improvements ✅ (secondary) | -0.7% | [-2.9%, -0.2%] | 7 |
| All ❌✅ (primary) | 0.6% | [-0.5%, 2.0%] | 93 |

Max RSS (memory usage)

Results (primary 5.3%, secondary 3.5%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 6.2% | [0.4%, 21.2%] | 64 |
| Regressions ❌ (secondary) | 3.5% | [2.7%, 5.2%] | 6 |
| Improvements ✅ (primary) | -2.0% | [-10.5%, -0.4%] | 8 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 5.3% | [-10.5%, 21.2%] | 72 |

Cycles

Results (primary 1.2%, secondary 1.6%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 1.2% | [0.4%, 3.5%] | 69 |
| Regressions ❌ (secondary) | 4.3% | [0.8%, 17.8%] | 7 |
| Improvements ✅ (primary) | -0.8% | [-1.0%, -0.6%] | 2 |
| Improvements ✅ (secondary) | -3.1% | [-4.0%, -1.8%] | 4 |
| All ❌✅ (primary) | 1.2% | [-1.0%, 3.5%] | 71 |

Binary size

Results (primary -0.3%, secondary -0.3%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 0.4% | [0.1%, 1.3%] | 7 |
| Regressions ❌ (secondary) | 0.1% | [0.1%, 0.1%] | 1 |
| Improvements ✅ (primary) | -0.3% | [-1.0%, -0.0%] | 91 |
| Improvements ✅ (secondary) | -0.4% | [-0.8%, -0.0%] | 54 |
| All ❌✅ (primary) | -0.3% | [-1.0%, 1.3%] | 98 |

Bootstrap: 767.686s -> 779.637s (1.56%)
Artifact size: 365.62 MiB -> 365.12 MiB (-0.14%)

@rustbot rustbot removed the S-waiting-on-perf Status: Waiting on a perf run to be completed. label May 1, 2025
@saethlin saethlin removed the PG-exploit-mitigations Project group: Exploit mitigations label May 1, 2025
@saethlin (Member, author) commented May 1, 2025

@bors try @rust-timer queue


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label May 1, 2025
@bors (Collaborator) commented May 1, 2025

⌛ Trying commit 88c82b1 with merge dded8b5...

bors added a commit to rust-lang-ci/rust that referenced this pull request May 1, 2025
Add post-mono MIR optimizations

@bors (Collaborator) commented May 2, 2025

☀️ Try build successful - checks-actions
Build commit: dded8b5 (dded8b513c1e02ad9c0ba757beae7259116852fb)


@rust-timer (Collaborator)

Finished benchmarking commit (dded8b5): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 0.7% | [0.1%, 1.9%] | 88 |
| Regressions ❌ (secondary) | 1.4% | [0.1%, 19.2%] | 25 |
| Improvements ✅ (primary) | -0.4% | [-0.6%, -0.2%] | 12 |
| Improvements ✅ (secondary) | -0.8% | [-2.9%, -0.3%] | 6 |
| All ❌✅ (primary) | 0.5% | [-0.6%, 1.9%] | 100 |

Max RSS (memory usage)

Results (primary 5.7%, secondary 1.9%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 6.9% | [0.4%, 28.8%] | 67 |
| Regressions ❌ (secondary) | 3.1% | [2.2%, 3.9%] | 6 |
| Improvements ✅ (primary) | -1.5% | [-10.7%, -0.4%] | 11 |
| Improvements ✅ (secondary) | -1.7% | [-2.0%, -1.4%] | 2 |
| All ❌✅ (primary) | 5.7% | [-10.7%, 28.8%] | 78 |

Cycles

Results (primary 1.2%, secondary 3.3%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 1.6% | [0.4%, 3.4%] | 49 |
| Regressions ❌ (secondary) | 3.9% | [0.9%, 17.9%] | 8 |
| Improvements ✅ (primary) | -0.7% | [-1.2%, -0.4%] | 9 |
| Improvements ✅ (secondary) | -1.9% | [-1.9%, -1.9%] | 1 |
| All ❌✅ (primary) | 1.2% | [-1.2%, 3.4%] | 58 |

Binary size

Results (primary -0.1%, secondary -0.1%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 0.2% | [0.0%, 1.9%] | 64 |
| Regressions ❌ (secondary) | 0.1% | [0.0%, 0.4%] | 40 |
| Improvements ✅ (primary) | -0.4% | [-1.0%, -0.0%] | 82 |
| Improvements ✅ (secondary) | -0.3% | [-0.8%, -0.0%] | 55 |
| All ❌✅ (primary) | -0.1% | [-1.0%, 1.9%] | 146 |

Bootstrap: 767.941s -> 778.442s (1.37%)
Artifact size: 365.55 MiB -> 365.05 MiB (-0.14%)

@rustbot rustbot removed the S-waiting-on-perf Status: Waiting on a perf run to be completed. label May 2, 2025
@saethlin (Member, author) commented May 3, 2025

@bors try @rust-timer queue


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label May 3, 2025
@bors (Collaborator) commented May 3, 2025

⌛ Trying commit ddd6abb with merge 45b94dd98018f2609b43a8d8e08ec5432b9e0647...

bors added a commit to rust-lang-ci/rust that referenced this pull request May 3, 2025
Add post-mono MIR optimizations

@bors (Collaborator) commented May 3, 2025

☀️ Try build successful - checks-actions
Build commit: 45b94dd (45b94dd98018f2609b43a8d8e08ec5432b9e0647)


@rust-timer (Collaborator)

Finished benchmarking commit (45b94dd): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 0.6% | [0.2%, 2.8%] | 73 |
| Regressions ❌ (secondary) | 1.6% | [0.1%, 19.0%] | 17 |
| Improvements ✅ (primary) | -0.3% | [-0.5%, -0.2%] | 5 |
| Improvements ✅ (secondary) | -0.5% | [-0.5%, -0.5%] | 1 |
| All ❌✅ (primary) | 0.5% | [-0.5%, 2.8%] | 78 |

Max RSS (memory usage)

Results (primary 4.6%, secondary 3.2%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 4.7% | [0.4%, 27.8%] | 111 |
| Regressions ❌ (secondary) | 3.2% | [2.3%, 4.0%] | 7 |
| Improvements ✅ (primary) | -1.5% | [-2.5%, -0.6%] | 2 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 4.6% | [-2.5%, 27.8%] | 113 |

Cycles

Results (primary 0.6%, secondary 9.9%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 1.1% | [0.4%, 2.7%] | 44 |
| Regressions ❌ (secondary) | 9.9% | [2.0%, 17.7%] | 2 |
| Improvements ✅ (primary) | -1.3% | [-4.2%, -0.4%] | 11 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 0.6% | [-4.2%, 2.7%] | 55 |

Binary size

Results (primary -0.1%, secondary 0.0%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|  | mean | range | count |
|:---|---:|---:|---:|
| Regressions ❌ (primary) | 0.1% | [0.0%, 1.3%] | 62 |
| Regressions ❌ (secondary) | 0.1% | [0.0%, 0.4%] | 40 |
| Improvements ✅ (primary) | -0.3% | [-1.0%, -0.0%] | 69 |
| Improvements ✅ (secondary) | -0.1% | [-0.4%, -0.0%] | 53 |
| All ❌✅ (primary) | -0.1% | [-1.0%, 1.3%] | 131 |

Bootstrap: 769.328s -> 778.616s (1.21%)
Artifact size: 365.53 MiB -> 365.24 MiB (-0.08%)

@rustbot rustbot removed the S-waiting-on-perf Status: Waiting on a perf run to be completed. label May 3, 2025
Labels
perf-regression Performance regression. S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue.