My experience using -DLLVM_BUILD_INSTRUMENTED_COVERAGE to generate coverage

I've started looking at the state of code coverage recently; we figured LLVM itself would be a good test to figure out how mature it is, so I gave it a shot. My experience:

1. You have to specify -DLLVM_USE_LINKER=gold (or maybe lld works; I didn't try). If you link with binutils ld, the program will generate broken profile information. Apparently, the linked binary is missing the __llvm_prf_names section. This took me half a day to figure out. This issue isn't documented anywhere, and the only error message I got was "Assertion `!Key.empty()' failed." from llvm-cov.

2. The generated binaries are big and slow. Comparing to a build without coverage, llc becomes 8x larger overall (text section becomes roughly 2x larger). And check-llvm-codegen-arm goes from 3 seconds to 250 seconds.

3. The generated profile information takes up a lot of space: llc generates a 90MB profraw file.

4. When prepare-code-coverage-artifact.py invokes llvm-profdata for the profiles generated by "make check", it takes 50GB of memory to process about 1.5GB of profiles. Is it supposed to use that much?

5. Using prepare-code-coverage-artifact.py generates "warning: 229 functions have mismatched data". I'm not sure what's causing this... I guess it has something to do with merging the profile data for multiple binaries? The error message is not very helpful.

6. The HTML output highlights the semicolon after a break or return statement in some switch statements in red. (For example, LowerADDC_ADDE_SUBC_SUBE in ARMISelLowering.cpp.) Not really important, but annoying.

7. On the bright side, when it works, the generated coverage information is precise and easy to read.
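For anyone reproducing this setup, a configure invocation matching the description above might look like the following (a sketch; the generator, build type, and directory layout are assumptions):

```shell
# Build LLVM with source-based coverage instrumentation, linking with gold
# to avoid the broken-profile issue described in point 1.
cmake -G Ninja ../llvm \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_BUILD_INSTRUMENTED_COVERAGE=On \
  -DLLVM_USE_LINKER=gold
```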

-Eli

Hi Eli,

Thanks for sharing your experience. I’d very much like to fix the problems you encountered.

I’ve started looking at the state of code coverage recently; we figured LLVM itself would be a good test to figure out how mature it is, so I gave it a shot.

You may already be aware of this, but for readers who are not, there is a public bot which produces coverage reports for llvm roughly twice a day. You can find it by visiting llvm.org and clicking on the “llvm-cov” link within the “Useful Links” box (in the “Dev. Resources” section). Coverage is gathered by running check-{llvm,clang,polly,lld} and the ‘nightly’ test suite.

My experience:

  1. You have to specify -DLLVM_USE_LINKER=gold (or maybe lld works; I didn’t try). If you link with binutils ld, the program will generate broken profile information. Apparently, the linked binary is missing the __llvm_prf_names section. This took me half a day to figure out. This issue isn’t documented anywhere, and the only error message I got was “Assertion `!Key.empty()’ failed.” from llvm-cov.

I expect llvm-cov to print out “Failed to load coverage: ” in this situation. There was some work done to tighten up error reporting in ProfileData and its clients in r270020. If your host toolchain does have these changes, please file a bug, and I’ll have it fixed.

I was not aware of the issue with the binutils linker. We do have some end-to-end, runtime tests in compiler-rt which use this linker, so this type of failure is surprising. I’ve CC’d David Li, who has some experience working with this linker, in case he has any insight about the issue.

If you are using a relatively up-to-date host toolchain, I’ll add a note to our docs suggesting that users use gold when compiling with coverage enabled.

  2. The generated binaries are big and slow. Comparing to a build without coverage, llc becomes 8x larger overall (text section becomes roughly 2x larger). And check-llvm-codegen-arm goes from 3 seconds to 250 seconds.

The binary size increase comes from coverage mapping data, counter increment instrumentation, and profiling metadata.

The coverage mapping section is highly compressible, but exploiting the compressibility has proven to be tricky. I filed: llvm.org/PR33499.
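As a toy illustration of why these name-heavy sections compress so well (the synthetic names below only mimic the flavor of __llvm_prf_names content; this is not the actual covmap encoding):

```shell
# Thousands of mangled-style names with long shared prefixes, like the
# contents of __llvm_prf_names, compress by an order of magnitude.
awk 'BEGIN { for (i = 0; i < 2000; i++)
             printf "_ZN4llvm16ARMTargetLowering%04dEv\n", i }' > names.txt
raw=$(wc -c < names.txt)
packed=$(gzip -9 -c names.txt | wc -c)
echo "raw=$raw compressed=$packed"
```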

Coverage makes use of frontend-based instrumentation, which is much less efficient than the IR-based kind. If we can find a way to map counters inserted by IR PGO to AST nodes, we could improve the situation. I filed: llvm.org/PR33500.

We can reduce testing time by not instrumenting basic tools like count, not, FileCheck, etc. I filed: llvm.org/PR33501.

  3. The generated profile information takes up a lot of space: llc generates a 90MB profraw file.

I don’t have any ideas about how to fix this. You can decrease the space overhead for raw profiles by altering LLVM_PROFILE_MERGE_POOL_SIZE from 4 to a lower value.
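For context, the raw-profile merge pool is selected on the user side via the %Nm pattern in LLVM_PROFILE_FILE; a sketch (the binary and file names are placeholders):

```shell
# Shrink the merge pool from the default of 4 to 1: all processes merge
# their counters into a single raw profile, reducing on-disk duplication.
export LLVM_PROFILE_FILE="llc-%1m.profraw"
```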

  4. When prepare-code-coverage-artifact.py invokes llvm-profdata for the profiles generated by “make check”, it takes 50GB of memory to process about 1.5GB of profiles. Is it supposed to use that much?

By default, llvm-profdata uses hardware_concurrency() to determine the number of threads to use to merge profiles. You can change the default by passing -j/--num-threads to llvm-profdata. I’m open to changing the ‘prep’ script to use -j4 or something like that.
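Concretely, a merge step with a bounded thread count might look like this (paths are placeholders):

```shell
# Cap llvm-profdata at 4 worker threads to bound peak memory while merging.
llvm-profdata merge -j4 -sparse -o merged.profdata profiles/*.profraw
```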

  5. Using prepare-code-coverage-artifact.py generates “warning: 229 functions have mismatched data”. I’m not sure what’s causing this… I guess it has something to do with merging the profile data for multiple binaries? The error message is not very helpful.

This is unexpected. I’ll try to reproduce this, and I’ll fix the diagnostic along the way. I filed: llvm.org/PR33502.

  6. The HTML output highlights the semicolon after a break or return statement in some switch statements in red. (For example, LowerADDC_ADDE_SUBC_SUBE in ARMISelLowering.cpp.) Not really important, but annoying.

I’m sure I’m sitting on a bug report about this already, but unfortunately haven’t had the time to get around to it.

  7. On the bright side, when it works, the generated coverage information is precise and easy to read.

Good to hear.

vedant

I've started looking at the state of code coverage recently; we figured
LLVM itself would be a good test to figure out how mature it is, so I gave
it a shot. My experience:

1. You have to specify -DLLVM_USE_LINKER=gold (or maybe lld works; I
didn't try). If you link with binutils ld, the program will generate
broken profile information. Apparently, the linked binary is missing the
__llvm_prf_names section. This took me half a day to figure out. This
issue isn't documented anywhere, and the only error message I got was
"Assertion `!Key.empty()' failed." from llvm-cov.

I believe the gnu-ld bug is binutils bug 19161 ("GNU ld wrongly garbage
collects section referenced via __start_SECTIONNAME"), which is fixed in
version 2.26.

2. The generated binaries are big and slow. Comparing to a build without
coverage, llc becomes 8x larger overall (text section becomes roughly 2x
larger). And check-llvm-codegen-arm goes from 3 seconds to 250 seconds.

Over the last couple of years, the instrumentation and coverage data overhead
has been reduced greatly. FE-based instrumentation in general has larger
overhead than IR-based instrumentation, but coverage testing currently
only works with FE instrumentation.

3. The generated profile information takes up a lot of space: llc
generates a 90MB profraw file.

This looks to be in the normal range for raw profile sizes.

David

Host toolchain is trunk clang… but using system binutils (which is 2.24 on my Ubuntu 14.04 system… and apparently that’s too old per David Li’s response). Anyway, filed .

If I’m cross-compiling for a target where the space matters, can I get rid of the data for the copy on the device using “strip -R __llvm_covmap” or something like that, then use llvm-cov on the original?

This would be nice… but I assume it’s hard. :slight_smile:

Disk space is cheap, but the I/O takes a long time. I guess it’s specifically bad for LLVM’s “make check”, maybe not so bad for other cases.

Oh, so it’s using a couple gigabytes per thread multiplied by 24 cores? Okay, now I’m not so worried. :slight_smile:

Host toolchain is trunk clang… but using system binutils (which is 2.24 on my Ubuntu 14.04 system… and apparently that’s too old per David Li’s response). Anyway, filed .

I’ve updated the clang docs re: ‘Source based code coverage’ to reflect this issue. I’ve also tightened up our error reporting a bit so we fail earlier with something better than an assertion message (r305765, r305767).

  2. The generated binaries are big and slow. Comparing to a build without coverage, llc becomes 8x larger overall (text section becomes roughly 2x larger). And check-llvm-codegen-arm goes from 3 seconds to 250 seconds.

The binary size increase comes from coverage mapping data, counter increment instrumentation, and profiling metadata.

The coverage mapping section is highly compressible, but exploiting the compressibility has proven to be tricky. I filed: llvm.org/PR33499.

If I’m cross-compiling for a target where the space matters, can I get rid of the data for the copy on the device using “strip -R __llvm_covmap” or something like that, then use llvm-cov on the original?

I haven’t tried this but I expect it to work. Instrumented programs don’t reference the __llvm_covmap section.
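A possible workflow based on this observation (file names are placeholders; untested, as noted above):

```shell
# Keep an unstripped copy for reporting; strip the coverage mapping from
# the binary that ships to the device.
cp llc llc.covmapped
strip -R __llvm_covmap llc
# Later, on the host, report against the unstripped copy:
llvm-cov report ./llc.covmapped -instr-profile=merged.profdata
```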

Coverage makes use of frontend-based instrumentation, which is much less efficient than the IR-based kind. If we can find a way to map counters inserted by IR PGO to AST nodes, we could improve the situation. I filed: llvm.org/PR33500.

This would be nice… but I assume it’s hard. :slight_smile:

It seems like it is. At a high level, you’d need some way to associate the counters placed by IR PGO instrumentation to the counters that clang expects to see while walking an AST. I don’t have a concrete design for this in mind.

We can reduce testing time by not instrumenting basic tools like count, not, FileCheck, etc. I filed: llvm.org/PR33501.

  3. The generated profile information takes up a lot of space: llc generates a 90MB profraw file.

I don’t have any ideas about how to fix this. You can decrease the space overhead for raw profiles by altering LLVM_PROFILE_MERGE_POOL_SIZE from 4 to a lower value.

Disk space is cheap, but the I/O takes a long time. I guess it’s specifically bad for LLVM’s “make check”, maybe not so bad for other cases.

You can speed up “make check” a bit by using non-instrumented versions of count, not, FileCheck, etc.

vedant


Ah, sorry for mentioning this twice.

On another note, I’m looking into the “N mismatched functions” warnings issue, and suspect that it happens when there are conflicting definitions of the same function in different binaries. The issue doesn’t seem to occur when using profiles from just one binary to generate a report for that binary. I’ll dig into this a bit more and update PR33502.

vedant

My experience:

1. You have to specify -DLLVM_USE_LINKER=gold (or maybe lld works; I
didn't try). If you link with binutils ld, the program will generate
broken profile information. Apparently, the linked binary is missing the
__llvm_prf_names section. This took me half a day to figure out. This
issue isn't documented anywhere, and the only error message I got was
"Assertion `!Key.empty()' failed." from llvm-cov.

I expect llvm-cov to print out "Failed to load coverage: <reason>" in this
situation. There was some work done to tighten up error reporting in
ProfileData and its clients in r270020. If your host toolchain does have
these changes, please file a bug, and I'll have it fixed.

Host toolchain is trunk clang... but using system binutils (which is 2.24
on my Ubuntu 14.04 system... and apparently that's too old per David Li's
response). Anyway, filed PR33517 ("Crash using llvm-cov with missing
function names in profile data").

I've updated the clang docs re: 'Source based code coverage' to reflect
this issue. I've also tightened up our error reporting a bit so we fail
earlier with something better than an assertion message (r305765,
r305767).

2. The generated binaries are big and slow. Comparing to a build without
coverage, llc becomes 8x larger overall (text section becomes roughly 2x
larger). And check-llvm-codegen-arm goes from 3 seconds to 250 seconds.

The binary size increase comes from coverage mapping data, counter
increment instrumentation, and profiling metadata.

The coverage mapping section is highly compressible, but exploiting the
compressibility has proven to be tricky. I filed: llvm.org/PR33499.

If I'm cross-compiling for a target where the space matters, can I get rid
of the data for the copy on the device using "strip -R __llvm_covmap" or
something like that, then use llvm-cov on the original?

I haven't tried this but I expect it to work. Instrumented programs don't
reference the __llvm_covmap section.

Right. The user can also use objcopy --only-section=__llvm_covmap <in> <out>
to copy the covmap section into a smaller file, and feed that to the
coverage tool later.

David

I tried looking into this a bit more. It looks like the profile data file generated by llc contains approximately 5MB of counters (__llvm_prf_cnts), 10MB of “data” (__llvm_prf_data), and 70MB of __llvm_prf_names. __llvm_prf_data and __llvm_prf_names contain data which can be read from the original binary, as far as I can tell.

The 80MB of data wouldn’t be a big deal if it were just sitting on disk… but we also erase the whole file and rewrite it from scratch after we merge profile counters. Can we do better here?

-Eli

We can reduce testing time by *not* instrumenting basic tools like count,
not, FileCheck etc. I filed: llvm.org/PR33501.

3. The generated profile information takes up a lot of space: llc
generates a 90MB profraw file.

I don't have any ideas about how to fix this. You can decrease the space
overhead for raw profiles by altering LLVM_PROFILE_MERGE_POOL_SIZE from 4
to a lower value.

Disk space is cheap, but the I/O takes a long time. I guess it's
specifically bad for LLVM's "make check", maybe not so bad for other cases.

You can speed up "make check" a bit by using non-instrumented versions of
count, not, FileCheck, etc.

I tried looking into this a bit more. It looks like the profile data file
generated by llc contains approximately 5MB of counters (__llvm_prf_cnts),
10MB of "data" (__llvm_prf_data), and 70MB of __llvm_prf_names.
__llvm_prf_data and __llvm_prf_names contain data which can be read from the
original binary, as far as I can tell. The 80MB of data wouldn't be a big
deal if it were just sitting on disk... but we also erase the whole file
and rewrite it from scratch after we merge profile counters.

Can we do better here?

yes, something can be done there. I will look into it.

David


Can you check if name compression is turned on in your build?

David


I think it is. At least, I didn’t intentionally turn it off, and examining the file with objdump I don’t see any uncompressed strings. Not sure if there’s any easy way to confirm that.


Just a little surprised at the size of __llvm_prf_names section. The llc I
built with IR PGO has a __llvm_prf_names section with size ~1.4MB. I
expect FE instrumentation to produce larger name section size, but not so
much bigger.

David

I had an old build of llc with FE instrumentation; its name section is about 5MB. Using coverage is likely to make the name section larger, since there are more references to dead/unused function names.

What do you see when you run

readelf --string-dump=__llvm_prf_names llc

David

I get a bunch of unreadable binary. Output piped to "less":

String dump of section '__llvm_prf_names':
   [ 2] #<CA>^Ex<DA><D4>is<E3>(^Z<EF>^O<DD>^P<A9><D5><F7><9B><CB><FB><A8>d<97>^U<96>O<9F><89><FB><85>AQ<B4>M<B5><B6>&)Wy~<FD>C^B\<B0>/$<A5><EA>s<E3><CE>L<97>E$^R<89>Dn<C8>L<84><FF>
<EF><C7>h<B7><FB><DC>ߊ,=<BC><BF>$ow~<B0>\<C4><FF>_X<FE><E0><C7><D1>&<C9>c<F4><E7>^_<AB><B0><F9>,`<BE>H^Oi1G<BF>{<E3><D7>({O<8A><E7>S<91>^^^O<B9>7<BB><CD><F7><F3>C^d<E7>}r("<F8>c^P P<83>
<D0><F3>`T^Z<ED><D2><FF>M<B2><F9>k^X^D/<8B><D5>8<A4><E1>N><A3><DD>9<C9><E7><DF><F1><F7>c^B38<9C><F7>^<BF><C2>ߕ^_<D6><F0>_<F2><BB>]<94><E7><C1><FD><E9><95>^A3<<9E>^\<B0>{\^O0<CC><C9>)<CA>r<84><D9>j^H<B3><DC><F9><F3><EF><B7><FE> <8C><E1>'L^Q<C9>"<F0><A7>"><E8><FF><DD>^^V^R<A4><D6><FC>d<EB>j*oHO<D5>^F<C2>p<D0>^U<82><EF>^Y!<90><9D>O5;:^TgL<F9>^Y<D3>z<D5>#=<81><E1><C3>6<C4>^\u%<85>7<EE>^Jaf^D0C<8B><8C>r^D^E<9D><B5>^W^L<A3>dx<FA><A3>1<FE><88>l^OO<AB>^Z<80>^\~^I<EE><DE>^O>]V~c<C9>^D<B7><88>[4|0^R<89><B5>ʳ<96><D7>Ӽ<E7><F9>^D<EB>^<BF><A5><9B>Mr^Ht<CC>A^PP<EF>
<8D>n<BA><89>q<91><DE>

-Eli
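One quick way to check for compression by eye (a sketch; section extraction details may vary): zlib streams written at the highest compression level begin with the bytes 0x78 0xDA, which is exactly the `x<DA>` visible near the start of the dump above.

```shell
# Extract the names section to a file and dump its first bytes; a leading
# 78 da pair suggests a zlib stream, i.e. name compression is on.
objcopy -O binary --only-section=__llvm_prf_names llc names.bin
od -An -tx1 -N16 names.bin
```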

With llc, the size of the names section can vary widely depending on the value of -DLLVM_TARGETS_TO_BUILD.

Enabling coverage shouldn’t increase the name section size much. I only see one place where this happens, and it’s relatively cold:
http://lab.llvm.org:8080/coverage/coverage-reports/llvm/coverage/Users/buildslave/jenkins/sharedspace/clang-stage2-coverage-R@2/llvm/lib/Transforms/Instrumentation/InstrProfiling.cpp.html#L512

This looks compressed to me.

vedant

With llc, the size of the names section can vary widely depending on the
value of -DLLVM_TARGETS_TO_BUILD.

Sure.

Enabling coverage shouldn't increase the name section size much. I only
see one place where this happens, and it's relatively cold:
http://lab.llvm.org:8080/coverage/coverage-reports/llvm/coverage/Users/
buildslave/jenkins/sharedspace/clang-stage2-coverage-R@2/llvm/lib/
Transforms/Instrumentation/InstrProfiling.cpp.html#L512

What is the data set used to create the coverage data there? It looks like
the lowerCoverageData function is called 6 times (i.e., only 6 modules have
coverage data) and, on average, each coverage record only references ~2
names. This does not seem typical.

I get a bunch of unreadable binary. Output piped to "less": [dump elided;
shown in full earlier in the thread]

This looks compressed to me.

Yes.

David

It’s check-llvm, check-clang, plus the test suite. lowerCoverageData doesn’t get called unless we need to emit empty counter mappings, for things like functions with empty bodies.

vedant

I’m using the default set of targets for now (-DLLVM_TARGETS_TO_BUILD not specified).

Are you sure it’s actually rare in practice? I can get roughly 20KB of data in the name section of an object file just by including llvm/IR/Module.h (without any other code in the file). I’ll experiment a bit more.

-Eli

Hm, I looked at CoverageMappingGen.cpp.o and found that enabling coverage increases the size of the profile names section from 21K to 254K. It’s more common than I expected.

vedant