Vectorization: Next Steps

As some of you may know, I committed my basic-block autovectorization
pass a few days ago. I encourage anyone interested to try it out (pass
-vectorize to opt or -mllvm -vectorize to clang) and provide feedback.
Especially in combination with -unroll-allow-partial, I have observed
some significant benchmark speedups, but, I have also observed some
significant slowdowns. I would like to share my thoughts, and hopefully
get feedback, on next steps.

1. "Target Data" for vectorization - I think that in order to improve
the vectorization quality, the vectorizer will need more information
about the target. This information could be provided in the form of a
kind of extended target data. This extended target data might contain:
- What basic types can be vectorized, and how many of them will fit
into (the largest) vector registers
- What classes of operations can be vectorized (division, conversions /
sign extension, etc. are not always supported)
- What alignment is necessary for loads and stores
- Is scalar-to-vector free?

2. Feedback between passes - We may want to implement a closer coupling
between optimization passes than currently exists. Specifically, I have
in mind two things:
- The vectorizer should communicate more closely with the loop
unroller. First, the loop unroller should try to unroll to preserve
maximal load/store alignments. Second, I think it would make a lot of
sense to be able to unroll and then keep the unrolled version in
preference to the original only if this helps vectorization. With basic
block vectorization, it is often necessary to (partially) unroll in
order to vectorize. Even when we also have real loop vectorization,
however, I still think that it will be important for the loop unroller
to communicate with the vectorizer.
- After vectorization, it would make sense for the vectorization pass
to request further simplification, but only on those parts of the code
that it modified.

3. Loop vectorization - It would be nice to have, in addition to
basic-block vectorization, a more-traditional loop vectorization pass. I
think that we'll need a better loop analysis pass in order for this to
happen. Some of this was started in LoopDependenceAnalysis, but that
pass is not yet finished. We'll need something like this to recognize
affine memory references, etc.

I look forward to hearing everyone's thoughts.

-Hal

Hi Hal,

As some of you may know, I committed my basic-block autovectorization
pass a few days ago. I encourage anyone interested to try it out (pass
-vectorize to opt or -mllvm -vectorize to clang) and provide feedback.
Especially in combination with -unroll-allow-partial, I have observed
some significant benchmark speedups, but, I have also observed some
significant slowdowns.

codegen for vector constructs is not always that great in my experience.
It could be that your vectorizer is doing the right thing, and it's
codegen that needs to be improved. For example when I use the GCC
autovectorizer I often see LLVM codegen unnecessarily scalarizing the
vector code. Did you try to analyse these slowdowns?

Ciao, Duncan.

Duncan,

I also noticed cases where vector IR is scalarized by the codegen. From what I have seen (which is based on a different vectorizer with a different code model, etc) there are two main areas for improvement:

1. Complex instructions - Instructions such as shuffles are very sensitive to the ability of the codegen to lower them. If a vectorizer generates shuffle instructions which are not handled properly by the manual lowering code, then the instruction is scalarized.

2. Instructions with mixed types - Instructions which operate on mixed types, such as 2xfloat->2xdouble, are usually scalarized by the type legalizer.

Nadav

Hi Nadav,

I also noticed cases where vector IR is scalarized by the codegen. From what I have seen (which is based on a different vectorizer with a different code model, etc) there are two main areas for improvement:

1. Complex instructions - Instructions such as shuffles are very sensitive to the ability of the codegen to lower them. If a vectorizer generates shuffle instructions which are not handled properly by the manual lowering code, then the instruction is scalarized.

2. Instructions with mixed types - Instructions which operate on mixed types, such as 2xfloat->2xdouble, are usually scalarized by the type legalizer.

yes, I was particularly thinking of the second type. I think we should add some
debugging hooks to codegen so that we can easily see when this kind of thing is
happening. I suspect that IR produced by the GCC vectorizer should never be
scalarized, because GCC only produces vector operations that are natively
supported (I could be wrong about this). If so, any scalarization done by LLVM
on such IR represents a codegen deficiency.

Ciao, Duncan.

Hi Hal,

> As some of you may know, I committed my basic-block autovectorization
> pass a few days ago. I encourage anyone interested to try it out (pass
> -vectorize to opt or -mllvm -vectorize to clang) and provide feedback.
> Especially in combination with -unroll-allow-partial, I have observed
> some significant benchmark speedups, but, I have also observed some
> significant slowdowns.

codegen for vector constructs is not always that great in my experience.
It could be that your vectorizer is doing the right thing, and it's
codegen that needs to be improved. For example when I use the GCC
autovectorizer I often see LLVM codegen unnecessarily scalarizing the
vector code. Did you try to analyse these slowdowns?

Ciao, Duncan.

Duncan,
  Is there a recommended approach for testing the new -vectorize support
within dragonegg?
               Jack

Hi Hal,

> As some of you may know, I committed my basic-block autovectorization
> pass a few days ago. I encourage anyone interested to try it out (pass
> -vectorize to opt or -mllvm -vectorize to clang) and provide feedback.
> Especially in combination with -unroll-allow-partial, I have observed
> some significant benchmark speedups, but, I have also observed some
> significant slowdowns.

codegen for vector constructs is not always that great in my experience.
It could be that your vectorizer is doing the right thing, and it's
codegen that needs to be improved. For example when I use the GCC
autovectorizer I often see LLVM codegen unnecessarily scalarizing the
vector code. Did you try to analyse these slowdowns?

There are a lot of them and I've only looked at a small fraction as of
yet. I have seen things that look like codegen deficiencies, but I've
not confirmed this in detail. One important case that I have noticed is
where the pass will vectorize sign-extended conversions, or int/float
conversions, etc. which end up being expensive to scalarize. There are
also cases where it vectorizes small integer operations which just get
scalarized by the codegen. I need to spend some more time looking at
this.

-Hal

Hi Jack,

As some of you may know, I committed my basic-block autovectorization
pass a few days ago. I encourage anyone interested to try it out (pass
-vectorize to opt or -mllvm -vectorize to clang) and provide feedback.
Especially in combination with -unroll-allow-partial, I have observed
some significant benchmark speedups, but, I have also observed some
significant slowdowns. I would like to share my thoughts, and hopefully
get feedback, on next steps.

Hi Hal,

I haven't had a chance to look at your pass in detail, but here are some opinions :-)

1. "Target Data" for vectorization - I think that in order to improve
the vectorization quality, the vectorizer will need more information
about the target. This information could be provided in the form of a
kind of extended target data. This extended target data might contain:
- What basic types can be vectorized, and how many of them will fit
into (the largest) vector registers
- What classes of operations can be vectorized (division, conversions /
sign extension, etc. are not always supported)
- What alignment is necessary for loads and stores
- Is scalar-to-vector free?

I think that this will be a really important API, but I strongly advocate that you model this after TargetLoweringInfo instead of TargetData. First, TargetData isn't actually a target API (it should be fixed, I filed PR11936 to track this). Second, targets will have to implement imperative code to return precise answers to questions. For example, you'll want something like "what is the cost of a shuffle with this mask" which will be extremely target specific, will depend on what CPU subfeatures are enabled, etc.

When you start working on this, I strongly encourage you to propose the API you want here. Start small and add features as you go.

2. Feedback between passes - We may want to implement a closer coupling
between optimization passes than currently exists. Specifically, I have
in mind two things:
- The vectorizer should communicate more closely with the loop
unroller. First, the loop unroller should try to unroll to preserve
maximal load/store alignments. Second, I think it would make a lot of
sense to be able to unroll and, only if this helps vectorization should
the unrolled version be kept in preference to the original. With basic
block vectorization, it is often necessary to (partially) unroll in
order to vectorize. Even when we also have real loop vectorization,
however, I still think that it will be important for the loop unroller
to communicate with the vectorizer.

I really disagree with this, see below.

3. Loop vectorization - It would be nice to have, in addition to
basic-block vectorization, a more-traditional loop vectorization pass. I
think that we'll need a better loop analysis pass in order for this to
happen. Some of this was started in LoopDependenceAnalysis, but that
pass is not yet finished. We'll need something like this to recognize
affine memory references, etc.

I think that a loop vectorizer and a basic block vectorizer both make perfect sense and are important for different classes of code. However, I don't think that we should go down the path of trying to use a "basic block vectorizer + loop unrolling" to serve the purpose of a loop vectorizer. Trying to make a BBVectorizer and a loop unroller play together will be really fragile, because they'll both have to duplicate the same metrics (otherwise, for example, you'd unroll a loop that isn't vectorizable). This will also be a huge hit to compile time.

-Chris

> As some of you may know, I committed my basic-block autovectorization
> pass a few days ago. I encourage anyone interested to try it out (pass
> -vectorize to opt or -mllvm -vectorize to clang) and provide feedback.
> Especially in combination with -unroll-allow-partial, I have observed
> some significant benchmark speedups, but, I have also observed some
> significant slowdowns. I would like to share my thoughts, and hopefully
> get feedback, on next steps.

Hi Hal,

I haven't had a chance to look at your pass in detail, but here are some opinions :-)

> 1. "Target Data" for vectorization - I think that in order to improve
> the vectorization quality, the vectorizer will need more information
> about the target. This information could be provided in the form of a
> kind of extended target data. This extended target data might contain:
> - What basic types can be vectorized, and how many of them will fit
> into (the largest) vector registers
> - What classes of operations can be vectorized (division, conversions /
> sign extension, etc. are not always supported)
> - What alignment is necessary for loads and stores
> - Is scalar-to-vector free?

I think that this will be a really important API, but I strongly advocate that you model this after TargetLoweringInfo instead of TargetData. First, TargetData isn't actually a target API (it should be fixed, I filed PR11936 to track this). Second, targets will have to implement imperative code to return precise answers to questions. For example, you'll want something like "what is the cost of a shuffle with this mask" which will be extremely target specific, will depend on what CPU subfeatures are enabled, etc.

This makes sense. What do you think will be the best way of
synchronizing things like CPU subfeatures between this API and the
backend target libraries? They could be linked directly, although I
don't know if we want to do that. tablegen could extract a bunch of this
information into separate objects that get linked into opt.

When you start working on this, I strongly encourage you to propose the API you want here. Start small and add features as you go.

> 2. Feedback between passes - We may want to implement a closer coupling
> between optimization passes than currently exists. Specifically, I have
> in mind two things:
> - The vectorizer should communicate more closely with the loop
> unroller. First, the loop unroller should try to unroll to preserve
> maximal load/store alignments. Second, I think it would make a lot of
> sense to be able to unroll and, only if this helps vectorization should
> the unrolled version be kept in preference to the original. With basic
> block vectorization, it is often necessary to (partially) unroll in
> order to vectorize. Even when we also have real loop vectorization,
> however, I still think that it will be important for the loop unroller
> to communicate with the vectorizer.

I really disagree with this, see below.

> 3. Loop vectorization - It would be nice to have, in addition to
> basic-block vectorization, a more-traditional loop vectorization pass. I
> think that we'll need a better loop analysis pass in order for this to
> happen. Some of this was started in LoopDependenceAnalysis, but that
> pass is not yet finished. We'll need something like this to recognize
> affine memory references, etc.

I think that a loop vectorizer and a basic block vectorizer both make perfect sense and are important for different classes of code. However, I don't think that we should go down the path of trying to use a "basic block vectorizer + loop unrolling" to serve the purpose of a loop vectorizer. Trying to make a BBVectorizer and a loop unroller play together will be really fragile, because they'll both have to duplicate the same metrics (otherwise, for example, you'd unroll a loop that isn't vectorizable). This will also be a huge hit to compile time.

The only problem with this comes from loops for which unrolling is
necessary to expose vectorization because the memory access pattern is
too complicated to model in more-traditional loop vectorization. This
generally is useful only in cases with a large number of flops per
memory operation (or maybe integer ops too, but I have less experience
with those), so maybe we can design a useful heuristic to handle those
cases. That having been said, unroll+(failed vectorize)+rollback is not
really any more expensive at compile time than unroll+(failed vectorize)
except that the resulting code would run faster (actually it is cheaper
to compile because the optimization/compilation of the unvectorized
unrolled loop code takes longer than the non-unrolled loop). There might
be a clean way of doing this; I'll think about it.

Thanks again,
Hal

1. "Target Data" for vectorization - I think that in order to improve
the vectorization quality, the vectorizer will need more information
about the target. This information could be provided in the form of a
kind of extended target data. This extended target data might contain:
- What basic types can be vectorized, and how many of them will fit
into (the largest) vector registers
- What classes of operations can be vectorized (division, conversions /
sign extension, etc. are not always supported)
- What alignment is necessary for loads and stores
- Is scalar-to-vector free?

I think that this will be a really important API, but I strongly advocate that you model this after TargetLoweringInfo instead of TargetData. First, TargetData isn't actually a target API (it should be fixed, I filed PR11936 to track this). Second, targets will have to implement imperative code to return precise answers to questions. For example, you'll want something like "what is the cost of a shuffle with this mask" which will be extremely target specific, will depend on what CPU subfeatures are enabled, etc.

This makes sense. What do you think will be the best way of
synchronizing things like CPU subfeatures between this API and the
backend target libraries? They could be linked directly, although I
don't know if we want to do that. tablegen could extract a bunch of this
information into separate objects that get linked into opt.

The best model we have at the moment is TargetLoweringInfo, as used by LoopStrengthReduction. The details of this interface aren't a great example to follow for a few reasons (e.g. it has SelectionDAG-specific stuff in it, which is a layering violation), but the idea is sound. This does mean that running "opt -vectorize foo.bc" would not get the same optimizations as running clang with the target you want enabled. We already have this problem with -loop-reduce, though.

I think that a loop vectorizer and a basic block vectorizer both make perfect sense and are important for different classes of code. However, I don't think that we should go down the path of trying to use a "basic block vectorizer + loop unrolling" to serve the purpose of a loop vectorizer. Trying to make a BBVectorizer and a loop unroller play together will be really fragile, because they'll both have to duplicate the same metrics (otherwise, for example, you'd unroll a loop that isn't vectorizable). This will also be a huge hit to compile time.

The only problem with this comes from loops for which unrolling is
necessary to expose vectorization because the memory access pattern is
too complicated to model in more-traditional loop vectorization. This
generally is useful only in cases with a large number of flops per
memory operation (or maybe integer ops too, but I have less experience
with those), so maybe we can design a useful heuristic to handle those
cases. That having been said, unroll+(failed vectorize)+rollback is not
really any more expensive at compile time than unroll+(failed vectorize)
except that the resulting code would run faster (actually it is cheaper
to compile because the optimization/compilation of the unvectorized
unrolled loop code takes longer than the non-unrolled loop). There might
be a clean way of doing this; I'll think about it.

I don't really understand the issue here, can you elaborate on when this might be a win? I really don't like "speculatively unroll, try to do something, then reroll". That is terrible for compile time and just strikes me as poor design :-)

-Chris

This seems a bit related to Resource-Directed Loop Pipelining [1] to me. RDLP uses loop unrolling in combination with loop shifting (or peeling) to map a loop body to a parallel architecture. It was originally focused on VLIW-like parallelism, but I think that a similar technique may be useful for vectorization.

Cheers,
Roel

[1] http://comjnl.oxfordjournals.org/content/40/6/311.short

I have a super-simple test case (a 4x4 matrix * 4-vector multiply) which gets correctly unrolled, but is not vectorized by -bb-vectorize (I used LLVM 3.1svn).
I attached the test case so you can see what is going wrong there.

2012/2/3 Hal Finkel <[email protected]>

matrix.c (443 Bytes)

>>> 1. "Target Data" for vectorization - I think that in order to improve
>>> the vectorization quality, the vectorizer will need more information
>>> about the target. This information could be provided in the form of a
>>> kind of extended target data. This extended target data might contain:
>>> - What basic types can be vectorized, and how many of them will fit
>>> into (the largest) vector registers
>>> - What classes of operations can be vectorized (division, conversions /
>>> sign extension, etc. are not always supported)
>>> - What alignment is necessary for loads and stores
>>> - Is scalar-to-vector free?
>>
>> I think that this will be a really important API, but I strongly advocate that you model this after TargetLoweringInfo instead of TargetData. First, TargetData isn't actually a target API (it should be fixed, I filed PR11936 to track this). Second, targets will have to implement imperative code to return precise answers to questions. For example, you'll want something like "what is the cost of a shuffle with this mask" which will be extremely target specific, will depend on what CPU subfeatures are enabled, etc.
>
> This makes sense. What do you think will be the best way of
> synchronizing things like CPU subfeatures between this API and the
> backend target libraries? They could be linked directly, although I
> don't know if we want to do that. tablegen could extract a bunch of this
> information into separate objects that get linked into opt.

The best model we have at the moment is TargetLoweringInfo, as used by LoopStrengthReduction. The details of this interface aren't a great example to follow for a few reasons (e.g. it has SelectionDAG-specific stuff in it, which is a layering violation), but the idea is sound. This does mean that running "opt -vectorize foo.bc" would not get the same optimizations as running clang with the target you want enabled. We already have this problem with -loop-reduce, though.

>> I think that a loop vectorizer and a basic block vectorizer both make perfect sense and are important for different classes of code. However, I don't think that we should go down the path of trying to use a "basic block vectorizer + loop unrolling" to serve the purpose of a loop vectorizer. Trying to make a BBVectorizer and a loop unroller play together will be really fragile, because they'll both have to duplicate the same metrics (otherwise, for example, you'd unroll a loop that isn't vectorizable). This will also be a huge hit to compile time.
>
> The only problem with this comes from loops for which unrolling is
> necessary to expose vectorization because the memory access pattern is
> too complicated to model in more-traditional loop vectorization. This
> generally is useful only in cases with a large number of flops per
> memory operation (or maybe integer ops too, but I have less experience
> with those), so maybe we can design a useful heuristic to handle those
> cases. That having been said, unroll+(failed vectorize)+rollback is not
> really any more expensive at compile time than unroll+(failed vectorize)
> except that the resulting code would run faster (actually it is cheaper
> to compile because the optimization/compilation of the unvectorized
> unrolled loop code takes longer than the non-unrolled loop). There might
> be a clean way of doing this; I'll think about it.

I don't really understand the issue here, can you elaborate on when this might be a win? I really don't like "speculatively unroll, try to do something, then reroll". That is terrible for compile time and just strikes me as poor design :-)

From Ayal's e-mail, it seems that the GCC vectorizer contains specialized unrolling code to handle these kinds of cases. With appropriate refactoring, perhaps that is the best solution.

-Hal

Carl-Philip,

The reason that this does not vectorize is that it cannot vectorize the
stores; this leaves only the mul-add chains (and some chains with
loads), and they only have a depth of 2 (the threshold is 6).

If you give clang -mllvm -bb-vectorize-req-chain-depth=2 then it will
vectorize. The reason the heuristic has such a large default value is to
prevent cases where it costs more to permute all of the necessary values
into and out of the vector registers than is saved by vectorizing. Does
the code generated with -bb-vectorize-req-chain-depth=2 run faster than
the unvectorized code?

The heuristic can certainly be improved, and these kinds of test cases
are very important to that improvement process.

-Hal

I will test your suggestion, but I designed the test case to load the memory directly into <4 x float> registers. So there are absolutely no permutations or other swizzle or move operations. Maybe the heuristic should not only count the depth but also the surrounding load/store operations.

Are the load/store operations vectorized, too? (I designed the test case to completely fit the SSE registers)

2012/2/10 Hal Finkel <[email protected]>

I will test your suggestion, but I designed the test case to load the
memory directly into <4 x float> registers. So there are absolutely no
permutations or other swizzle or move operations. Maybe the heuristic
should not only count the depth but also the surrounding load/store
operations.

I've attached two variants of your file, both of which vectorize as you'd
expect. The core difference between these and your original file is that
I added the 'restrict' keyword so that the compiler can assume that the
arrays don't alias (or, in the first case, I made them globals). You
also probably need to specify some alignment information, otherwise the
memory operations will be scalarized in codegen.

-Hal

matrix2.c (424 Bytes)

matrix3.c (480 Bytes)

>>> 1. "Target Data" for vectorization - I think that in order to improve
>>> the vectorization quality, the vectorizer will need more information
>>> about the target. This information could be provided in the form of a
>>> kind of extended target data. This extended target data might contain:
>>> - What basic types can be vectorized, and how many of them will fit
>>> into (the largest) vector registers
>>> - What classes of operations can be vectorized (division, conversions /
>>> sign extension, etc. are not always supported)
>>> - What alignment is necessary for loads and stores
>>> - Is scalar-to-vector free?
>>
>> I think that this will be a really important API, but I strongly advocate that you model this after TargetLoweringInfo instead of TargetData. First, TargetData isn't actually a target API (it should be fixed, I filed PR11936 to track this). Second, targets will have to implement imperative code to return precise answers to questions. For example, you'll want something like "what is the cost of a shuffle with this mask" which will be extremely target specific, will depend on what CPU subfeatures are enabled, etc.
>
> This makes sense. What do you think will be the best way of
> synchronizing things like CPU subfeatures between this API and the
> backend target libraries? They could be linked directly, although I
> don't know if we want to do that. tablegen could extract a bunch of this
> information into separate objects that get linked into opt.

The best model we have at the moment is TargetLoweringInfo, as used by LoopStrengthReduction. The details of this interface aren't a great example to follow for a few reasons (i.e. it has selectiondag specific stuff in it, which is a layering violation) but the idea is sound. This does mean that running "opt -vectorize foo.bc" would not get the same optimization as running clang with the target you want enabled though. We already have this problem with -loop-reduce though.

LoopStrengthReduction is currently created in
TargetPassConfig::addIRPasses (CodeGen/Passes.cpp). Currently the
vectorization pass is created in
PassManagerBuilder::populateModulePassManager (which is used by opt).
Are you suggesting that I move the vectorization pass creation into
CodeGen? Or are you saying that TLI will sometimes be available to the
pass, as it is now, when called from a full-compilation driver (like
clang)? Or are you suggesting that I propose some object like TLI that
might be available in 'opt' even though TLI itself is not available
there?

Thanks again,
Hal

Sure, I'm fully aware that a lot of compilers use the unroll-and-vectorize approach. However, just because other compilers do it doesn't mean it isn't a total hack ;-). Is there a principled reason that unrolling is a better approach?

-Chris

I'm suggesting that BBVectorize stay wherever it is (I still haven't looked at it, sorry) and that it use getAnalysis<TargetVectorInfo>(). When run from opt on the command line, it would use the default constructed implementation of TargetVectorInfo (just like "opt -loop-reduce" does). However, when run by clang, a target-specific TargetVectorInfo subclass would be added to the pass pipeline first if it is available.

-Chris

I tested the 'restrict' keyword and it works well :-)

The generated code is a bunch of shufflevector instructions, but after a second -O3 pass, everything looks fine.
This problem is described in my mailing-list post "passes propose passes" and occurs here again. LLVM has so many great passes, but they cannot run again once the code has been somewhat simplified :-(
Maybe that's one more reason to tell the pass scheduler to redo some passes to find all optimizations. The code really simplifies to what I expected.

2012/2/13 Hal Finkel <[email protected]>