[LoopVectorize] Don't discount instructions scalarized due to tail folding #109289

Open · wants to merge 6 commits into main
llvm/lib/Transforms/Vectorize/LoopVectorize.cpp (12 changes: 8 additions & 4 deletions)

@@ -5501,10 +5501,14 @@ InstructionCost LoopVectorizationCostModel::computePredInstDiscount(
     // Scale the total scalar cost by block probability.
     ScalarCost /= getReciprocalPredBlockProb();

-    // Compute the discount. A non-negative discount means the vector version
-    // of the instruction costs more, and scalarizing would be beneficial.
-    Discount += VectorCost - ScalarCost;
-    ScalarCosts[I] = ScalarCost;
+    // Compute the discount, unless this instruction must be scalarized due to
+    // tail folding, as then the vector cost is already the scalar cost. A
+    // non-negative discount means the vector version of the instruction costs
+    // more, and scalarizing would be beneficial.
+    if (!foldTailByMasking() || getWideningDecision(I, VF) != CM_Scalarize) {
Contributor:
I must admit I'm not too familiar with this code and I need some time to understand what effect this change has on the costs, but I'll take a deeper look next week!

Contributor:

OK, so this patch is saying that if a block needs predication due to tail-folding and we've already decided to scalarise the instruction for the vector VF, then we shouldn't apply a discount. However, it feels like there are two problems with this:

  1. I don't see why we should restrict this to tail-folding only. If we've also made the widening decision to scalarise for non-tail-folded loops then surely we'd also not want to calculate the discount?
  2. In theory the discount should really be close to zero if we've made the decision to scalarise. Unless I've misunderstood something, it seems like a more fundamental problem here is why VectorCost is larger than ScalarCost for the scenario you're interested in? Perhaps ScalarCost is too low?

I think it would be helpful to add some cost model tests to this patch that have some debug output showing how the costs for each VF change. What seems to be happening with this change is that we're now reporting higher loop costs for VF > 1, and this is leading to the decision not to vectorise at all.

Collaborator (Author):

  1. I don't see why we should restrict this to tail-folding only. If we've also made the widening decision to scalarise for non-tail-folded loops then surely we'd also not want to calculate the discount?

This is possibly true, but I tried that and there are a lot more test changes as a result, and from a brief look at them it wasn't immediately obvious whether they were better or worse.

  2. In theory the discount should really be close to zero if we've made the decision to scalarise. Unless I've misunderstood something, it seems like a more fundamental problem here is why VectorCost is larger than ScalarCost for the scenario you're interested in? Perhaps ScalarCost is too low?

Looking at low_trip_count_store in llvm/test/Transforms/LoopVectorize/AArch64/conditional-branches-cost.ll, the current sequence of events (for a vectorization factor of 4) is:

  • getMemInstScalarizationCost is called on the store, which needs to be scalarized and predicated because of tail folding by masking, and calculates a cost of 20, which includes the cost of the compare and branch into the predicated block.
  • computePredInstDiscount uses this as the VectorCost, since it goes through getInstructionCost, which for scalarized instructions returns the cost value recorded in InstsToScalarize.
  • The calculation of ScalarCost assumes that the predicated block already exists, so it only accounts for moving the instruction into the predicated block and scalarizing it there, which comes to 4 (for 4 copies of the store).
  • This results in a discount of 16, causing 20-16=4 to be used as the cost of the scalarized store.

ScalarCost is too low for the scalarized store because it assumes the predicated block already exists, but the cost of the predicated block is exactly what we need to take into account to avoid pointless tail folding by masking.
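
To make the arithmetic concrete, here is a minimal sketch of the calculation using the numbers above (the variable names echo computePredInstDiscount, but the literal costs are just this example's values at VF=4, not real cost model queries):

  #include "llvm/Support/InstructionCost.h"
  using llvm::InstructionCost;

  // Sketch only: the literals below are the example's costs.
  InstructionCost VectorCost = 20; // getMemInstScalarizationCost: 4 scalar
                                   // stores plus the compare/branch that
                                   // form each predicated block.
  InstructionCost ScalarCost = 4;  // Assumes the predicated blocks already
                                   // exist, so only the 4 scalar stores.
  // The discount pulls the store's cost back down to ScalarCost:
  InstructionCost Discount = VectorCost - ScalarCost; // 20 - 4 = 16
  InstructionCost CostUsed = VectorCost - Discount;   // 20 - 16 = 4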

I think it would be helpful to add some cost model tests to this patch that have some debug output showing how the costs for each VF change. What seems to be happening with this change is that we're now reporting higher loop costs for VF > 1, and this is leading to the decision not to vectorise at all.

I'll add these tests.
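
As a rough illustration, a cost model test of the kind suggested might look like the following (a hypothetical sketch: the RUN line, triple, and CHECK value are illustrative, not the actual test added to this patch):

  ; REQUIRES: asserts
  ; RUN: opt -passes=loop-vectorize -mtriple=aarch64 -debug-only=loop-vectorize \
  ; RUN:   -disable-output %s 2>&1 | FileCheck %s
  ;
  ; With this change the scalarized store keeps its full predication cost, so
  ; the cost reported for VF 4 should be higher than before:
  ; CHECK: LV: Found an estimated cost of {{[0-9]+}} for VF 4 For instruction: store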

+      Discount += VectorCost - ScalarCost;
+      ScalarCosts[I] = ScalarCost;
+    }
   }

   return Discount;