[mlir][linalg] Support pack consumer fusion with padding semantic for perfect tiling. #149600

Open
hanhanW wants to merge 1 commit into llvm:main from hanhanW:allow-non-extra-pad-with-perfect-tiling

Conversation

hanhanW
Contributor

If the op does not generate extra padding values (i.e., `destDimSize == ceilDiv(srcDimSize, innerTileSize)`), it is valid to fuse the pack consumer op.
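A minimal standalone sketch of the condition above (my own illustration, not code from the patch; the helper name and scalar types are hypothetical), checked against the shapes used in the updated test, where a tensor<64x32xf32> source is packed with inner tiles [3, 16] into a tensor<22x2x3x16xf32> destination:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helper mirroring the condition from the PR description:
// fusion is allowed when the destination dimension holds exactly
// ceilDiv(srcDimSize, innerTileSize) tiles, i.e. no tile consists of
// padding values only.
static bool hasNoExtraPadding(int64_t srcDimSize, int64_t innerTileSize,
                              int64_t destDimSize) {
  int64_t ceilDiv = (srcDimSize + innerTileSize - 1) / innerTileSize;
  return destDimSize == ceilDiv;
}

int main() {
  // Shapes from the test in the diff below.
  assert(hasNoExtraPadding(/*srcDimSize=*/64, /*innerTileSize=*/3, /*destDimSize=*/22));
  assert(hasNoExtraPadding(/*srcDimSize=*/32, /*innerTileSize=*/16, /*destDimSize=*/2));
  return 0;
}
```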

… perfect tiling.

If the op does not generate extra padding values (i.e., `destDimSize == ceilDiv(srcDimSize, innerTileSize)`), it is valid to fuse the pack consumer op.

Signed-off-by: hanhanW <hanhan0912@gmail.com>
@llvmbot
Member

@llvm/pr-subscribers-mlir-linalg

Author: Han-Chung Wang (hanhanW)

Changes

If the op does not generate extra padding values (i.e., `destDimSize == ceilDiv(srcDimSize, innerTileSize)`), it is valid to fuse the pack consumer op.


Full diff: https://github.com/llvm/llvm-project/pull/149600.diff

2 Files Affected:

  • (modified) mlir/lib/Dialect/Linalg/Transforms/TilingInterfaceImpl.cpp (+12-7)
  • (modified) mlir/test/Interfaces/TilingInterface/tile-and-fuse-consumer.mlir (+28-16)
diff --git a/mlir/lib/Dialect/Linalg/Transforms/TilingInterfaceImpl.cpp b/mlir/lib/Dialect/Linalg/Transforms/TilingInterfaceImpl.cpp
index 28d99b130963a..441d7b219d782 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/TilingInterfaceImpl.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/TilingInterfaceImpl.cpp
@@ -932,13 +932,18 @@ struct PackOpTiling
           continue;
         }
-        // If the dimension needs padding, it is not supported because there are
-        // iterations that only write padding values to the whole tile. The
-        // consumer fusion is driven by the source, so it is not possible to map
-        // an empty slice to the tile.
-        bool needExtraPadding =
-            ShapedType::isDynamic(destDimSize) || !cstInnerSize ||
-            destDimSize * cstInnerSize.value() != srcDimSize;
+        // If the dimension needs extra padding, it is not supported because
+        // there are iterations that only write padding values to the whole
+        // tile. The consumer fusion is driven by the source, so it is not
+        // possible to map an empty slice to the tile. Extra padding is not a
+        // regular form, and the implementation is being conservative.
+        bool needExtraPadding = true;
+        if (!ShapedType::isDynamic(srcDimSize) &&
+            !ShapedType::isDynamic(destDimSize) && cstInnerSize) {
+          needExtraPadding =
+              destDimSize >
+              (srcDimSize + cstInnerSize.value() - 1) / cstInnerSize.value();
+        }
         // Prioritize the case that the op already says that it does not need
         // padding.
         if (!packOp.getPaddingValue())
diff --git a/mlir/test/Interfaces/TilingInterface/tile-and-fuse-consumer.mlir b/mlir/test/Interfaces/TilingInterface/tile-and-fuse-consumer.mlir
index cdbca7228ded3..ecce40d1657fe 100644
--- a/mlir/test/Interfaces/TilingInterface/tile-and-fuse-consumer.mlir
+++ b/mlir/test/Interfaces/TilingInterface/tile-and-fuse-consumer.mlir
@@ -596,15 +596,16 @@ module attributes {transform.with_named_sequence} {
 // -----
 
 // It is valid to fuse the pack op with padding semantics if the tiled
-// dimensions do not need padding.
+// dimensions do not need extra padding and it is a perfect tiling case.
 
 func.func @fuse_pack_consumer_with_padding_semantics(%arg0: tensor<64x32xf32>, %arg1: tensor<64x32xf32>) -> tensor<22x2x3x16xf32> {
-  %0 = scf.forall (%arg2) = (0) to (32) step (16) shared_outs(%arg3 = %arg1) -> (tensor<64x32xf32>) {
-    %src = tensor.extract_slice %arg0[0, %arg2] [64, 16] [1, 1] : tensor<64x32xf32> to tensor<64x16xf32>
-    %dest = tensor.extract_slice %arg3[0, %arg2] [64, 16] [1, 1] : tensor<64x32xf32> to tensor<64x16xf32>
-    %2 = linalg.exp ins(%src : tensor<64x16xf32>) outs(%dest : tensor<64x16xf32>) -> tensor<64x16xf32>
+  %0 = scf.forall (%arg2, %arg3) = (0, 0) to (64, 32) step (15, 16) shared_outs(%arg4 = %arg1) -> (tensor<64x32xf32>) {
+    %size = affine.min affine_map<(d0) -> (-d0 + 64, 15)>(%arg2)
+    %src = tensor.extract_slice %arg0[%arg2, %arg3] [%size, 16] [1, 1] : tensor<64x32xf32> to tensor<?x16xf32>
+    %dest = tensor.extract_slice %arg4[%arg2, %arg3] [%size, 16] [1, 1] : tensor<64x32xf32> to tensor<?x16xf32>
+    %2 = linalg.exp ins(%src : tensor<?x16xf32>) outs(%dest : tensor<?x16xf32>) -> tensor<?x16xf32>
     scf.forall.in_parallel {
-      tensor.parallel_insert_slice %2 into %arg3[0, %arg2] [64, 16] [1, 1] : tensor<64x16xf32> into tensor<64x32xf32>
+      tensor.parallel_insert_slice %2 into %arg4[%arg2, %arg3] [%size, 16] [1, 1] : tensor<?x16xf32> into tensor<64x32xf32>
     }
   }
   %1 = tensor.empty() : tensor<22x2x3x16xf32>
@@ -621,28 +622,39 @@ module attributes {transform.with_named_sequence} {
     transform.yield
   }
 }
-//      CHECK: #[[PACK_RESULT_MAP:.*]] = affine_map<(d0) -> (d0 floordiv 16)>
+//  CHECK-DAG: #[[MAP0:.*]] = affine_map<(d0) -> (-d0 + 64, 15)>
+//  CHECK-DAG: #[[MAP1:.*]] = affine_map<(d0) -> (d0 floordiv 3)>
+//  CHECK-DAG: #[[MAP2:.*]] = affine_map<(d0) -> (d0 ceildiv 3)>
+//  CHECK-DAG: #[[MAP3:.*]] = affine_map<(d0) -> (d0 floordiv 16)>
 //      CHECK: func.func @fuse_pack_consumer_with_padding_semantics(
 // CHECK-SAME:     %[[ARG0:[a-zA-Z0-9]+]]
 // CHECK-SAME:     %[[ARG1:[a-zA-Z0-9]+]]
 //  CHECK-DAG:   %[[OUT_INIT:.*]] = tensor.empty() : tensor<22x2x3x16xf32>
 //  CHECK-DAG:   %[[PAD_VAL:.*]] = arith.constant 0.000000e+00 : f32
-//      CHECK:   %{{.*}}:2 = scf.forall (%[[IV:.*]]) = (0) to (32) step (16)
-// CHECK-SAME:      shared_outs(%[[FIRST_OUT_ARG:.*]] = %[[ARG1]], %[[PACK_OUT_ARG:.*]] = %[[OUT_INIT]])
-//      CHECK:      %[[ELEM_SRC:.*]] = tensor.extract_slice %[[ARG0]][0, %[[IV]]] [64, 16] [1, 1]
-//      CHECK:      %[[ELEM_DEST:.*]] = tensor.extract_slice %[[FIRST_OUT_ARG]][0, %[[IV]]] [64, 16] [1, 1]
+//      CHECK:   %{{.*}}:2 = scf.forall (%[[I:.*]], %[[J:.*]]) = (0, 0) to (64, 32) step (15, 16)
+// CHECK-SAME:      shared_outs(%[[ELEM_OUT:.*]] = %[[ARG1]], %[[PACK_OUT:.*]] = %[[OUT_INIT]])
+//      CHECK:      %[[SIZE:.+]] = affine.min #[[MAP0]](%[[I]])
+//      CHECK:      %[[ELEM_SRC:.*]] = tensor.extract_slice %[[ARG0]]
+// CHECK-SAME:        [%[[I]], %[[J]]] [%[[SIZE]], 16] [1, 1]
+//      CHECK:      %[[ELEM_DEST:.*]] = tensor.extract_slice %[[ELEM_OUT]]
+// CHECK-SAME:        [%[[I]], %[[J]]] [%[[SIZE]], 16] [1, 1]
 //      CHECK:      %[[ELEM:.*]] = linalg.exp
 // CHECK-SAME:        ins(%[[ELEM_SRC]]
 // CHECK-SAME:        outs(%[[ELEM_DEST]]
-//  CHECK-DAG:      %[[PACK_RESULT_OFFSET:.*]] = affine.apply #[[PACK_RESULT_MAP]](%[[IV]])
-//  CHECK-DAG:      %[[TILED_PACK_DEST:.*]] = tensor.extract_slice %[[PACK_OUT_ARG]][0, %[[PACK_RESULT_OFFSET]], 0, 0] [22, 1, 3, 16] [1, 1, 1, 1]
-//      CHECK:      %[[TILED_PACK_OUT:.*]] = linalg.pack %[[ELEM]]
+//  CHECK-DAG:      %[[D0_OFFSET:.*]] = affine.apply #[[MAP1]](%[[I]])
+//  CHECK-DAG:      %[[D0_SIZE:.*]] = affine.apply #[[MAP2]](%[[SIZE]])
+//  CHECK-DAG:      %[[D1_OFFSET:.*]] = affine.apply #[[MAP3]](%[[J]])
+//  CHECK-DAG:      %[[PACK_INIT:.*]] = tensor.extract_slice %[[PACK_OUT]]
+// CHECK-SAME:        [%[[D0_OFFSET]], %[[D1_OFFSET]], 0, 0] [%[[D0_SIZE]], 1, 3, 16] [1, 1, 1, 1]
+//      CHECK:      %[[PACK:.*]] = linalg.pack %[[ELEM]]
 // CHECK-SAME:        padding_value(%[[PAD_VAL]] : f32)
 // CHECK-SAME:        inner_dims_pos = [0, 1] inner_tiles = [3, 16]
 // CHECK-SAME:        into %[[TILED_PACK_DEST]]
 //      CHECK:      scf.forall.in_parallel {
-//      CHECK:          tensor.parallel_insert_slice %[[GENERIC_OUT]] into %[[FIRST_OUT_ARG]][0, %[[IV]]] [64, 16] [1, 1]
-//      CHECK:          tensor.parallel_insert_slice %[[TILED_PACK_OUT]] into %[[PACK_OUT_ARG]][0, %[[PACK_RESULT_OFFSET]], 0, 0] [22, 1, 3, 16] [1, 1, 1, 1]
+//      CHECK:          tensor.parallel_insert_slice %[[ELEM]] into %[[ELEM_OUT]]
+// CHECK-SAME:            [%[[I]], %[[J]]] [%[[SIZE]], 16] [1, 1]
+//      CHECK:          tensor.parallel_insert_slice %[[PACK]] into %[[PACK_OUT]]
+// CHECK-SAME:            [%[[D0_OFFSET]], %[[D1_OFFSET]], 0, 0] [%[[D0_SIZE]], 1, 3, 16] [1, 1, 1, 1]
 
 // -----
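As a side note, a small standalone sketch (my own illustration, not part of the patch) of the offset/size arithmetic the new CHECK maps encode for the outer dimension: the loop steps by 15 over the size-64 dimension, each source tile maps to packed offset `d0 floordiv 3` and packed size `d0 ceildiv 3`, and because the step (15) is a multiple of the inner tile (3), the per-iteration packed sizes are disjoint and add up to the 22 outer tiles of the destination, i.e. perfect tiling in the packed domain even though the last source tile is partial:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>

int main() {
  const int64_t dimSize = 64;   // outer dimension of the source tensor
  const int64_t step = 15;      // scf.forall step for that dimension
  const int64_t innerTile = 3;  // linalg.pack inner tile size
  int64_t totalPackedSize = 0;
  for (int64_t i = 0; i < dimSize; i += step) {
    int64_t size = std::min(dimSize - i, step);               // affine.min (-d0 + 64, 15)
    int64_t packedOffset = i / innerTile;                     // d0 floordiv 3
    int64_t packedSize = (size + innerTile - 1) / innerTile;  // d0 ceildiv 3
    std::printf("i=%2lld size=%2lld -> packed offset=%2lld, packed size=%lld\n",
                (long long)i, (long long)size, (long long)packedOffset,
                (long long)packedSize);
    totalPackedSize += packedSize;
  }
  // Prints 22, matching the outer dimension of tensor<22x2x3x16xf32>.
  std::printf("sum of packed sizes = %lld\n", (long long)totalPackedSize);
  return 0;
}
```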

@llvmbot
Member

@llvm/pr-subscribers-mlir


Comment on lines +935 to +946
        // If the dimension needs extra padding, it is not supported because
        // there are iterations that only write padding values to the whole
        // tile. The consumer fusion is driven by the source, so it is not
        // possible to map an empty slice to the tile. Extra padding is not a
        // regular form, and the implementation is being conservative.
        bool needExtraPadding = true;
        if (!ShapedType::isDynamic(srcDimSize) &&
            !ShapedType::isDynamic(destDimSize) && cstInnerSize) {
          needExtraPadding =
              destDimSize >
              (srcDimSize + cstInnerSize.value() - 1) / cstInnerSize.value();
        }
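Worked through with the shapes from the updated test (my reading of the hunk above, shown only as an illustration): srcDimSize = 64, innerTileSize = 3, destDimSize = 22, so (64 + 3 - 1) / 3 = 22 and needExtraPadding is false. A destination with 23 outer tiles would satisfy 23 > 22 and be rejected, and any dynamic srcDimSize or destDimSize, or a non-constant inner tile size, keeps the conservative default needExtraPadding = true.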
hanhanW (Contributor, Author)


This is going to be hard if we allow extra padding. E.g., it could easily result in dynamic shapes, even when it is known that the extra padding is not required after some tiling. I'm considering restricting the semantics a little. I'm working on a local patch and can send an RFC to Discourse.

cc @MaheshRavishankar @Max191 @egebeysel @adam-smnk @banach-space

hanhanW (Contributor, Author)


Upstream prototype: #149624

Testing with the IREE downstream project here: iree-org/iree#21424

I'll take a look at the result on Monday.

Reviewers

  • @adam-smnk: awaiting requested review
  • @egebeysel: awaiting requested review
  • @dcaballe: awaiting requested review (code owner)
  • @nicolasvasilache: awaiting requested review (code owner)
  • @rengolin: awaiting requested review (code owner)

2 participants: @hanhanW, @llvmbot
