dotnet/coreclr (Public archive)
This repository was archived by the owner on Jan 23, 2023. It is now read-only.

Implement the SSE hardware intrinsics. #15538

Merged

tannergooding merged 32 commits into dotnet:master from tannergooding:sse-intrinsics on Jan 17, 2018

Conversation

@tannergooding (Member) commented Dec 15, 2017 (edited):

This implements basic codegen support for all currently declared SSE intrinsics except for the Store intrinsics.

fiigii, bbowyersmyth, 4creators, omariom, iamcarbon, pentp, am11, nietras, and MendelMonteiro reacted with thumbs up emoji
{
    return IsAVXInstruction(ins) &&
           (ins == INS_movlpd || ins == INS_movlps || ins == INS_movhpd || ins == INS_movhps || ins == INS_movss ||
            ins == INS_movlhps || ins == INS_sqrtss || ins == INS_sqrtsd || ins == INS_cvtss2sd || ins == INS_cvtsd2ss);
@tannergooding (Member, Author):

INS_movlhps was incorrectly listed as DstSrcSrc, which would cause it to generate different results on AVX vs non-AVX machines.


void emitter::emitIns_SIMD_R_R_R_I(instruction ins, regNumber reg, regNumber reg1, regNumber reg2, int ival, var_types simdtype)
{
    if (UseVEXEncoding() && reg1 != reg)
@tannergooding (Member, Author):

For both this and emitIns_SIMD_R_R_R above: do we have a work item tracking support for reg2 being a memory operand when it is properly aligned?

Currently we are generating things like

C4E1791000           vmovupd  xmm0, xmmword ptr [rax]
C4E14858F0           vaddps   xmm6, xmm0

In some cases we could instead generate vaddps xmm6, [rax].

> reg2 being a memory operand when it is properly aligned?

And in what circumstances will the memory operand be aligned? Only for local variables I guess.

@tannergooding (Member, Author):

Yeah. It definitely can't be used everywhere, and it isn't something I think needs to happen immediately.

But, for the cases where the memory will always be aligned (locals, for example), it would be useful to generate the more efficient code.

> Only for local variables I guess

@tannergooding @mikedn

That is not true. In general, devs writing vectorized code peel off the start of loops to align data and do all the remaining calculations on aligned data until the end of the loops, where processing may require a change of logic.

This means that all data accessed in the main loops will be aligned, and doing anything but vaddps xmm6, [rax] would be an unacceptable waste of CPU cycles.
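As a rough illustration of the peeling pattern described above, here is a minimal C# sketch (an illustration only, not code from this PR: it uses the shipped System.Runtime.Intrinsics.X86 names such as LoadAlignedVector128 rather than the draft names discussed in this thread, and the AddInto helper is hypothetical):

using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static class AlignedLoop
{
    // Hypothetical example: dst[i] += src[i], peeling scalar iterations until
    // dst reaches a 16-byte boundary so the main loop can use aligned accesses.
    static unsafe void AddInto(float* dst, float* src, int length)
    {
        int i = 0;

        // Prologue: peel until dst + i sits on a 16-byte boundary.
        while (i < length && ((ulong)(dst + i) % 16) != 0)
        {
            dst[i] += src[i];
            i++;
        }

        // Main loop: dst is now aligned, so an aligned load/store is safe for it;
        // src may still be unaligned, so it uses an unaligned load.
        for (; i + 4 <= length; i += 4)
        {
            Vector128<float> a = Sse.LoadAlignedVector128(dst + i);
            Vector128<float> b = Sse.LoadVector128(src + i);
            Sse.StoreAligned(dst + i, Sse.Add(a, b));
        }

        // Epilogue: remaining scalar elements.
        for (; i < length; i++)
        {
            dst[i] += src[i];
        }
    }
}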

> In general devs writing vectorized code would peel off start of loops to align data

The key here is "in general". The JIT has no way to know whether the address is aligned or not, and it cannot make such assumptions simply because "in general" developers do that.

I suspect that it will work if the developer uses LoadAligned to indicate that the memory address is aligned.

@tannergooding (Member, Author):

I agree that, "in general", users writing performant code will try to keep accesses aligned to avoid reads/writes that cross cache line or page boundaries.

However, as @mikedn points out, assuming this would be incorrect, and detecting it is likely hard.

It's probably worth a separate discussion (will log a bug in a bit), but I believe it is something we should try to enable (maybe a special intrinsic, compiler hint, or attribute to tell the JIT that memory accesses will be aligned for a given method).

4creators reacted with thumbs up emoji

> It's probably worth a separate discussion (will log a bug in a bit)

That is the right approach, I think, as this is not that easy to get right.

emit->emitIns_SIMD_R_R_R(INS_addps, targetReg, op1Reg, op2Reg, TYP_SIMD16);
break;

case NI_SSE_And:
@tannergooding (Member, Author):

All the cases (so far) are essentially the same with only the instruction (or immediate value) differing.

It might be worthwhile including this info in the hwintrinsiclist and having them be entries in an array for constant-time lookup...


Yes, unlike "classic" instructions, SSE/AVX instructions have a more consistent format, so it should be possible to drive most codegen from tables. Basically you need to add to the list a "form" value that can be used to select between R_R_R and R_R_R_I and, of course, the instruction itself.

Well, there's also the problem of memory operands, but that's something that the IR node will indicate. So R_R_R in the hw intrinsic list really stands for R_R_RM.

@tannergooding (Member, Author):

I was actually just thinking of having a single array where the entries are {instruction, immediate}. Helper functions (like get_IsSupported) would have INS_INVALID, and instructions without an immediate would probably have -1.

There can then be two general groupings: one for instructions without an immediate (those that use SIMD_R_R_R) and one for instructions with an immediate (those that use SIMD_R_R_R_I).

Other instructions (such as Load) which have different semantics can be in their own case grouping.

I don't think we necessarily need a form value, unless you think that would be more efficient.
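As a rough sketch of the {instruction, immediate} table idea (written in C# only for illustration; the real table would live in the C++ hwintrinsiclist, and every name below is hypothetical):

enum HwIntrinsicId { Sse_Add, Sse_And, Sse_CompareEqual }

readonly struct HwIntrinsicEntry
{
    public readonly string Instruction; // stands in for the JIT's instruction enum
    public readonly int Immediate;      // -1 means "no immediate operand"

    public HwIntrinsicEntry(string instruction, int immediate = -1)
    {
        Instruction = instruction;
        Immediate = immediate;
    }
}

static class HwIntrinsicTable
{
    // Indexed by HwIntrinsicId, giving constant-time lookup.
    static readonly HwIntrinsicEntry[] Entries =
    {
        new HwIntrinsicEntry("addps"),               // no immediate: SIMD_R_R_R path
        new HwIntrinsicEntry("andps"),               // no immediate: SIMD_R_R_R path
        new HwIntrinsicEntry("cmpps", immediate: 0), // immediate: SIMD_R_R_R_I path (cmpeqps is cmpps with imm8 == 0)
    };

    public static HwIntrinsicEntry Lookup(HwIntrinsicId id) => Entries[(int)id];
}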

@fiigii commented Dec 15, 2017 (edited):

I do not want to tabulate the codegen of the hw intrinsics, which won't always be this simple.
For example, intrinsicID + baseType is not enough to determine codegen for some intrinsics; I would add a new field to GenTreeHWIntrinsic to represent overloads, which would be used here.

> I don't think we necessarily need a form value, unless you think that would be more efficient.

The form thing was just a suggestion. Any solution that cuts down the amount of repeated code should be good enough, provided that it doesn't cause other problems.

> I would add a new field into GenTreeHWIntrinsic to represent overloads that would be used here.

Hmm? It's not clear why that would be better.

@tannergooding (Member, Author):

I don't think we should do it for all instructions, but a number of them (especially the binary arithmetic and comparison operators) will all be fairly common and follow roughly the same code path.

Cutting down duplicate code will likely be worthwhile for these operations.

@fiigii commented Dec 15, 2017 (edited):

> Hmm? It's not clear why that would be better.

@mikedn For example, we need an additional field to distinguish these two intrinsics (intrinsicID + baseType is not enough).

Vector128<float> GatherVector128(float* baseAddress, Vector128<long> index, byte scale);
Vector128<float> GatherVector128(float* baseAddress, Vector128<int> index, byte scale);

My current plan is to add an overloads field in GenTreeHWIntrinsic to carry that information from the importer to codegen.

Here is the complete list of GatherVector128 overloads to show the necessity.

/// <summary>__m128i _mm_i32gather_epi32 (int const* base_addr, __m128i vindex, const int scale)</summary>
public static unsafe Vector128<int> GatherVector128(int* baseAddress, Vector128<int> index, byte scale) => GatherVector128(baseAddress, index, scale);

/// <summary>__m128i _mm_i32gather_epi32 (int const* base_addr, __m128i vindex, const int scale)</summary>
public static unsafe Vector128<uint> GatherVector128(uint* baseAddress, Vector128<int> index, byte scale) => GatherVector128(baseAddress, index, scale);

/// <summary>__m128i _mm_i32gather_epi64 (__int64 const* base_addr, __m128i vindex, const int scale)</summary>
public static unsafe Vector128<long> GatherVector128(long* baseAddress, Vector128<int> index, byte scale) => GatherVector128(baseAddress, index, scale);

/// <summary>__m128i _mm_i32gather_epi64 (__int64 const* base_addr, __m128i vindex, const int scale)</summary>
public static unsafe Vector128<ulong> GatherVector128(ulong* baseAddress, Vector128<int> index, byte scale) => GatherVector128(baseAddress, index, scale);

/// <summary>__m128 _mm_i32gather_ps (float const* base_addr, __m128i vindex, const int scale)</summary>
public static unsafe Vector128<float> GatherVector128(float* baseAddress, Vector128<int> index, byte scale) => GatherVector128(baseAddress, index, scale);

/// <summary>__m128d _mm_i32gather_pd (double const* base_addr, __m128i vindex, const int scale)</summary>
public static unsafe Vector128<double> GatherVector128(double* baseAddress, Vector128<int> index, byte scale) => GatherVector128(baseAddress, index, scale);

/// <summary>__m128i _mm_i64gather_epi32 (int const* base_addr, __m128i vindex, const int scale)</summary>
public static unsafe Vector128<int> GatherVector128(int* baseAddress, Vector128<long> index, byte scale) => GatherVector128(baseAddress, index, scale);

/// <summary>__m128i _mm_i64gather_epi32 (int const* base_addr, __m128i vindex, const int scale)</summary>
public static unsafe Vector128<uint> GatherVector128(uint* baseAddress, Vector128<long> index, byte scale) => GatherVector128(baseAddress, index, scale);

/// <summary>__m128i _mm_i64gather_epi64 (__int64 const* base_addr, __m128i vindex, const int scale)</summary>
public static unsafe Vector128<long> GatherVector128(long* baseAddress, Vector128<long> index, byte scale) => GatherVector128(baseAddress, index, scale);

/// <summary>__m128i _mm_i64gather_epi64 (__int64 const* base_addr, __m128i vindex, const int scale)</summary>
public static unsafe Vector128<ulong> GatherVector128(ulong* baseAddress, Vector128<long> index, byte scale) => GatherVector128(baseAddress, index, scale);

/// <summary>__m128 _mm_i64gather_ps (float const* base_addr, __m128i vindex, const int scale)</summary>
public static unsafe Vector128<float> GatherVector128(float* baseAddress, Vector128<long> index, byte scale) => GatherVector128(baseAddress, index, scale);

/// <summary>__m128d _mm_i64gather_pd (double const* base_addr, __m128i vindex, const int scale)</summary>
public static unsafe Vector128<double> GatherVector128(double* baseAddress, Vector128<long> index, byte scale) => GatherVector128(baseAddress, index, scale);

> intrinsicID + baseType is not enough

Hmm, can't we use a different intrinsic id for each overload?

> Hmm, can't we use a different intrinsic id for each overload?

That will break the intrinsic lookup, because a method name has to map to one intrinsic id in the current [Intrinsic] system.

HARDWARE_INTRINSIC(SSE_Multiply, "Multiply", SSE)
HARDWARE_INTRINSIC(SSE_Or, "Or", SSE)
HARDWARE_INTRINSIC(SSE_Reciprocal, "Reciprocal", SSE)
HARDWARE_INTRINSIC(SSE_ReciprocalSqrt, "ReciprocalSquareRoot", SSE)
@tannergooding (Member, Author):

This name is getting fixed in #15471; will update after it is merged.

@tannergooding (Member, Author) commented:

I split and categorized the hardware intrinsic tests to make it easier to tell what coverage exists for each ISA and each instruction.

A good portion of the code could be shared and it might be worth refactoring sooner rather than later.

@tannergooding (Member, Author) commented:

For the CompareGreaterThan intrinsics, the hardware manual recommends either using the inverse operations or using software emulation (swapping the operands and using a different predicate).

Using the inverse operations is preferred, correct? I was trying to think of reasons why software emulation would be suggested, but drew a blank....
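For what it's worth, the swapped-operand form is just the identity that a > b is the same as b < a. A minimal C# check of that equivalence, using the shipped CompareGreaterThan/CompareLessThan names (illustrative only; nothing here is taken from this PR):

using System;
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static class CompareGreaterThanDemo
{
    static void Main()
    {
        var a = Vector128.Create(1.0f, 5.0f, 3.0f, 7.0f);
        var b = Vector128.Create(2.0f, 4.0f, 3.0f, 6.0f);

        // a > b expressed directly...
        Vector128<float> direct = Sse.CompareGreaterThan(a, b);

        // ...and via the swapped operands with a different predicate: b < a.
        Vector128<float> swapped = Sse.CompareLessThan(b, a);

        // Both produce the same per-element all-ones/all-zeros mask.
        Console.WriteLine(Sse.MoveMask(direct) == Sse.MoveMask(swapped)); // True
    }
}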

@tannergooding (Member, Author) commented:

FYI @fiigii.

I figured I would start on this while waiting for the scalar intrinsic PR to get merged. I believe you are going to be focusing on AVX/AVX2, so I hopefully am not duplicating work you've already done here 😄

fiigii reacted with thumbs up, laugh, and heart emoji

void emitter::emitIns_SIMD_R_R_R_I(
instruction ins, regNumber reg, regNumber reg1, regNumber reg2, int ival, var_types simdtype)
{
if (UseVEXEncoding() && reg1 != reg)


Parenthesize reg1 != reg

@fiigii commented:

@tannergooding Thank you so much for the work!

> so I hopefully am not duplicating work you've already done here

My next step is to implement Load/Store of AVX/AVX2/SSE/SSE2, because these memory-access intrinsics need a bit more infrastructure.

I recommend you start with simple intrinsics that generate a single instruction (e.g., arithmetic, bit manipulation, and comparison). Then certain helper intrinsics can be expanded to combinations of these simple intrinsics.

tannergooding reacted with thumbs up emoji

@fiigii commented:

> For both this and emitIns_SIMD_R_R_R above: Do we have a work item tracking support for reg2 being a memory operand when it is properly aligned?

Yes, this feature is in my plan; we have to enable containment analysis on hardware intrinsics in the future. In C++, users can indicate this codegen by dereferencing pointers to vectors, but C# has no pointer to managed types, so this feature would considerably impact CQ.
BTW, not only "reg2 being a memory operand", but also reg1, to fold Store*.

@mikedn commented:

> but C# has no pointer to managed type

Hmm, it does have that - ref. Aka byref in the JIT.

@fiigii commented:

> this feature is in my plan, we have to enable containment analysis on hardware intrinsic in the future

To be clear.

var res = Avx.Add(vec1, Avx.Load(address));

should generate

vaddps ymm1, ymm2, [address]

@mikedn commented:

> To be clear...

Yes. Though I suppose it should also work with Avx.Add(vec1, *p) where p is a pointer of the appropriate vector type. In fact it's not clear if Load is actually needed. Only LoadAligned is obviously needed, because there's no way to encode the fact that the pointer is properly aligned in IL.
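A small C# sketch of the three forms being discussed (using the shipped LoadVector256/LoadAlignedVector256 names rather than the draft Load/LoadAligned names from this thread; whether any of these loads actually gets folded into the vaddps is up to the JIT's containment logic, so treat this purely as an illustration):

using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static class LoadForms
{
    static unsafe Vector256<float> Demo(Vector256<float> vec1, float* addr)
    {
        // 1. Explicit unaligned load.
        Vector256<float> viaLoad = Avx.Add(vec1, Avx.LoadVector256(addr));

        // 2. Explicit aligned load - the only way to tell the JIT that addr
        //    is 32-byte aligned.
        Vector256<float> viaLoadAligned = Avx.Add(vec1, Avx.LoadAlignedVector256(addr));

        // 3. Pointer dereference; Vector256<float> is an unmanaged type, so
        //    this is legal in unsafe code.
        Vector256<float> viaDeref = Avx.Add(vec1, *(Vector256<float>*)addr);

        return Avx.Add(Avx.Add(viaLoad, viaLoadAligned), viaDeref);
    }
}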

@fiigii commented Dec 15, 2017 (edited):

> Hmm, it does have that - ref. Aka byref in JIT.

But that requires overloading intrinsics with ref and allowing conversion of a ref element to/from ref Vector128/256<T>.

@mikedn commented:

> But that requires overloading intrinsics with ref and allow converting ref element to/from ref Vector128/256.

Only the load/store intrinsics would need that. I think that was discussed, but I don't remember how exactly we ended up only with pointer versions. Anyway, it doesn't matter too much; they can be added if deemed useful at some point.

@fiigii commented Dec 15, 2017 (edited):

> Only the load/store intrinsics would need that.

I meant C++ can compile:

// vec1 is a __m256 and addr is a float*
auto a = _mm256_add_ps(vec1, *(__m256*)addr);

to

vaddps ymm0, ymm0, ymmword ptr [addr]

But, in C#, we have to rely on folding Load into the consumer intrinsics.

@fiigii commented:

> Only LoadAligned is obviously needed because there's no way to encode the fact that the pointer is properly aligned in IL.

Most of the aligned and unaligned instructions have no difference in performance or function on modern Intel CPUs when the load/store falls within one cache line. Users should be responsible for the alignment of pointers.

@mikedn commented:

> I meant C++ can compile:

Yes, and similar code will work in C#, except that you can't generate addps xmm0, xmmword ptr [addr] because SSE2 requires memory operands to be aligned and the JIT has no way to know whether addr is 16-byte aligned or not.

> But, in C#, we have to rely on folding Load into the consumer intrinsics.

For SSE2, yes. Sse2.Add(v1, Sse2.LoadAligned(addr)) should generate addps xmm0, xmmword ptr [addr].

@fiigii commented:

> you can't generate addps xmm0, xmmword ptr [addr] because SSE2 requires memory operands to be aligned and the JIT has no way to know if addr is 16 byte aligned or not.

> Sse2.Add(v1, Sse2.LoadAligned(addr)) should generate addps xmm0, xmmword ptr [addr].

@mikedn addps does not require 16-byte alignment. Only Type 1 instructions require explicit alignment:
[screenshot: Intel manual table of instruction types and their memory alignment requirements]

@tannergooding (Member, Author) commented:

@fiigii, there are some instructions, like addsubpd, which explicitly indicate (on the actual instruction page) that the source operand must be aligned or a #GP will be generated.

It would be useful to determine which section of the manual is accurate.

fiigii reacted with thumbs up emoji

@mikedn commented:

Hmm, too lazy to dig through the docs at this hour. Let's test:

__m128 a[42];
__asm addps xmm0, xmmword ptr [a+1]

Crashes with "read access violation". Runs fine with [a] or [a+16].
But:

__m128 a[42];
__asm vaddps xmm0, xmm0, xmmword ptr [a+1]

Runs fine.

So yes, classic SSE2 (non-VEX-encoded) instructions require 16-byte alignment.

fiigii and pentp reacted with thumbs up emoji

@fiigii commented Dec 17, 2017 (edited):

@mikedn @tannergooding Sorry, I was wrong. Type 2 instructions also require alignment with the legacy SSE encoding:
[screenshot: Intel manual table showing the Type 2 instruction alignment requirements under the legacy SSE encoding]

@fiigii commented:

In summary, we should fold Load/Store for all VEX-encoded instructions.
BTW, @mikedn could you point me to any code from which I can learn how to enable containment analysis for SIMD operations? I am not very familiar with this part.

@mikedn commented:

> could you point any code that I can learn enabling containment analysis for SIMD operations

See the various ContainCheckX functions in lowerxarch.cpp. There's ContainCheckSIMD, but it's rather limited in what it does. See ContainCheckBinary too.

In short:

  • If the second operand is a memory operand (GT_IND usually), you can make it contained by calling MakeSrcContained.
  • If the second operand is a local variable, then you may call SetRegOptional to tell the register allocator that it does not need to load the variable into a register (if it's not already in a register).
  • If the first operand is a memory operand/local variable and the instruction is commutative, then you could try to swap the operands to take advantage of containment.
  • AFAIR there aren't many, if any, SSE/AVX instructions that have an RMW (read/modify/write) form (e.g. no addps xmmword ptr [mem], xmm0). That will simplify things, as handling this form is rather convoluted.
fiigii reacted with thumbs up emoji

@tannergooding (Member, Author) commented:

Note: CI results are inaccurate until https://github.com/dotnet/coreclr/issues/15618 is resolved, as the IsSupported check is returning false and no intrinsics actually have code being generated.

I am testing locally to ensure Windows x64 is working, but that is all the validation I am doing at this time.

@tannergooding (Member, Author) commented:

@fiigii, @mikedn:

Implemented SSE_Shuffle using GT_LIST for the operands. It's fairly straightforward and can be fairly well isolated from the other code paths.

@fiigii commented:

@tannergooding I will change the importer and codegen of the HW intrinsics to a table-driven implementation, so @mikedn's solution ("add an integer data member to GenTreeHWIntrinsic and store the immediate in it") may be better for unifying the exception code.

tannergooding reacted with thumbs up emoji

@tannergooding (Member, Author) commented:

Implemented the Set, SetAll, and SetZero intrinsics.

All that's left now is Load, Store, and StaticCast (and improving codegen in general).


argList = argList->Rest();
op3 = argList->Current();
ival = op3->AsIntConCommon()->IconValue();
@tannergooding (Member, Author):

This fails for the Shuffle_r test, but passes for the Shuffle_ro test. Seems to be because op3 is a GT_CAST when optimizations are not enabled.


👍

@tannergooding (Member, Author) commented Dec 27, 2017 (edited):

Looks like this is because we aren't forcing expansion of the intrinsics (on first pass), which causes other downstream processing on the args to occur.


assert(sig->numArgs == 1);
op1 = impSIMDPopStack(TYP_FLOAT);
retNode = gtNewSimdHWIntrinsicNode(TYP_SIMD16, op1, intrinsic, TYP_FLOAT, 16);
break;


I think all the helper intrinsics should be expanded to other "real" intrinsics in the importer. That would simplify the subsequent optimization work.

@tannergooding (Member, Author):

Do you mean expand to the existing SIMD intrinsics used by System.Numerics.Vector<T>?


No, I mean helper intrinsics should be converted to other HW intrinsics. For example, Sse.SetAll(v) may be expanded to Sse.Shuffle(aNewLocalVar(v), 0), and Sse.SetZero<float> may be expanded to Sse.Xor(v, v).
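Both expansions can be sanity-checked from managed code. A small sketch using the shipped API (Vector128.Create stands in for SetAll and Vector128<float>.Zero for SetZero; this only illustrates the equivalences, not the JIT expansion itself):

using System;
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static class HelperExpansionDemo
{
    static void Main()
    {
        Vector128<float> v = Vector128.Create(3.0f, 1.0f, 4.0f, 1.0f);

        // Shuffle with control byte 0 selects element 0 from both sources,
        // so Shuffle(v, v, 0) broadcasts v[0] into every lane - the core of
        // the SetAll expansion described above.
        Vector128<float> broadcast = Sse.Shuffle(v, v, 0);
        Console.WriteLine(broadcast); // <3, 3, 3, 3>

        // Xor of a value with itself is always zero - the SetZero expansion.
        Vector128<float> zero = Sse.Xor(v, v);
        Console.WriteLine(zero.Equals(Vector128<float>.Zero)); // True
    }
}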

@tannergooding (Member, Author) commented:

Had to resolve a merge conflict with the System.Math.Round intrinsic PR (#14736).

Went ahead and added the function header comments and changed isNumericType to varTypeIsArithmetic instead.

Will merge once tests are green.

4creators and fiigii reacted with thumbs up emoji

tannergooding merged commit e522589 into dotnet:master on Jan 17, 2018
@sandreenko commented:

I don't want to bother you, but frankenjit is broken again:
f:\codegenmirror\src\ndp\clr\src\jit\emitxarch.cpp(4140): error C3861: 'emitHandleMemOp': identifier not found [F:\CodegenMirror\src\NDP\clr\src\jit\frankenjit\frankenjit.nativeproj]

So new functions should be under #ifndef LEGACY_BACKEND.

@tannergooding (Member, Author) commented:

> I don't want to bother you, but frankenjit is broken again:

No worries. I should have remembered that I needed to ifdef the R_R_A_I method after just fixing the others.

Fix should be #15900. I tested locally on the TFS sources, and it looks like the JIT builds successfully.


Reviewers

@4creators left review comments
@fiigii requested changes
@mikedn left review comments
@CarolEidt approved these changes


7 participants

@tannergooding @fiigii @mikedn @4creators @CarolEidt @sandreenko @jkotas
