LLVM 20.0.0git
Public Member Functions | Static Public Member Functions | List of all members
llvm::RISCVTargetLowering Class Reference

#include "Target/RISCV/RISCVISelLowering.h"

Inheritance diagram for llvm::RISCVTargetLowering:

Public Member Functions

 RISCVTargetLowering (const TargetMachine &TM, const RISCVSubtarget &STI)
 
const RISCVSubtarget & getSubtarget () const
 
bool getTgtMemIntrinsic (IntrinsicInfo &Info, const CallInst &I, MachineFunction &MF, unsigned Intrinsic) const override
 Given an intrinsic, checks if on the target the intrinsic will need to map to a MemIntrinsicNode (touches memory).
 
bool isLegalAddressingMode (const DataLayout &DL, const AddrMode &AM, Type *Ty, unsigned AS, Instruction *I=nullptr) const override
 Return true if the addressing mode represented by AM is legal for this target, for a load/store of the specified type.
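 For orientation, a minimal self-contained sketch of the kind of check a base-register-plus-12-bit-immediate target performs in this hook. The struct and helper names are hypothetical stand-ins (mirroring the fields of TargetLoweringBase::AddrMode), not the in-tree RISCVTargetLowering implementation.
 
     #include <cstdint>
 
     // Hypothetical stand-in for TargetLoweringBase::AddrMode's fields.
     struct AddrModeSketch {
       bool HasBaseGV = false;  // addressing relative to a GlobalValue
       int64_t BaseOffs = 0;    // constant offset
       bool HasBaseReg = false; // a base register is present
       int64_t Scale = 0;       // scale applied to an index register
     };
 
     static bool fitsInSignedBits(int64_t V, unsigned Bits) {
       return V >= -(int64_t(1) << (Bits - 1)) && V < (int64_t(1) << (Bits - 1));
     }
 
     // RISC-V loads/stores only take "base register + 12-bit signed immediate",
     // so a global base or a scaled index register makes the mode illegal.
     static bool isLegalRegPlusImm12(const AddrModeSketch &AM) {
       return !AM.HasBaseGV && AM.Scale == 0 && fitsInSignedBits(AM.BaseOffs, 12);
     }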
 
bool isLegalICmpImmediate (int64_t Imm) const override
 Return true if the specified immediate is a legal icmp immediate, that is, the target has icmp instructions which can compare a register against the immediate without having to materialize the immediate into a register.
 
bool isLegalAddImmediate (int64_t Imm) const override
 Return true if the specified immediate is a legal add immediate, that is, the target has add instructions which can add a register with the immediate without having to materialize the immediate into a register.
 
bool isTruncateFree (Type *SrcTy, Type *DstTy) const override
 Return true if it's free to truncate a value of type FromTy to type ToTy.
 
bool isTruncateFree (EVT SrcVT, EVT DstVT) const override
 
bool isTruncateFree (SDValue Val, EVT VT2) const override
 Return true if truncating the specific node Val to type VT2 is free.
 
bool isZExtFree (SDValue Val, EVT VT2) const override
 Return true if zero-extending the specific node Val to type VT2 is free (either because it's implicitly zero-extended, such as ARM ldrb / ldrh, or because it's folded, such as X86 zero-extending loads).
 
bool isSExtCheaperThanZExt (EVT SrcVT, EVT DstVT) const override
 Return true if sign-extension from FromTy to ToTy is cheaper than zero-extension.
 
bool signExtendConstant (const ConstantInt *CI) const override
 Return true if this constant should be sign extended when promoting to a larger type.
 
bool isCheapToSpeculateCttz (Type *Ty) const override
 Return true if it is cheap to speculate a call to intrinsic cttz.
 
bool isCheapToSpeculateCtlz (Type *Ty) const override
 Return true if it is cheap to speculate a call to intrinsic ctlz.
 
bool isMaskAndCmp0FoldingBeneficial (const Instruction &AndI) const override
 Return if the target supports combining an and-with-immediate followed by a compare against zero into a single instruction.
 
bool hasAndNotCompare (SDValue Y) const override
 Return true if the target should transform: (X & Y) == Y --> (~X & Y) == 0 and (X & Y) != Y --> (~X & Y) != 0.
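 A quick self-contained check of the equivalence behind this rewrite, which pays off on targets with an and-not instruction (for example Zbb's andn). Illustrative only:
 
     #include <cassert>
     #include <cstdint>
 
     int main() {
       // (X & Y) == Y  <=>  Y's set bits are a subset of X's  <=>  (~X & Y) == 0.
       for (uint32_t X = 0; X < 64; ++X)
         for (uint32_t Y = 0; Y < 64; ++Y)
           assert(((X & Y) == Y) == ((~X & Y) == 0));
       return 0;
     }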
 
bool hasBitTest (SDValue X, SDValue Y) const override
 Return true if the target has a bit-test instruction: (X & (1 << Y)) ==/!= 0 This knowledge can be used to prevent breaking the pattern, or creating it if it could be recognized.
 
bool shouldProduceAndByConstByHoistingConstFromShiftsLHSOfAnd (SDValue X, ConstantSDNode *XC, ConstantSDNode *CC, SDValue Y, unsigned OldShiftOpcode, unsigned NewShiftOpcode, SelectionDAG &DAG) const override
 Given the pattern (X & (C l>>/<< Y)) ==/!= 0, return true if it should be transformed into ((X <</l>> Y) & C) ==/!= 0. WARNING: if 'X' is a constant, the fold may deadlock! FIXME: we could avoid passing XC, but we can't use isConstOrConstSplat() here because it can end up being not linked in.
 
bool shouldScalarizeBinop (SDValue VecOp) const override
 Try to convert an extract element of a vector binary operation into an extract element followed by a scalar operation.
 
bool isOffsetFoldingLegal (const GlobalAddressSDNode *GA) const override
 Return true if folding a constant offset with the given GlobalAddress is legal.
 
int getLegalZfaFPImm (const APFloat &Imm, EVT VT) const
 
bool isFPImmLegal (const APFloat &Imm, EVT VT, bool ForCodeSize) const override
 Returns true if the target can instruction select the specified FP immediate natively.
 
bool isExtractSubvectorCheap (EVT ResVT, EVT SrcVT, unsigned Index) const override
 Return true if EXTRACT_SUBVECTOR is cheap for extracting this result type from this source type with this index.
 
bool isIntDivCheap (EVT VT, AttributeList Attr) const override
 Return true if integer divide is usually cheaper than a sequence of several shifts, adds, and multiplies for this target.
 
bool preferScalarizeSplat (SDNode *N) const override
 
bool softPromoteHalfType () const override
 
MVT getRegisterTypeForCallingConv (LLVMContext &Context, CallingConv::ID CC, EVT VT) const override
 Return the register type for a given MVT, ensuring vectors are treated as a series of gpr sized integers.
 
unsigned getNumRegisters (LLVMContext &Context, EVT VT, std::optional< MVT > RegisterVT=std::nullopt) const override
 Return the number of registers for a given MVT, for inline assembly.
 
unsigned getNumRegistersForCallingConv (LLVMContext &Context, CallingConv::ID CC, EVT VT) const override
 Return the number of registers for a given MVT, ensuring vectors are treated as a series of gpr sized integers.
 
unsigned getVectorTypeBreakdownForCallingConv (LLVMContext &Context, CallingConv::ID CC, EVT VT, EVT &IntermediateVT, unsigned &NumIntermediates, MVT &RegisterVT) const override
 Certain targets such as MIPS require that some types such as vectors are always broken down into scalars in some contexts.
 
bool shouldFoldSelectWithIdentityConstant (unsigned BinOpcode, EVT VT) const override
 Return true if pulling a binary operation into a select with an identity constant is profitable.
 
bool isShuffleMaskLegal (ArrayRef< int > M, EVT VT) const override
 Return true if the given shuffle mask can be codegen'd directly, or if it should be stack expanded.
 
bool isMultiStoresCheaperThanBitsMerge (EVT LTy, EVT HTy) const override
 Return true if it is cheaper to split the store of a merged int val from a pair of smaller values into multiple stores.
 
bool shouldExpandBuildVectorWithShuffles (EVT VT, unsigned DefinedValues) const override
 
bool shouldExpandCttzElements (EVT VT) const override
 Return true if the @llvm.experimental.cttz.elts intrinsic should be expanded using generic code in SelectionDAGBuilder.
 
InstructionCost getLMULCost (MVT VT) const
 Return the cost of LMUL for linear operations.
 
InstructionCost getVRGatherVVCost (MVT VT) const
 Return the cost of a vrgather.vv instruction for the type VT.
 
InstructionCost getVRGatherVICost (MVT VT) const
 Return the cost of a vrgather.vi (or vx) instruction for the type VT.
 
InstructionCost getVSlideVXCost (MVT VT) const
 Return the cost of a vslidedown.vx or vslideup.vx instruction for the type VT.
 
InstructionCost getVSlideVICost (MVT VT) const
 Return the cost of a vslidedown.vi or vslideup.vi instruction for the type VT.
 
SDValue LowerOperation (SDValue Op, SelectionDAG &DAG) const override
 This callback is invoked for operations that are unsupported by the target, which are registered to use 'custom' lowering, and whose defined values are all legal.
 
void ReplaceNodeResults (SDNode *N, SmallVectorImpl< SDValue > &Results, SelectionDAG &DAG) const override
 This callback is invoked when a node result type is illegal for the target, and the operation was registered to use 'custom' lowering for that result type.
 
SDValue PerformDAGCombine (SDNode *N, DAGCombinerInfo &DCI) const override
 This method will be invoked for all target nodes and for any target-independent nodes that the target has registered with invoke it for.
 
bool targetShrinkDemandedConstant (SDValue Op, const APInt &DemandedBits, const APInt &DemandedElts, TargetLoweringOpt &TLO) const override
 
void computeKnownBitsForTargetNode (const SDValue Op, KnownBits &Known, const APInt &DemandedElts, const SelectionDAG &DAG, unsigned Depth) const override
 Determine which of the bits specified in Mask are known to be either zero or one and return them in the KnownZero/KnownOne bitsets.
 
unsigned ComputeNumSignBitsForTargetNode (SDValue Op, const APInt &DemandedElts, const SelectionDAG &DAG, unsigned Depth) const override
 This method can be implemented by targets that want to expose additional information about sign bits to the DAGCombiner.
 
bool canCreateUndefOrPoisonForTargetNode (SDValue Op, const APInt &DemandedElts, const SelectionDAG &DAG, bool PoisonOnly, bool ConsiderFlags, unsigned Depth) const override
 Return true if Op can create undef or poison from non-undef & non-poison operands.
 
const Constant * getTargetConstantFromLoad (LoadSDNode *LD) const override
 This method returns the constant pool value that will be loaded by LD.
 
const char * getTargetNodeName (unsigned Opcode) const override
 This method returns the name of a target specific DAG node.
 
MachineMemOperand::Flags getTargetMMOFlags (const Instruction &I) const override
 This callback is used to inspect load/store instructions and add target-specific MachineMemOperand flags to them.
 
MachineMemOperand::Flags getTargetMMOFlags (const MemSDNode &Node) const override
 This callback is used to inspect load/store SDNodes.
 
bool areTwoSDNodeTargetMMOFlagsMergeable (const MemSDNode &NodeX, const MemSDNode &NodeY) const override
 Return true if it is valid to merge the TargetMMOFlags in two SDNodes.
 
ConstraintType getConstraintType (StringRef Constraint) const override
 getConstraintType - Given a constraint letter, return the type of constraint it is for this target.
 
InlineAsm::ConstraintCode getInlineAsmMemConstraint (StringRef ConstraintCode) const override
 
std::pair< unsigned, const TargetRegisterClass * > getRegForInlineAsmConstraint (const TargetRegisterInfo *TRI, StringRef Constraint, MVT VT) const override
 Given a physical register constraint (e.g. {edx}), return the register number and the register class for the register.
 
void LowerAsmOperandForConstraint (SDValue Op, StringRef Constraint, std::vector< SDValue > &Ops, SelectionDAG &DAG) const override
 Lower the specified operand into the Ops vector.
 
MachineBasicBlock * EmitInstrWithCustomInserter (MachineInstr &MI, MachineBasicBlock *BB) const override
 This method should be implemented by targets that mark instructions with the 'usesCustomInserter' flag.
 
void AdjustInstrPostInstrSelection (MachineInstr &MI, SDNode *Node) const override
 This method should be implemented by targets that mark instructions with the 'hasPostISelHook' flag.
 
EVT getSetCCResultType (const DataLayout &DL, LLVMContext &Context, EVT VT) const override
 Return the ValueType of the result of SETCC operations.
 
bool shouldFormOverflowOp (unsigned Opcode, EVT VT, bool MathUsed) const override
 Try to convert math with an overflow comparison into the corresponding DAG node operation.
 
bool storeOfVectorConstantIsCheap (bool IsZero, EVT MemVT, unsigned NumElem, unsigned AddrSpace) const override
 Return true if it is expected to be cheaper to do a store of vector constant with the given size and type for the address space than to store the individual scalar element constants.
 
bool convertSetCCLogicToBitwiseLogic (EVT VT) const override
 Use bitwise logic to make pairs of compares more efficient.
 
bool convertSelectOfConstantsToMath (EVT VT) const override
 Return true if a select of constants (select Cond, C1, C2) should be transformed into simple math ops with the condition value.
 
bool isCtpopFast (EVT VT) const override
 Return true if ctpop instruction is fast.
 
unsigned getCustomCtpopCost (EVT VT, ISD::CondCode Cond) const override
 Return the maximum number of "x & (x - 1)" operations that can be done instead of deferring to a custom CTPOP.
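 The step being budgeted here is the classic clear-lowest-set-bit trick; the small illustrative helper below (hypothetical name) shows how one such step answers ctpop(x) == 1 without a full population count.
 
     #include <cstdint>
 
     // One "x & (x - 1)" step clears the lowest set bit, which is enough to
     // decide ctpop(x) == 1 without computing a full population count.
     bool hasSingleBitSet(uint64_t X) {
       return X != 0 && (X & (X - 1)) == 0;
     }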
 
bool preferZeroCompareBranch () const override
 Return true if the heuristic to prefer icmp eq zero should be used in code gen prepare.
 
bool shouldInsertFencesForAtomic (const Instruction *I) const override
 Whether AtomicExpandPass should automatically insert fences and reduce ordering for this atomic.
 
Instruction * emitLeadingFence (IRBuilderBase &Builder, Instruction *Inst, AtomicOrdering Ord) const override
 Inserts in the IR a target-specific intrinsic specifying a fence.
 
Instruction * emitTrailingFence (IRBuilderBase &Builder, Instruction *Inst, AtomicOrdering Ord) const override
 
bool isFMAFasterThanFMulAndFAdd (const MachineFunction &MF, EVT VT) const override
 Return true if an FMA operation is faster than a pair of fmul and fadd instructions.
 
ISD::NodeType getExtendForAtomicOps () const override
 Returns how the platform's atomic operations are extended (ZERO_EXTEND, SIGN_EXTEND, or ANY_EXTEND).
 
ISD::NodeType getExtendForAtomicCmpSwapArg () const override
 Returns how the platform's atomic compare and swap expects its comparison value to be extended (ZERO_EXTEND, SIGN_EXTEND, or ANY_EXTEND).
 
bool shouldTransformSignedTruncationCheck (EVT XVT, unsigned KeptBits) const override
 Should we transform the IR-optimal check for whether the given truncation down into KeptBits would be truncating or not: (add x, (1 << (KeptBits-1))) srccond (1 << KeptBits) into its more traditional form: ((x << C) a>> C) dstcond x? Return true if we should transform.
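 A small self-contained check that the two forms named above agree, using KeptBits = 8 on a 32-bit value and C = 32 - KeptBits. The constants and the assumption of an arithmetic right shift on int32_t are illustrative, not part of the hook's contract.
 
     #include <cassert>
     #include <cstdint>
 
     int main() {
       const unsigned KeptBits = 8, C = 32 - KeptBits;
       for (int64_t I = -70000; I <= 70000; ++I) {
         int32_t X = static_cast<int32_t>(I);
         // Range form: (x + (1 << (KeptBits-1))) u< (1 << KeptBits).
         bool RangeForm =
             static_cast<uint32_t>(X + (1 << (KeptBits - 1))) < (1u << KeptBits);
         // Shift form: ((x << C) a>> C) == x, i.e. sign-extending the kept bits
         // reproduces x. Assumes arithmetic right shift on int32_t.
         int32_t SExt = static_cast<int32_t>(static_cast<uint32_t>(X) << C) >> C;
         bool ShiftForm = SExt == X;
         assert(RangeForm == ShiftForm);
       }
       return 0;
     }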
 
TargetLowering::ShiftLegalizationStrategy preferredShiftLegalizationStrategy (SelectionDAG &DAG, SDNode *N, unsigned ExpansionFactor) const override
 
bool isDesirableToCommuteWithShift (const SDNode *N, CombineLevel Level) const override
 Return true if it is profitable to move this shift by a constant amount through its operand, adjusting any immediate operands as necessary to preserve semantics.
 
Register getExceptionPointerRegister (const Constant *PersonalityFn) const override
 If a physical register, this returns the register that receives the exception address on entry to an EH pad.
 
Register getExceptionSelectorRegister (const Constant *PersonalityFn) const override
 If a physical register, this returns the register that receives the exception typeid on entry to a landing pad.
 
bool shouldExtendTypeInLibCall (EVT Type) const override
 Returns true if arguments should be extended in lib calls.
 
bool shouldSignExtendTypeInLibCall (Type *Ty, bool IsSigned) const override
 Returns true if arguments should be sign-extended in lib calls.
 
Register getRegisterByName (const char *RegName, LLT VT, const MachineFunction &MF) const override
 Returns the register with the specified architectural or ABI name.
 
SDValue LowerFormalArguments (SDValue Chain, CallingConv::ID CallConv, bool IsVarArg, const SmallVectorImpl< ISD::InputArg > &Ins, const SDLoc &DL, SelectionDAG &DAG, SmallVectorImpl< SDValue > &InVals) const override
 This hook must be implemented to lower the incoming (formal) arguments, described by the Ins array, into the specified DAG.
 
bool CanLowerReturn (CallingConv::ID CallConv, MachineFunction &MF, bool IsVarArg, const SmallVectorImpl< ISD::OutputArg > &Outs, LLVMContext &Context, const Type *RetTy) const override
 This hook should be implemented to check whether the return values described by the Outs array can fit into the return registers.
 
SDValue LowerReturn (SDValue Chain, CallingConv::ID CallConv, bool IsVarArg, const SmallVectorImpl< ISD::OutputArg > &Outs, const SmallVectorImpl< SDValue > &OutVals, const SDLoc &DL, SelectionDAG &DAG) const override
 This hook must be implemented to lower outgoing return values, described by the Outs array, into the specified DAG.
 
SDValue LowerCall (TargetLowering::CallLoweringInfo &CLI, SmallVectorImpl< SDValue > &InVals) const override
 This hook must be implemented to lower calls into the specified DAG.
 
bool shouldConvertConstantLoadToIntImm (const APInt &Imm, Type *Ty) const override
 Return true if it is beneficial to convert a load of a constant to just the constant itself.
 
bool isUsedByReturnOnly (SDNode *N, SDValue &Chain) const override
 Return true if result of the specified node is used by a return node only.
 
bool mayBeEmittedAsTailCall (const CallInst *CI) const override
 Return true if the target may be able to emit the call instruction as a tail call.
 
bool shouldConsiderGEPOffsetSplit () const override
 
bool decomposeMulByConstant (LLVMContext &Context, EVT VT, SDValue C) const override
 Return true if it is profitable to transform an integer multiplication-by-constant into simpler operations like shifts and adds.
 
bool isMulAddWithConstProfitable (SDValue AddNode, SDValue ConstNode) const override
 Return true if it may be profitable to transform (mul (add x, c1), c2) -> (add (mul x, c2), c1*c2).
 
TargetLowering::AtomicExpansionKind shouldExpandAtomicRMWInIR (AtomicRMWInst *AI) const override
 Returns how the IR-level AtomicExpand pass should expand the given AtomicRMW, if at all.
 
Value * emitMaskedAtomicRMWIntrinsic (IRBuilderBase &Builder, AtomicRMWInst *AI, Value *AlignedAddr, Value *Incr, Value *Mask, Value *ShiftAmt, AtomicOrdering Ord) const override
 Perform a masked atomicrmw using a target-specific intrinsic.
 
TargetLowering::AtomicExpansionKind shouldExpandAtomicCmpXchgInIR (AtomicCmpXchgInst *CI) const override
 Returns how the given atomic cmpxchg should be expanded by the IR-level AtomicExpand pass.
 
Value * emitMaskedAtomicCmpXchgIntrinsic (IRBuilderBase &Builder, AtomicCmpXchgInst *CI, Value *AlignedAddr, Value *CmpVal, Value *NewVal, Value *Mask, AtomicOrdering Ord) const override
 Perform a masked cmpxchg using a target-specific intrinsic.
 
bool allowsMisalignedMemoryAccesses (EVT VT, unsigned AddrSpace=0, Align Alignment=Align(1), MachineMemOperand::Flags Flags=MachineMemOperand::MONone, unsigned *Fast=nullptr) const override
 Returns true if the target allows unaligned memory accesses of the specified type.
 
EVT getOptimalMemOpType (const MemOp &Op, const AttributeList &FuncAttributes) const override
 Returns the target specific optimal type for load and store operations as a result of memset, memcpy, and memmove lowering.
 
bool splitValueIntoRegisterParts (SelectionDAG &DAG, const SDLoc &DL, SDValue Val, SDValue *Parts, unsigned NumParts, MVT PartVT, std::optional< CallingConv::ID > CC) const override
 Target-specific splitting of values into parts that fit a register storing a legal type.
 
SDValue joinRegisterPartsIntoValue (SelectionDAG &DAG, const SDLoc &DL, const SDValue *Parts, unsigned NumParts, MVT PartVT, EVT ValueVT, std::optional< CallingConv::ID > CC) const override
 Target-specific combining of register parts into its original value.
 
SDValue computeVLMax (MVT VecVT, const SDLoc &DL, SelectionDAG &DAG) const
 
MVT getContainerForFixedLengthVector (MVT VT) const
 
bool shouldRemoveExtendFromGSIndex (SDValue Extend, EVT DataVT) const override
 
bool isLegalElementTypeForRVV (EVT ScalarTy) const
 
bool shouldConvertFpToSat (unsigned Op, EVT FPVT, EVT VT) const override
 Should we generate fp_to_si_sat and fp_to_ui_sat from type FPVT to type VT from min(max(fptoi)) saturation patterns.
 
unsigned getJumpTableEncoding () const override
 Return the entry encoding for a jump table in the current function.
 
const MCExpr * LowerCustomJumpTableEntry (const MachineJumpTableInfo *MJTI, const MachineBasicBlock *MBB, unsigned uid, MCContext &Ctx) const override
 
bool isVScaleKnownToBeAPowerOfTwo () const override
 Return true only if vscale must be a power of two.
 
bool getIndexedAddressParts (SDNode *Op, SDValue &Base, SDValue &Offset, ISD::MemIndexedMode &AM, SelectionDAG &DAG) const
 
bool getPreIndexedAddressParts (SDNode *N, SDValue &Base, SDValue &Offset, ISD::MemIndexedMode &AM, SelectionDAG &DAG) const override
 Returns true by value, base pointer and offset pointer and addressing mode by reference if the node's address can be legally represented as pre-indexed load / store address.
 
bool getPostIndexedAddressParts (SDNode *N, SDNode *Op, SDValue &Base, SDValue &Offset, ISD::MemIndexedMode &AM, SelectionDAG &DAG) const override
 Returns true by value, base pointer and offset pointer and addressing mode by reference if this node can be combined with a load / store to form a post-indexed load / store.
 
bool isLegalScaleForGatherScatter (uint64_t Scale, uint64_t ElemSize) const override
 
Value * getIRStackGuard (IRBuilderBase &IRB) const override
 If the target has a standard location for the stack protector cookie, returns the address of that location.
 
bool isLegalInterleavedAccessType (VectorType *VTy, unsigned Factor, Align Alignment, unsigned AddrSpace, const DataLayout &) const
 Returns whether or not generating an interleaved load/store intrinsic for this type will be legal.
 
bool isLegalStridedLoadStore (EVT DataType, Align Alignment) const
 Return true if a strided load or store of the given result type and alignment is legal.
 
unsigned getMaxSupportedInterleaveFactor () const override
 Get the maximum supported factor for interleaved memory accesses.
 
bool fallBackToDAGISel (const Instruction &Inst) const override
 
bool lowerInterleavedLoad (LoadInst *LI, ArrayRef< ShuffleVectorInst * > Shuffles, ArrayRef< unsigned > Indices, unsigned Factor) const override
 Lower an interleaved load into a vlsegN intrinsic.
 
bool lowerInterleavedStore (StoreInst *SI, ShuffleVectorInst *SVI, unsigned Factor) const override
 Lower an interleaved store into a vssegN intrinsic.
 
bool lowerDeinterleaveIntrinsicToLoad (LoadInst *LI, ArrayRef< Value * > DeinterleaveValues) const override
 Lower a deinterleave intrinsic to a target specific load intrinsic.
 
bool lowerInterleaveIntrinsicToStore (StoreInst *SI, ArrayRef< Value * > InterleaveValues) const override
 Lower an interleave intrinsic to a target specific store intrinsic.
 
bool supportKCFIBundles () const override
 Return true if the target supports kcfi operand bundles.
 
SDValue expandIndirectJTBranch (const SDLoc &dl, SDValue Value, SDValue Addr, int JTI, SelectionDAG &DAG) const override
 Expands target specific indirect branch for the case of JumpTable expansion.
 
MachineInstr * EmitKCFICheck (MachineBasicBlock &MBB, MachineBasicBlock::instr_iterator &MBBI, const TargetInstrInfo *TII) const override
 
bool hasInlineStackProbe (const MachineFunction &MF) const override
 True if stack clash protection is enabled for this function.
 
unsigned getStackProbeSize (const MachineFunction &MF, Align StackAlign) const
 
MachineBasicBlock * emitDynamicProbedAlloc (MachineInstr &MI, MachineBasicBlock *MBB) const
 
- Public Member Functions inherited from llvm::TargetLowering
 TargetLowering (const TargetLowering &)=delete
 
TargetLowering & operator= (const TargetLowering &)=delete
 
 TargetLowering (const TargetMachine &TM)
 NOTE: The TargetMachine owns TLOF.
 
bool isPositionIndependent ()const
 
virtualbool isSDNodeSourceOfDivergence (constSDNode *N,FunctionLoweringInfo *FLI,UniformityInfo *UA)const
 
virtualbool isReassocProfitable (SelectionDAG &DAG,SDValue N0,SDValue N1)const
 
virtualbool isReassocProfitable (MachineRegisterInfo &MRI,Register N0,Register N1)const
 
virtualbool isSDNodeAlwaysUniform (constSDNode *N)const
 
virtualbool getPreIndexedAddressParts (SDNode *,SDValue &,SDValue &,ISD::MemIndexedMode &,SelectionDAG &)const
 Returns true by value, base pointer and offset pointer and addressing mode by reference if the node's address can be legally represented as pre-indexed load / store address.
 
virtualbool getPostIndexedAddressParts (SDNode *,SDNode *,SDValue &,SDValue &,ISD::MemIndexedMode &,SelectionDAG &)const
 Returns true by value, base pointer and offset pointer and addressing mode by reference if this node can be combined with a load / store to form a post-indexed load / store.
 
virtualbool isIndexingLegal (MachineInstr &MI,RegisterBase,RegisterOffset,bool IsPre,MachineRegisterInfo &MRI)const
 Returns true if the specified base+offset is a legal indexed addressing mode for this target.
 
virtualunsigned getJumpTableEncoding ()const
 Return the entry encoding for a jump table in the current function.
 
virtualMVT getJumpTableRegTy (constDataLayout &DL)const
 
virtual const MCExpr * LowerCustomJumpTableEntry (const MachineJumpTableInfo *, const MachineBasicBlock *, unsigned, MCContext &) const
 
virtual SDValue getPICJumpTableRelocBase (SDValue Table, SelectionDAG &DAG) const
 Returns relocation base for the given PIC jumptable.
 
virtual const MCExpr * getPICJumpTableRelocBaseExpr (const MachineFunction *MF, unsigned JTI, MCContext &Ctx) const
 This returns the relocation base for the given PIC jumptable, the same as getPICJumpTableRelocBase, but as an MCExpr.
 
virtualbool isOffsetFoldingLegal (constGlobalAddressSDNode *GA)const
 Return true if folding a constant offset with the given GlobalAddress is legal.
 
virtualbool isInlineAsmTargetBranch (constSmallVectorImpl<StringRef > &AsmStrs,unsigned OpNo)const
 On x86, return true if the operand with index OpNo is a CALL or JUMP instruction, which can use either a memory constraint or an address constraint.
 
bool isInTailCallPosition (SelectionDAG &DAG,SDNode *Node,SDValue &Chain)const
 Check whether a given call node is in tail position within its function.
 
void softenSetCCOperands (SelectionDAG &DAG,EVT VT,SDValue &NewLHS,SDValue &NewRHS,ISD::CondCode &CCCode,constSDLoc &DL,constSDValue OldLHS,constSDValue OldRHS)const
 Soften the operands of a comparison.
 
void softenSetCCOperands (SelectionDAG &DAG,EVT VT,SDValue &NewLHS,SDValue &NewRHS,ISD::CondCode &CCCode,constSDLoc &DL,constSDValue OldLHS,constSDValue OldRHS,SDValue &Chain,bool IsSignaling=false)const
 
virtualSDValue visitMaskedLoad (SelectionDAG &DAG,constSDLoc &DL,SDValue Chain,MachineMemOperand *MMO,SDValue &NewLoad,SDValuePtr,SDValue PassThru,SDValue Mask)const
 
virtualSDValue visitMaskedStore (SelectionDAG &DAG,constSDLoc &DL,SDValue Chain,MachineMemOperand *MMO,SDValuePtr,SDValue Val,SDValue Mask)const
 
std::pair< SDValue, SDValue > makeLibCall (SelectionDAG &DAG, RTLIB::Libcall LC, EVT RetVT, ArrayRef< SDValue > Ops, MakeLibCallOptions CallOptions, const SDLoc &dl, SDValue Chain=SDValue()) const
 Returns a pair of (return value, chain).
 
bool parametersInCSRMatch (constMachineRegisterInfo &MRI,constuint32_t *CallerPreservedMask,constSmallVectorImpl<CCValAssign > &ArgLocs,constSmallVectorImpl<SDValue > &OutVals)const
 Check whether parameters to a call that are passed in callee saved registers are the same as from the calling function.
 
virtualbool findOptimalMemOpLowering (std::vector<EVT > &MemOps,unsigned Limit,constMemOp &Op,unsigned DstAS,unsigned SrcAS,constAttributeList &FuncAttributes)const
 Determines the optimal series of memory ops to replace the memset / memcpy.
 
bool ShrinkDemandedConstant (SDValueOp,constAPInt &DemandedBits,constAPInt &DemandedElts,TargetLoweringOpt &TLO)const
 Check to see if the specified operand of the specified instruction is a constant integer.
 
bool ShrinkDemandedConstant (SDValueOp,constAPInt &DemandedBits,TargetLoweringOpt &TLO)const
 Helper wrapper around ShrinkDemandedConstant, demanding all elements.
 
virtualbool targetShrinkDemandedConstant (SDValueOp,constAPInt &DemandedBits,constAPInt &DemandedElts,TargetLoweringOpt &TLO)const
 
bool ShrinkDemandedOp (SDValueOp,unsignedBitWidth,constAPInt &DemandedBits,TargetLoweringOpt &TLO)const
 Convert x+y to (VT)((SmallVT)x+(SmallVT)y) if the casts are free.
 
bool SimplifyDemandedBits (SDValueOp,constAPInt &DemandedBits,constAPInt &DemandedElts,KnownBits &Known,TargetLoweringOpt &TLO,unsignedDepth=0,bool AssumeSingleUse=false)const
 Look at Op.
 
bool SimplifyDemandedBits (SDValueOp,constAPInt &DemandedBits,KnownBits &Known,TargetLoweringOpt &TLO,unsignedDepth=0,bool AssumeSingleUse=false)const
 Helper wrapper around SimplifyDemandedBits, demanding all elements.
 
bool SimplifyDemandedBits (SDValueOp,constAPInt &DemandedBits,DAGCombinerInfo &DCI)const
 Helper wrapper around SimplifyDemandedBits.
 
bool SimplifyDemandedBits (SDValueOp,constAPInt &DemandedBits,constAPInt &DemandedElts,DAGCombinerInfo &DCI)const
 Helper wrapper around SimplifyDemandedBits.
 
SDValue SimplifyMultipleUseDemandedBits (SDValueOp,constAPInt &DemandedBits,constAPInt &DemandedElts,SelectionDAG &DAG,unsignedDepth=0)const
 More limited version of SimplifyDemandedBits that can be used to "lookthrough" ops that don't contribute to the DemandedBits/DemandedElts - bitwise ops etc.
 
SDValue SimplifyMultipleUseDemandedBits (SDValueOp,constAPInt &DemandedBits,SelectionDAG &DAG,unsignedDepth=0)const
 Helper wrapper around SimplifyMultipleUseDemandedBits, demanding all elements.
 
SDValue SimplifyMultipleUseDemandedVectorElts (SDValueOp,constAPInt &DemandedElts,SelectionDAG &DAG,unsignedDepth=0)const
 Helper wrapper around SimplifyMultipleUseDemandedBits, demanding all bits from only some vector elements.
 
bool SimplifyDemandedVectorElts (SDValueOp,constAPInt &DemandedEltMask,APInt &KnownUndef,APInt &KnownZero,TargetLoweringOpt &TLO,unsignedDepth=0,bool AssumeSingleUse=false)const
 Look at Vector Op.
 
bool SimplifyDemandedVectorElts (SDValueOp,constAPInt &DemandedElts,DAGCombinerInfo &DCI)const
 Helper wrapper around SimplifyDemandedVectorElts.
 
virtualbool shouldSimplifyDemandedVectorElts (SDValueOp,constTargetLoweringOpt &TLO)const
 Return true if the target supports simplifying demanded vector elements by converting them to undefs.
 
virtual void computeKnownBitsForTargetNode (constSDValueOp,KnownBits &Known,constAPInt &DemandedElts,constSelectionDAG &DAG,unsignedDepth=0)const
 Determine which of the bits specified in Mask are known to be either zero or one and return them in the KnownZero/KnownOne bitsets.
 
virtual void computeKnownBitsForTargetInstr (GISelKnownBits &Analysis,Register R,KnownBits &Known,constAPInt &DemandedElts,constMachineRegisterInfo &MRI,unsignedDepth=0)const
 Determine which of the bits specified in Mask are known to be either zero or one and return them in the KnownZero/KnownOne bitsets.
 
virtualAlign computeKnownAlignForTargetInstr (GISelKnownBits &Analysis,Register R,constMachineRegisterInfo &MRI,unsignedDepth=0)const
 Determine the known alignment for the pointer valueR.
 
virtual void computeKnownBitsForFrameIndex (int FIOp,KnownBits &Known,constMachineFunction &MF)const
 Determine which of the bits of FrameIndexFIOp are known to be 0.
 
virtualunsigned ComputeNumSignBitsForTargetNode (SDValueOp,constAPInt &DemandedElts,constSelectionDAG &DAG,unsignedDepth=0)const
 This method can be implemented by targets that want to expose additional information about sign bits to the DAGCombiner.
 
virtualunsigned computeNumSignBitsForTargetInstr (GISelKnownBits &Analysis,Register R,constAPInt &DemandedElts,constMachineRegisterInfo &MRI,unsignedDepth=0)const
 This method can be implemented by targets that want to expose additional information about sign bits to GlobalISel combiners.
 
virtualbool SimplifyDemandedVectorEltsForTargetNode (SDValueOp,constAPInt &DemandedElts,APInt &KnownUndef,APInt &KnownZero,TargetLoweringOpt &TLO,unsignedDepth=0)const
 Attempt to simplify any target nodes based on the demanded vector elements, returning true on success.
 
virtualbool SimplifyDemandedBitsForTargetNode (SDValueOp,constAPInt &DemandedBits,constAPInt &DemandedElts,KnownBits &Known,TargetLoweringOpt &TLO,unsignedDepth=0)const
 Attempt to simplify any target nodes based on the demanded bits/elts, returning true on success.
 
virtualSDValue SimplifyMultipleUseDemandedBitsForTargetNode (SDValueOp,constAPInt &DemandedBits,constAPInt &DemandedElts,SelectionDAG &DAG,unsignedDepth)const
 More limited version of SimplifyDemandedBits that can be used to "lookthrough" ops that don't contribute to the DemandedBits/DemandedElts - bitwise ops etc.
 
virtualbool isGuaranteedNotToBeUndefOrPoisonForTargetNode (SDValueOp,constAPInt &DemandedElts,constSelectionDAG &DAG,boolPoisonOnly,unsignedDepth)const
 Return true if this function can prove thatOp is never poison and, ifPoisonOnly is false, does not have undef bits.
 
virtualbool canCreateUndefOrPoisonForTargetNode (SDValueOp,constAPInt &DemandedElts,constSelectionDAG &DAG,boolPoisonOnly,bool ConsiderFlags,unsignedDepth)const
 Return true if Op can create undef or poison from non-undef & non-poison operands.
 
SDValue buildLegalVectorShuffle (EVT VT,constSDLoc &DL,SDValue N0,SDValue N1,MutableArrayRef< int > Mask,SelectionDAG &DAG)const
 Tries to build a legal vector shuffle using the provided parameters or equivalent variations.
 
virtual const Constant * getTargetConstantFromLoad (LoadSDNode *LD) const
 This method returns the constant pool value that will be loaded by LD.
 
virtual bool isKnownNeverNaNForTargetNode (SDValue Op, const SelectionDAG &DAG, bool SNaN=false, unsigned Depth=0) const
 If SNaN is false, returns true if Op is known to never be any NaN; if SNaN is true, returns whether Op is known to never be a signaling NaN.
 
virtualbool isSplatValueForTargetNode (SDValueOp,constAPInt &DemandedElts,APInt &UndefElts,constSelectionDAG &DAG,unsignedDepth=0)const
 Return true if vectorOp has the same value across allDemandedElts, indicating any elements which may be undef in the outputUndefElts.
 
virtualbool isTargetCanonicalConstantNode (SDValueOp)const
 Returns true if the given Opc is considered a canonical constant for the target, which should not be transformed back into a BUILD_VECTOR.
 
bool isConstTrueVal (SDValueN)const
 Return if the N is a constant or constant vector equal to the true value fromgetBooleanContents().
 
bool isConstFalseVal (SDValueN)const
 Return if the N is a constant or constant vector equal to the false value fromgetBooleanContents().
 
bool isExtendedTrueVal (constConstantSDNode *N,EVT VT,bool SExt)const
 Return ifN is a True value when extended toVT.
 
SDValue SimplifySetCC (EVT VT,SDValue N0,SDValue N1,ISD::CondCodeCond,bool foldBooleans,DAGCombinerInfo &DCI,constSDLoc &dl)const
 Try to simplify a setcc built with the specified operands and cc.
 
virtualSDValue unwrapAddress (SDValueN)const
 
virtual bool isGAPlusOffset (SDNode *N, const GlobalValue *&GA, int64_t &Offset) const
 Returns true (and the GlobalValue and the offset) if the node is a GlobalAddress + offset.
 
virtualSDValue PerformDAGCombine (SDNode *N,DAGCombinerInfo &DCI)const
 This method will be invoked for all target nodes and for any target-independent nodes that the target has registered with invoke it for.
 
virtualbool isDesirableToCommuteWithShift (constSDNode *N,CombineLevel Level)const
 Return true if it is profitable to move this shift by a constant amount through its operand, adjusting any immediate operands as necessary to preserve semantics.
 
virtualbool isDesirableToCommuteWithShift (constMachineInstr &MI,bool IsAfterLegal)const
 GlobalISel - return true if it is profitable to move this shift by a constant amount through its operand, adjusting any immediate operands as necessary to preserve semantics.
 
virtualbool isDesirableToPullExtFromShl (constMachineInstr &MI)const
 GlobalISel - return true if it's profitable to perform the combine: shl ([sza]ext x), y => zext (shl x, y)
 
virtualAndOrSETCCFoldKind isDesirableToCombineLogicOpOfSETCC (constSDNode *LogicOp,constSDNode *SETCC0,constSDNode *SETCC1)const
 
virtualbool isDesirableToCommuteXorWithShift (constSDNode *N)const
 Return true if it is profitable to combine an XOR of a logical shift to create a logical shift of NOT.
 
virtualbool isTypeDesirableForOp (unsigned,EVT VT)const
 Return true if the target has native support for the specified value type and it is 'desirable' to use the type for the given node type.
 
virtual bool isDesirableToTransformToIntegerOp (unsigned, EVT) const
 Return true if it is profitable for dag combiner to transform a floating point op of specified opcode to an equivalent op of an integer type.
 
virtual bool IsDesirableToPromoteOp (SDValue, EVT &) const
 This method queries the target whether it is beneficial for dag combiner to promote the specified node.
 
virtualbool supportSwiftError ()const
 Return true if the target supports swifterror attribute.
 
virtualbool supportSplitCSR (MachineFunction *MF)const
 Return true if the target supports that a subset of CSRs for the given machine function is handled explicitly via copies.
 
virtualbool supportKCFIBundles ()const
 Return true if the target supports kcfi operand bundles.
 
virtualbool supportPtrAuthBundles ()const
 Return true if the target supports ptrauth operand bundles.
 
virtual void initializeSplitCSR (MachineBasicBlock *Entry)const
 Perform necessary initialization to handle a subset of CSRs explicitly via copies.
 
virtual void insertCopiesSplitCSR (MachineBasicBlock *Entry,constSmallVectorImpl<MachineBasicBlock * > &Exits)const
 Insert explicit copies in entry and exit blocks.
 
virtualSDValue getNegatedExpression (SDValueOp,SelectionDAG &DAG,bool LegalOps,bool OptForSize,NegatibleCost &Cost,unsignedDepth=0)const
 Return the newly negated expression if the cost is not expensive and set the cost inCost to indicate that if it is cheaper or neutral to do the negation.
 
SDValue getCheaperOrNeutralNegatedExpression (SDValueOp,SelectionDAG &DAG,bool LegalOps,bool OptForSize,constNegatibleCostCostThreshold=NegatibleCost::Neutral,unsignedDepth=0)const
 
SDValue getCheaperNegatedExpression (SDValueOp,SelectionDAG &DAG,bool LegalOps,bool OptForSize,unsignedDepth=0)const
 This is the helper function to return the newly negated expression only when the cost is cheaper.
 
SDValue getNegatedExpression (SDValueOp,SelectionDAG &DAG,bool LegalOps,bool OptForSize,unsignedDepth=0)const
 This is the helper function to return the newly negated expression if the cost is not expensive.
 
virtualbool splitValueIntoRegisterParts (SelectionDAG &DAG,constSDLoc &DL,SDValue Val,SDValue *Parts,unsigned NumParts,MVT PartVT, std::optional<CallingConv::ID >CC)const
 Target-specific splitting of values into parts that fit a register storing a legal type.
 
virtualbool checkForPhysRegDependency (SDNode *Def,SDNode *User,unsignedOp,constTargetRegisterInfo *TRI,constTargetInstrInfo *TII,unsigned &PhysReg, int &Cost)const
 Allows the target to handle physreg-carried dependency in target-specific way.
 
virtualSDValue joinRegisterPartsIntoValue (SelectionDAG &DAG,constSDLoc &DL,constSDValue *Parts,unsigned NumParts,MVT PartVT,EVT ValueVT, std::optional<CallingConv::ID >CC)const
 Target-specific combining of register parts into its original value.
 
virtualSDValue LowerFormalArguments (SDValue,CallingConv::ID,bool,constSmallVectorImpl<ISD::InputArg > &,constSDLoc &,SelectionDAG &,SmallVectorImpl<SDValue > &)const
 This hook must be implemented to lower the incoming (formal) arguments, described by the Ins array, into the specified DAG.
 
std::pair< SDValue, SDValue > LowerCallTo (CallLoweringInfo &CLI) const
 This function lowers an abstract call to a function into an actual call.
 
virtualSDValue LowerCall (CallLoweringInfo &,SmallVectorImpl<SDValue > &)const
 This hook must be implemented to lower calls into the specified DAG.
 
virtual void HandleByVal (CCState *,unsigned &,Align)const
 Target-specific cleanup for formal ByVal parameters.
 
virtualbool CanLowerReturn (CallingConv::ID,MachineFunction &,bool,constSmallVectorImpl<ISD::OutputArg > &,LLVMContext &,constType *RetTy)const
 This hook should be implemented to check whether the return values described by the Outs array can fit into the return registers.
 
virtualSDValue LowerReturn (SDValue,CallingConv::ID,bool,constSmallVectorImpl<ISD::OutputArg > &,constSmallVectorImpl<SDValue > &,constSDLoc &,SelectionDAG &)const
 This hook must be implemented to lower outgoing return values, described by the Outs array, into the specified DAG.
 
virtualbool isUsedByReturnOnly (SDNode *,SDValue &)const
 Return true if result of the specified node is used by a return node only.
 
virtual bool mayBeEmittedAsTailCall (const CallInst *) const
 Return true if the target may be able to emit the call instruction as a tail call.
 
virtualRegister getRegisterByName (constchar *RegName,LLT Ty,constMachineFunction &MF)const
 Return the register ID of the name passed in.
 
virtualEVT getTypeForExtReturn (LLVMContext &Context,EVT VT,ISD::NodeType)const
 Return the type that should be used to zero or sign extend a zeroext/signext integer return value.
 
virtualbool functionArgumentNeedsConsecutiveRegisters (Type *Ty,CallingConv::ID CallConv,bool isVarArg,constDataLayout &DL)const
 For some targets, an LLVM struct type must be broken down into multiple simple types, but the calling convention specifies that the entire struct must be passed in a block of consecutive registers.
 
virtualbool shouldSplitFunctionArgumentsAsLittleEndian (constDataLayout &DL)const
 For most targets, an LLVM type must be broken down into multiple smaller types.
 
virtual const MCPhysReg * getScratchRegisters (CallingConv::ID CC) const
 Returns a 0 terminated array of registers that can be safely used as scratch registers.
 
virtual ArrayRef< MCPhysReg > getRoundingControlRegisters () const
 Returns a 0 terminated array of rounding control registers that can be attached into strict FP call.
 
virtualSDValue prepareVolatileOrAtomicLoad (SDValue Chain,constSDLoc &DL,SelectionDAG &DAG)const
 This callback is used to prepare for a volatile or atomic load.
 
virtual void LowerOperationWrapper (SDNode *N,SmallVectorImpl<SDValue > &Results,SelectionDAG &DAG)const
 This callback is invoked by the type legalizer to legalize nodes with an illegal operand type but legal result types.
 
virtualSDValue LowerOperation (SDValueOp,SelectionDAG &DAG)const
 This callback is invoked for operations that are unsupported by the target, which are registered to use 'custom' lowering, and whose defined values are all legal.
 
virtual void ReplaceNodeResults (SDNode *,SmallVectorImpl<SDValue > &,SelectionDAG &)const
 This callback is invoked when a node result type is illegal for the target, and the operation was registered to use 'custom' lowering for that result type.
 
virtual const char * getTargetNodeName (unsigned Opcode) const
 This method returns the name of a target specific DAG node.
 
virtual FastISel * createFastISel (FunctionLoweringInfo &, const TargetLibraryInfo *) const
 This method returns a target specificFastISel object, or null if the target does not support "fast" ISel.
 
bool verifyReturnAddressArgumentIsConstant (SDValueOp,SelectionDAG &DAG)const
 
virtual void verifyTargetSDNode (const SDNode *N) const
 Check the given SDNode. Aborts if it is invalid.
 
virtualbool ExpandInlineAsm (CallInst *)const
 This hook allows the target to expand an inline asm call to be explicit llvm code if it wants to.
 
virtualAsmOperandInfoVector ParseConstraints (constDataLayout &DL,constTargetRegisterInfo *TRI,constCallBase &Call)const
 Split up the constraint string from the inline assembly value into the specific constraints and their prefixes, and also tie in the associated operand values.
 
virtualConstraintWeight getMultipleConstraintMatchWeight (AsmOperandInfo &info, int maIndex)const
 Examine constraint type and operand type and determine a weight value.
 
virtualConstraintWeight getSingleConstraintMatchWeight (AsmOperandInfo &info,constchar *constraint)const
 Examine constraint string and operand type and determine a weight value.
 
virtual void ComputeConstraintToUse (AsmOperandInfo &OpInfo,SDValueOp,SelectionDAG *DAG=nullptr)const
 Determines the constraint code and constraint type to use for the specificAsmOperandInfo, setting OpInfo.ConstraintCode and OpInfo.ConstraintType.
 
virtualConstraintType getConstraintType (StringRef Constraint)const
 Given a constraint, return the type of constraint it is for this target.
 
ConstraintGroup getConstraintPreferences (AsmOperandInfo &OpInfo)const
 Given an OpInfo with list of constraints codes as strings, return a sorted Vector of pairs of constraint codes and their types in priority of what we'd prefer to lower them as.
 
virtual std::pair< unsigned, const TargetRegisterClass * > getRegForInlineAsmConstraint (const TargetRegisterInfo *TRI, StringRef Constraint, MVT VT) const
 Given a physical register constraint (e.g. {edx}), return the register number and the register class for the register.
 
virtualInlineAsm::ConstraintCode getInlineAsmMemConstraint (StringRef ConstraintCode)const
 
virtual const char * LowerXConstraint (EVT ConstraintVT) const
 Try to replace an X constraint, which matches anything, with another that has more specific requirements based on the type of the corresponding operand.
 
virtual void LowerAsmOperandForConstraint (SDValueOp,StringRef Constraint, std::vector<SDValue > &Ops,SelectionDAG &DAG)const
 Lower the specified operand into the Ops vector.
 
virtualSDValue LowerAsmOutputForConstraint (SDValue &Chain,SDValue &Glue,constSDLoc &DL,constAsmOperandInfo &OpInfo,SelectionDAG &DAG)const
 
virtual void CollectTargetIntrinsicOperands (constCallInst &I,SmallVectorImpl<SDValue > &Ops,SelectionDAG &DAG)const
 
SDValue BuildSDIV (SDNode *N,SelectionDAG &DAG,bool IsAfterLegalization,bool IsAfterLegalTypes,SmallVectorImpl<SDNode * > &Created)const
 Given anISD::SDIV node expressing a divide by constant, return a DAG expression to select that will generate the same value by multiplying by a magic number.
 
SDValue BuildUDIV (SDNode *N,SelectionDAG &DAG,bool IsAfterLegalization,bool IsAfterLegalTypes,SmallVectorImpl<SDNode * > &Created)const
 Given anISD::UDIV node expressing a divide by constant, return a DAG expression to select that will generate the same value by multiplying by a magic number.
 
SDValue buildSDIVPow2WithCMov (SDNode *N, const APInt &Divisor, SelectionDAG &DAG, SmallVectorImpl< SDNode * > &Created) const
 Build sdiv by power-of-2 with conditional move instructions (Ref: "Hacker's Delight" by Henry Warren, 10-1). If a conditional move/branch is preferred, we lower sdiv x, +/-2**k into: bgez x, label; add x, x, 2**k-1; label: sra res, x, k; neg res, res (when the divisor is negative).
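 A scalar C++ sketch of the sequence described above (assuming an arithmetic right shift on int32_t), checked against ordinary truncating division. It mirrors the conditional-move form rather than the exact generated MIR.
 
     #include <cassert>
     #include <cstdint>
 
     // Divide by +/-2**K, rounding toward zero: bias negative dividends by
     // 2**K - 1, arithmetic-shift right by K, negate for a negative divisor.
     int32_t sdivByPow2(int32_t X, unsigned K, bool NegativeDivisor) {
       int32_t Biased = X < 0 ? X + ((1 << K) - 1) : X; // conditional add
       int32_t Res = Biased >> K;                       // sra
       return NegativeDivisor ? -Res : Res;             // neg for -2**K
     }
 
     int main() {
       for (int32_t X = -1000; X <= 1000; ++X) {
         assert(sdivByPow2(X, 3, false) == X / 8);
         assert(sdivByPow2(X, 3, true) == X / -8);
       }
       return 0;
     }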
 
virtualSDValue BuildSDIVPow2 (SDNode *N,constAPInt &Divisor,SelectionDAG &DAG,SmallVectorImpl<SDNode * > &Created)const
 Targets may override this function to provide custom SDIV lowering for power-of-2 denominators.
 
virtualSDValue BuildSREMPow2 (SDNode *N,constAPInt &Divisor,SelectionDAG &DAG,SmallVectorImpl<SDNode * > &Created)const
 Targets may override this function to provide custom SREM lowering for power-of-2 denominators.
 
virtualunsigned combineRepeatedFPDivisors ()const
 Indicate whether this target prefers to combine FDIVs with the same divisor.
 
virtualSDValue getSqrtEstimate (SDValue Operand,SelectionDAG &DAG, intEnabled, int &RefinementSteps,bool &UseOneConstNR,bool Reciprocal)const
 Hooks for building estimates in place of slower divisions and square roots.
 
SDValue createSelectForFMINNUM_FMAXNUM (SDNode *Node,SelectionDAG &DAG)const
 Try to convert the fminnum/fmaxnum to a compare/select sequence.
 
virtualSDValue getRecipEstimate (SDValue Operand,SelectionDAG &DAG, intEnabled, int &RefinementSteps)const
 Return a reciprocal estimate value for the input operand.
 
virtualSDValue getSqrtInputTest (SDValue Operand,SelectionDAG &DAG,constDenormalMode &Mode)const
 Return a target-dependent comparison result if the input operand is suitable for use with a square root estimate calculation.
 
virtualSDValue getSqrtResultForDenormInput (SDValue Operand,SelectionDAG &DAG)const
 Return a target-dependent result if the input operand is not suitable for use with a square root estimate calculation.
 
bool expandMUL_LOHI (unsigned Opcode, EVT VT, const SDLoc &dl, SDValue LHS, SDValue RHS, SmallVectorImpl< SDValue > &Result, EVT HiLoVT, SelectionDAG &DAG, MulExpansionKind Kind, SDValue LL=SDValue(), SDValue LH=SDValue(), SDValue RL=SDValue(), SDValue RH=SDValue()) const
 Expand a MUL or [US]MUL_LOHI of n-bit values into two or four nodes, respectively, each computing an n/2-bit part of the result.
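 Worked example of the n/2-bit splitting for the unsigned case: with A = AH*2^32 + AL and B = BH*2^32 + BL, the full product is AL*BL + ((AL*BH + AH*BL) << 32) + (AH*BH << 64). The helper below is illustrative only (plain C++, not the DAG code) and builds a 64x64 -> 128-bit product from 32-bit halves.
 
     #include <cassert>
     #include <cstdint>
 
     struct U128 { uint64_t Lo, Hi; };
 
     // 64x64 -> 128-bit unsigned multiply from four 32x32 -> 64-bit products.
     U128 umulLoHi(uint64_t A, uint64_t B) {
       uint64_t AL = A & 0xffffffffu, AH = A >> 32;
       uint64_t BL = B & 0xffffffffu, BH = B >> 32;
       uint64_t LL = AL * BL, LH = AL * BH, HL = AH * BL, HH = AH * BH;
       uint64_t Mid = LH + (LL >> 32); // cannot overflow 64 bits
       uint64_t Mid2 = Mid + HL;       // may carry into the high half
       uint64_t Carry = Mid2 < Mid ? 1 : 0;
       U128 R;
       R.Lo = (Mid2 << 32) | (LL & 0xffffffffu);
       R.Hi = HH + (Mid2 >> 32) + (Carry << 32);
       return R;
     }
 
     int main() {
       uint64_t A = 0xdeadbeefcafebabeULL, B = 0x123456789abcdef0ULL;
       // The low half must match ordinary wrapping 64-bit multiplication.
       assert(umulLoHi(A, B).Lo == A * B);
       return 0;
     }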
 
bool expandMUL (SDNode *N,SDValue &Lo,SDValue &Hi,EVT HiLoVT,SelectionDAG &DAG,MulExpansionKind Kind,SDValue LL=SDValue(),SDValue LH=SDValue(),SDValue RL=SDValue(),SDValue RH=SDValue())const
 Expand a MUL into two nodes.
 
bool expandDIVREMByConstant (SDNode *N,SmallVectorImpl<SDValue > &Result,EVT HiLoVT,SelectionDAG &DAG,SDValue LL=SDValue(),SDValue LH=SDValue())const
 Attempt to expand an n-bit div/rem/divrem by constant using a n/2-bit urem by constant and other arithmetic ops.
 
SDValue expandFunnelShift (SDNode *N,SelectionDAG &DAG)const
 Expand funnel shift.
 
SDValue expandROT (SDNode *N,bool AllowVectorOps,SelectionDAG &DAG)const
 Expand rotations.
 
void expandShiftParts (SDNode *N,SDValue &Lo,SDValue &Hi,SelectionDAG &DAG)const
 Expand shift-by-parts.
 
bool expandFP_TO_SINT (SDNode *N,SDValue &Result,SelectionDAG &DAG)const
 Expand float(f32) to SINT(i64) conversion.
 
bool expandFP_TO_UINT (SDNode *N,SDValue &Result,SDValue &Chain,SelectionDAG &DAG)const
 Expand float to UINT conversion.
 
bool expandUINT_TO_FP (SDNode *N,SDValue &Result,SDValue &Chain,SelectionDAG &DAG)const
 Expand UINT(i64) to double(f64) conversion.
 
SDValue expandFMINNUM_FMAXNUM (SDNode *N,SelectionDAG &DAG)const
 Expand fminnum/fmaxnum into fminnum_ieee/fmaxnum_ieee with quieted inputs.
 
SDValue expandFMINIMUM_FMAXIMUM (SDNode *N,SelectionDAG &DAG)const
 Expand fminimum/fmaximum into multiple comparison with selects.
 
SDValue expandFMINIMUMNUM_FMAXIMUMNUM (SDNode *N,SelectionDAG &DAG)const
 Expand fminimumnum/fmaximumnum into multiple comparison with selects.
 
SDValue expandFP_TO_INT_SAT (SDNode *N,SelectionDAG &DAG)const
 Expand FP_TO_[US]INT_SAT into FP_TO_[US]INT and selects or min/max.
 
SDValue expandRoundInexactToOdd (EVT ResultVT,SDValueOp,constSDLoc &DL,SelectionDAG &DAG)const
 Truncate Op to ResultVT.
 
SDValue expandFP_ROUND (SDNode *Node,SelectionDAG &DAG)const
 Expand round(fp) to fp conversion.
 
SDValue expandIS_FPCLASS (EVT ResultVT,SDValueOp,FPClassTestTest,SDNodeFlags Flags,constSDLoc &DL,SelectionDAG &DAG)const
 Expand check for floating point class.
 
SDValue expandCTPOP (SDNode *N,SelectionDAG &DAG)const
 Expand CTPOP nodes.
 
SDValue expandVPCTPOP (SDNode *N,SelectionDAG &DAG)const
 Expand VP_CTPOP nodes.
 
SDValue expandCTLZ (SDNode *N,SelectionDAG &DAG)const
 Expand CTLZ/CTLZ_ZERO_UNDEF nodes.
 
SDValue expandVPCTLZ (SDNode *N,SelectionDAG &DAG)const
 Expand VP_CTLZ/VP_CTLZ_ZERO_UNDEF nodes.
 
SDValue CTTZTableLookup (SDNode *N,SelectionDAG &DAG,constSDLoc &DL,EVT VT,SDValueOp,unsigned NumBitsPerElt)const
 Expand CTTZ via Table Lookup.
 
SDValue expandCTTZ (SDNode *N,SelectionDAG &DAG)const
 Expand CTTZ/CTTZ_ZERO_UNDEF nodes.
 
SDValue expandVPCTTZ (SDNode *N,SelectionDAG &DAG)const
 Expand VP_CTTZ/VP_CTTZ_ZERO_UNDEF nodes.
 
SDValue expandVPCTTZElements (SDNode *N,SelectionDAG &DAG)const
 Expand VP_CTTZ_ELTS/VP_CTTZ_ELTS_ZERO_UNDEF nodes.
 
SDValue expandVectorFindLastActive (SDNode *N,SelectionDAG &DAG)const
 Expand VECTOR_FIND_LAST_ACTIVE nodes.
 
SDValue expandABS (SDNode *N, SelectionDAG &DAG, bool IsNegative=false) const
 Expand ABS nodes.
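 One common branch-free form of this expansion for a scalar i32 (assuming an arithmetic right shift on signed int): with M = X >> 31, abs(X) == (X + M) ^ M. Illustrative sketch:
 
     #include <cassert>
     #include <cstdint>
 
     // M is 0 for non-negative X and -1 for negative X, so (X + M) ^ M
     // leaves non-negative values alone and negates negative ones.
     int32_t absExpansion(int32_t X) {
       int32_t M = X >> 31;
       return (X + M) ^ M;
     }
 
     int main() {
       assert(absExpansion(5) == 5 && absExpansion(-5) == 5 && absExpansion(0) == 0);
       return 0;
     }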
 
SDValue expandABD (SDNode *N,SelectionDAG &DAG)const
 Expand ABDS/ABDU nodes.
 
SDValue expandAVG (SDNode *N,SelectionDAG &DAG)const
 Expand vector/scalar AVGCEILS/AVGCEILU/AVGFLOORS/AVGFLOORU nodes.
 
SDValue expandBSWAP (SDNode *N,SelectionDAG &DAG)const
 Expand BSWAP nodes.
 
SDValue expandVPBSWAP (SDNode *N,SelectionDAG &DAG)const
 Expand VP_BSWAP nodes.
 
SDValue expandBITREVERSE (SDNode *N,SelectionDAG &DAG)const
 Expand BITREVERSE nodes.
 
SDValue expandVPBITREVERSE (SDNode *N,SelectionDAG &DAG)const
 Expand VP_BITREVERSE nodes.
 
std::pair< SDValue, SDValue > scalarizeVectorLoad (LoadSDNode *LD, SelectionDAG &DAG) const
 Turn load of vector type into a load of the individual elements.
 
SDValue scalarizeVectorStore (StoreSDNode *ST,SelectionDAG &DAG)const
 
std::pair< SDValue, SDValue > expandUnalignedLoad (LoadSDNode *LD, SelectionDAG &DAG) const
 Expands an unaligned load to 2 half-size loads for an integer, and possibly more for vectors.
 
SDValue expandUnalignedStore (StoreSDNode *ST,SelectionDAG &DAG)const
 Expands an unaligned store to 2 half-size stores for integer values, and possibly more for vectors.
 
SDValue IncrementMemoryAddress (SDValueAddr,SDValue Mask,constSDLoc &DL,EVT DataVT,SelectionDAG &DAG,bool IsCompressedMemory)const
 Increments memory addressAddr according to the type of the valueDataVT that should be stored.
 
SDValue getVectorElementPointer (SelectionDAG &DAG,SDValue VecPtr,EVT VecVT,SDValueIndex)const
 Get a pointer to vector elementIdx located in memory for a vector of typeVecVT starting at a base address ofVecPtr.
 
SDValue getVectorSubVecPointer (SelectionDAG &DAG,SDValue VecPtr,EVT VecVT,EVT SubVecVT,SDValueIndex)const
 Get a pointer to a sub-vector of typeSubVecVT at indexIdx located in memory for a vector of typeVecVT starting at a base address ofVecPtr.
 
SDValue expandIntMINMAX (SDNode *Node,SelectionDAG &DAG)const
 Method for building the DAG expansion ofISD::[US][MIN|MAX].
 
SDValue expandAddSubSat (SDNode *Node, SelectionDAG &DAG) const
 Method for building the DAG expansion of ISD::[US][ADD|SUB]SAT.
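 For the unsigned-add case, the usual expansion is a wrapping add followed by a select of all-ones on overflow; a minimal illustrative sketch:
 
     #include <cassert>
     #include <cstdint>
 
     // Wrapping add, then force all-ones if the sum wrapped around.
     uint32_t uaddSat(uint32_t A, uint32_t B) {
       uint32_t Sum = A + B;
       return Sum < A ? UINT32_MAX : Sum;
     }
 
     int main() {
       assert(uaddSat(1, 2) == 3);
       assert(uaddSat(0xffffffffu, 5) == 0xffffffffu);
       return 0;
     }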
 
SDValue expandCMP (SDNode *Node,SelectionDAG &DAG)const
 Method for building the DAG expansion ofISD::[US]CMP.
 
SDValue expandShlSat (SDNode *Node,SelectionDAG &DAG)const
 Method for building the DAG expansion ofISD::[US]SHLSAT.
 
SDValue expandFixedPointMul (SDNode *Node,SelectionDAG &DAG)const
 Method for building the DAG expansion ofISD::[U|S]MULFIX[SAT].
 
SDValue expandFixedPointDiv (unsigned Opcode,constSDLoc &dl,SDValueLHS,SDValueRHS,unsigned Scale,SelectionDAG &DAG)const
 Method for building the DAG expansion ofISD::[US]DIVFIX[SAT].
 
void expandUADDSUBO (SDNode *Node,SDValue &Result,SDValue &Overflow,SelectionDAG &DAG)const
 Method for building the DAG expansion of ISD::U(ADD|SUB)O.
 
void expandSADDSUBO (SDNode *Node,SDValue &Result,SDValue &Overflow,SelectionDAG &DAG)const
 Method for building the DAG expansion of ISD::S(ADD|SUB)O.
 
bool expandMULO (SDNode *Node,SDValue &Result,SDValue &Overflow,SelectionDAG &DAG)const
 Method for building the DAG expansion ofISD::[US]MULO.
 
void forceExpandMultiply (SelectionDAG &DAG,constSDLoc &dl,boolSigned,SDValue &Lo,SDValue &Hi,SDValueLHS,SDValueRHS,SDValue HiLHS=SDValue(),SDValue HiRHS=SDValue())const
 Calculate the product twice the width of LHS and RHS.
 
void forceExpandWideMUL (SelectionDAG &DAG,constSDLoc &dl,boolSigned,constSDValueLHS,constSDValueRHS,SDValue &Lo,SDValue &Hi)const
 Calculate full product of LHS and RHS either via a libcall or through brute force expansion of the multiplication.
 
SDValue expandVecReduce (SDNode *Node,SelectionDAG &DAG)const
 Expand a VECREDUCE_* into an explicit calculation.
 
SDValue expandVecReduceSeq (SDNode *Node,SelectionDAG &DAG)const
 Expand a VECREDUCE_SEQ_* into an explicit ordered calculation.
 
bool expandREM (SDNode *Node,SDValue &Result,SelectionDAG &DAG)const
 Expand an SREM or UREM using SDIV/UDIV or SDIVREM/UDIVREM, if legal.
 
SDValue expandVectorSplice (SDNode *Node,SelectionDAG &DAG)const
 Method for building the DAG expansion ofISD::VECTOR_SPLICE.
 
SDValue expandVECTOR_COMPRESS (SDNode *Node,SelectionDAG &DAG)const
 Expand a vector VECTOR_COMPRESS into a sequence of extract element, store temporarily, advance store position, before re-loading the final vector.
 
bool LegalizeSetCCCondCode (SelectionDAG &DAG,EVT VT,SDValue &LHS,SDValue &RHS,SDValue &CC,SDValue Mask,SDValue EVL,bool &NeedInvert,constSDLoc &dl,SDValue &Chain,bool IsSignaling=false)const
 Legalize a SETCC or VP_SETCC with given LHS and RHS and condition code CC on the current target.
 
virtual MachineBasicBlock * EmitInstrWithCustomInserter (MachineInstr &MI, MachineBasicBlock *MBB) const
 This method should be implemented by targets that mark instructions with the 'usesCustomInserter' flag.
 
virtual void AdjustInstrPostInstrSelection (MachineInstr &MI,SDNode *Node)const
 This method should be implemented by targets that mark instructions with the 'hasPostISelHook' flag.
 
virtualbool useLoadStackGuardNode (constModule &M)const
 If this function returns true,SelectionDAGBuilder emits a LOAD_STACK_GUARD node when it is lowering Intrinsic::stackprotector.
 
virtualSDValue emitStackGuardXorFP (SelectionDAG &DAG,SDValue Val,constSDLoc &DL)const
 
virtualSDValue LowerToTLSEmulatedModel (constGlobalAddressSDNode *GA,SelectionDAG &DAG)const
 Lower TLS global addressSDNode for target independent emulated TLS model.
 
virtualSDValue expandIndirectJTBranch (constSDLoc &dl,SDValueValue,SDValueAddr, int JTI,SelectionDAG &DAG)const
 Expands target specific indirect branch for the case ofJumpTable expansion.
 
SDValue lowerCmpEqZeroToCtlzSrl (SDValueOp,SelectionDAG &DAG)const
 
virtualbool isXAndYEqZeroPreferableToXAndYEqY (ISD::CondCode,EVT)const
 
SDValue expandVectorNaryOpBySplitting (SDNode *Node,SelectionDAG &DAG)const
 
- Public Member Functions inherited from llvm::TargetLoweringBase
virtual void markLibCallAttributes (MachineFunction *MF, unsigned CC, ArgListTy &Args) const
 
 TargetLoweringBase (const TargetMachine &TM)
 NOTE: The TargetMachine owns TLOF.
 
 TargetLoweringBase (const TargetLoweringBase &)=delete
 
TargetLoweringBase & operator= (const TargetLoweringBase &)=delete
 
virtual ~TargetLoweringBase ()=default
 
bool isStrictFPEnabled () const
 Return true if the target supports strict float operations.
 
const TargetMachine & getTargetMachine () const
 
virtualbool useSoftFloat ()const
 
virtualMVT getPointerTy (constDataLayout &DL,uint32_t AS=0)const
 Return the pointer type for the given address space, defaults to the pointer type from the data layout.
 
virtualMVT getPointerMemTy (constDataLayout &DL,uint32_t AS=0)const
 Return the in-memory pointer type for the given address space, defaults to the pointer type from the data layout.
 
MVT getFrameIndexTy (constDataLayout &DL)const
 Return the type for frame index, which is determined by the alloca address space specified through the data layout.
 
MVT getProgramPointerTy (constDataLayout &DL)const
 Return the type for code pointers, which is determined by the program address space specified through the data layout.
 
virtualMVT getFenceOperandTy (constDataLayout &DL)const
 Return the type for operands of fence.
 
virtualMVT getScalarShiftAmountTy (constDataLayout &,EVT)const
 Return the type to use for a scalar shift opcode, given the shifted amount type.
 
EVT getShiftAmountTy (EVT LHSTy,constDataLayout &DL)const
 Returns the type for the shift amount of a shift opcode.
 
virtualLLVM_READONLYLLT getPreferredShiftAmountTy (LLT ShiftValueTy)const
 Return the preferred type to use for a shift opcode, given the shifted amount type isShiftValueTy.
 
virtualMVT getVectorIdxTy (constDataLayout &DL)const
 Returns the type to be used for the index operand of:ISD::INSERT_VECTOR_ELT,ISD::EXTRACT_VECTOR_ELT,ISD::INSERT_SUBVECTOR, andISD::EXTRACT_SUBVECTOR.
 
virtualMVT getVPExplicitVectorLengthTy ()const
 Returns the type to be used for the EVL/AVL operand of VP nodes: ISD::VP_ADD, ISD::VP_SUB, etc.
 
virtualMachineMemOperand::Flags getTargetMMOFlags (constInstruction &I)const
 This callback is used to inspect load/store instructions and add target-specificMachineMemOperand flags to them.
 
virtualMachineMemOperand::Flags getTargetMMOFlags (constMemSDNode &Node)const
 This callback is used to inspect load/storeSDNode.
 
MachineMemOperand::Flags getLoadMemOperandFlags (constLoadInst &LI,constDataLayout &DL,AssumptionCache *AC=nullptr,constTargetLibraryInfo *LibInfo=nullptr)const
 
MachineMemOperand::Flags getStoreMemOperandFlags (constStoreInst &SI,constDataLayout &DL)const
 
MachineMemOperand::Flags getAtomicMemOperandFlags (constInstruction &AI,constDataLayout &DL)const
 
virtualbool isSelectSupported (SelectSupportKind)const
 
virtualbool shouldExpandPartialReductionIntrinsic (constIntrinsicInst *I)const
 Return true if the @llvm.experimental.vector.partial.reduce.* intrinsic should be expanded using generic code in SelectionDAGBuilder.
 
virtualbool shouldExpandGetActiveLaneMask (EVT VT,EVT OpVT)const
 Return true if the @llvm.get.active.lane.mask intrinsic should be expanded using generic code inSelectionDAGBuilder.
 
virtualbool shouldExpandGetVectorLength (EVT CountVT,unsigned VF,bool IsScalable)const
 
virtualbool shouldExpandCttzElements (EVT VT)const
 Return true if the @llvm.experimental.cttz.elts intrinsic should be expanded using generic code inSelectionDAGBuilder.
 
unsigned getBitWidthForCttzElements (Type *RetTy,ElementCount EC,bool ZeroIsPoison,constConstantRange *VScaleRange)const
 Return the minimum number of bits required to hold the maximum possible number of trailing zero vector elements.
 
virtualbool shouldExpandVectorMatch (EVT VT,unsigned SearchSize)const
 Return true if the @llvm.experimental.vector.match intrinsic should be expanded for vector type ‘VT’ and search size ‘SearchSize’ using generic code inSelectionDAGBuilder.
 
virtualbool shouldReassociateReduction (unsigned RedOpc,EVT VT)const
 
virtualbool reduceSelectOfFPConstantLoads (EVT CmpOpVT)const
 Return true if it is profitable to convert a select of FP constants into a constant pool load whose address depends on the select condition.
 
bool hasMultipleConditionRegisters ()const
 Return true if multiple condition registers are available.
 
bool hasExtractBitsInsn ()const
 Return true if the target has BitExtract instructions.
 
virtualTargetLoweringBase::LegalizeTypeAction getPreferredVectorAction (MVT VT)const
 Return the preferred vector type legalization action.
 
virtualbool softPromoteHalfType ()const
 
virtualbool useFPRegsForHalfType ()const
 
virtualbool shouldExpandBuildVectorWithShuffles (EVT,unsigned DefinedValues)const
 
virtualbool isIntDivCheap (EVT VT,AttributeList Attr)const
 Return true if integer divide is usually cheaper than a sequence of several shifts, adds, and multiplies for this target.
 
virtualbool hasStandaloneRem (EVT VT)const
 Return true if the target can handle a standalone remainder operation.
 
virtualbool isFsqrtCheap (SDValueX,SelectionDAG &DAG)const
 Return true if SQRT(X) shouldn't be replaced with X*RSQRT(X).
 
int getRecipEstimateSqrtEnabled (EVT VT,MachineFunction &MF)const
 Return a ReciprocalEstimate enum value for a square root of the given type based on the function's attributes.
 
int getRecipEstimateDivEnabled (EVT VT,MachineFunction &MF)const
 Return a ReciprocalEstimate enum value for a division of the given type based on the function's attributes.
 
int getSqrtRefinementSteps (EVT VT,MachineFunction &MF)const
 Return the refinement step count for a square root of the given type based on the function's attributes.
 
int getDivRefinementSteps (EVT VT,MachineFunction &MF)const
 Return the refinement step count for a division of the given type based on the function's attributes.
 
bool isSlowDivBypassed ()const
 Returns true if target has indicated at least one type should be bypassed.
 
const DenseMap< unsigned int, unsigned int > & getBypassSlowDivWidths () const
 Returns map of slow types for division or remainder with corresponding fast types.
 
virtualbool isVScaleKnownToBeAPowerOfTwo ()const
 Return true only if vscale must be a power of two.
 
bool isJumpExpensive ()const
 Return true if Flow Control is an expensive operation that should be avoided.
 
virtualCondMergingParams getJumpConditionMergingParams (Instruction::BinaryOps,constValue *,constValue *)const
 
bool isPredictableSelectExpensive ()const
 Return true if selects are only cheaper than branches if the branch is unlikely to be predicted right.
 
virtualbool fallBackToDAGISel (constInstruction &Inst)const
 
virtualbool isLoadBitCastBeneficial (EVT LoadVT,EVT BitcastVT,constSelectionDAG &DAG,constMachineMemOperand &MMO)const
 Return true if the following transform is beneficial: fold (conv (load x)) -> (load (conv*)x) On architectures that don't natively support some vector loads efficiently, casting the load to a smaller vector of larger types and loading is more efficient, however, this can be undone by optimizations in dag combiner.
 
virtualbool isStoreBitCastBeneficial (EVT StoreVT,EVT BitcastVT,constSelectionDAG &DAG,constMachineMemOperand &MMO)const
 Return true if the following transform is beneficial: (store (y (conv x)), y*)) -> (store x, (x*))
 
virtualbool storeOfVectorConstantIsCheap (bool IsZero,EVT MemVT,unsigned NumElem,unsigned AddrSpace)const
 Return true if it is expected to be cheaper to do a store of vector constant with the given size and type for the address space than to store the individual scalar element constants.
 
virtualbool mergeStoresAfterLegalization (EVT MemVT)const
 Allow store merging for the specified type after legalization in addition to before legalization.
 
virtualbool canMergeStoresTo (unsigned AS,EVT MemVT,constMachineFunction &MF)const
 Returns true if it's reasonable to merge stores to MemVT size.
 
virtualbool isCheapToSpeculateCttz (Type *Ty)const
 Return true if it is cheap to speculate a call to intrinsic cttz.
 
virtualbool isCheapToSpeculateCtlz (Type *Ty)const
 Return true if it is cheap to speculate a call to intrinsic ctlz.
 
virtualbool isCtlzFast ()const
 Return true if ctlz instruction is fast.
 
virtualbool isCtpopFast (EVT VT)const
 Return true if ctpop instruction is fast.
 
virtualunsigned getCustomCtpopCost (EVT VT,ISD::CondCodeCond)const
 Return the maximum number of "x & (x - 1)" operations that can be done instead of deferring to a custom CTPOP.
 
virtualbool isEqualityCmpFoldedWithSignedCmp ()const
 Return true if instruction generated for equality comparison is folded with instruction generated for signed comparison.
 
virtualbool preferZeroCompareBranch ()const
 Return true if the heuristic to prefer icmp eq zero should be used in code gen prepare.
 
virtualbool isMultiStoresCheaperThanBitsMerge (EVT LTy,EVT HTy)const
 Return true if it is cheaper to split the store of a merged int val from a pair of smaller values into multiple stores.
 
virtualbool isMaskAndCmp0FoldingBeneficial (constInstruction &AndI)const
 Return if the target supports combining a chain like:
 
virtualbool areTwoSDNodeTargetMMOFlagsMergeable (constMemSDNode &NodeX,constMemSDNode &NodeY)const
 Return true if it is valid to merge the TargetMMOFlags in two SDNodes.
 
virtualbool convertSetCCLogicToBitwiseLogic (EVT VT)const
 Use bitwise logic to make pairs of compares more efficient.
 
virtualMVT hasFastEqualityCompare (unsigned NumBits)const
 Return the preferred operand type if the target has a quick way to compare integer values of the given size.
 
virtualbool hasAndNotCompare (SDValueY)const
 Return true if the target should transform: (X & Y) == Y —> (~X & Y) == 0 (X & Y) != Y —> (~X & Y) != 0.
 
virtualbool hasAndNot (SDValueX)const
 Return true if the target has a bitwise and-not operation: X = ~A & B This can be used to simplify select or other instructions.
 
virtualbool hasBitTest (SDValueX,SDValueY)const
 Return true if the target has a bit-test instruction: (X & (1 << Y)) ==/!= 0 This knowledge can be used to prevent breaking the pattern, or creating it if it could be recognized.
 
virtualbool shouldFoldMaskToVariableShiftPair (SDValueX)const
 There are two ways to clear extreme bits (either low or high): Mask: x & (-1 << y) (the instcombine canonical form) Shifts: x >> y << y Return true if the variant with 2 variable shifts is preferred.
 
virtualbool shouldFoldConstantShiftPairToMask (constSDNode *N,CombineLevel Level)const
 Return true if it is profitable to fold a pair of shifts into a mask.
 
virtualbool shouldTransformSignedTruncationCheck (EVT XVT,unsigned KeptBits)const
 Should we transform the IR-optimal check for whether given truncation down into KeptBits would be truncating or not: (add x, (1 << (KeptBits-1))) srccond (1 << KeptBits) Into its more traditional form: ((x << C) a>> C) dstcond x Return true if we should transform.
 
virtualbool shouldProduceAndByConstByHoistingConstFromShiftsLHSOfAnd (SDValueX,ConstantSDNode *XC,ConstantSDNode *CC,SDValueY,unsigned OldShiftOpcode,unsigned NewShiftOpcode,SelectionDAG &DAG)const
 Given the pattern (X & (C l>>/<< Y)) ==/!= 0 return true if it should be transformed into: ((X <</l>> Y) & C) ==/!= 0 WARNING: if 'X' is a constant, the fold may deadlock! FIXME: we could avoid passing XC, but we can't useisConstOrConstSplat() here because it can end up being not linked in.
 
virtualbool optimizeFMulOrFDivAsShiftAddBitcast (SDNode *N,SDValue FPConst,SDValue IntPow2)const
 
virtualunsigned preferedOpcodeForCmpEqPiecesOfOperand (EVT VT,unsigned ShiftOpc,bool MayTransformRotate,constAPInt &ShiftOrRotateAmt,const std::optional<APInt > &AndMask)const
 
virtualbool preferIncOfAddToSubOfNot (EVT VT)const
 These two forms are equivalent: sub y, (xor x, -1) add (add x, 1), y The variant with two add's is IR-canonical.
 
virtualbool preferABDSToABSWithNSW (EVT VT)const
 
virtualbool preferScalarizeSplat (SDNode *N)const
 
virtualbool preferSextInRegOfTruncate (EVT TruncVT,EVT VT,EVT ExtVT)const
 
bool enableExtLdPromotion ()const
 Return true if the target wants to use the optimization that turns ext(promotableInst1(...(promotableInstN(load)))) into promotedInst1(...(promotedInstN(ext(load)))).
 
virtualbool canCombineStoreAndExtract (Type *VectorTy,Value *Idx,unsigned &Cost)const
 Return true if the target can combine store(extractelement VectorTy,Idx).
 
virtualbool shallExtractConstSplatVectorElementToStore (Type *VectorTy,unsigned ElemSizeInBits,unsigned &Index)const
 Return true if the target shall perform extract vector element and store given that the vector is known to be splat of constant.
 
virtualbool shouldSplatInsEltVarIndex (EVT)const
 Return true if inserting a scalar into a variable element of an undef vector is more efficiently handled by splatting the scalar instead.
 
virtualbool enableAggressiveFMAFusion (EVT VT)const
 Return true if target always benefits from combining into FMA for a given value type.
 
virtualbool enableAggressiveFMAFusion (LLT Ty)const
 Return true if target always benefits from combining into FMA for a given value type.
 
virtualEVT getSetCCResultType (constDataLayout &DL,LLVMContext &Context,EVT VT)const
 Return the ValueType of the result of SETCC operations.
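
 For example, a hypothetical 32-bit target whose vector compares produce per-element i1 masks might override it as follows (a sketch, not a definitive implementation):

  EVT MyTargetLowering::getSetCCResultType(const DataLayout &DL,
                                           LLVMContext &Context, EVT VT) const {
    if (VT.isVector())                    // mask type: one i1 per element
      return EVT::getVectorVT(Context, MVT::i1, VT.getVectorElementCount());
    return MVT::i32;                      // scalar compares live in a GPR
  }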
 
virtualMVT::SimpleValueType getCmpLibcallReturnType ()const
 Return the ValueType for comparison libcalls.
 
BooleanContent getBooleanContents (bool isVec,bool isFloat)const
 For targets without i1 registers, this gives the nature of the high-bits of boolean values held in types wider than i1.
 
BooleanContent getBooleanContents (EVTType)const
 
SDValue promoteTargetBoolean (SelectionDAG &DAG,SDValueBool,EVT ValVT)const
 Promote the given target boolean to a target boolean of the given type.
 
Sched::Preference getSchedulingPreference ()const
 Return target scheduling preference.
 
virtualSched::Preference getSchedulingPreference (SDNode *)const
 Some scheduler, e.g. hybrid, can switch to different scheduling heuristics for different nodes.
 
virtual const TargetRegisterClass * getRegClassFor (MVT VT, bool isDivergent=false) const
 Return the register class that should be used for the specified value type.
 
virtualbool requiresUniformRegister (MachineFunction &MF,constValue *)const
 Allows target to decide about the register class of the specific value that is live outside the defining block.
 
virtual const TargetRegisterClass * getRepRegClassFor (MVT VT) const
 Return the 'representative' register class for the specified value type.
 
virtualuint8_t getRepRegClassCostFor (MVT VT)const
 Return the cost of the 'representative' register class for the specified value type.
 
virtualShiftLegalizationStrategy preferredShiftLegalizationStrategy (SelectionDAG &DAG,SDNode *N,unsigned ExpansionFactor)const
 
bool isTypeLegal (EVT VT)const
 Return true if the target has native support for the specified value type.
 
const ValueTypeActionImpl & getValueTypeActions () const
 
LegalizeKind getTypeConversion (LLVMContext &Context,EVT VT)const
 Return pair that represents the legalization kind (first) that needs to happen to EVT (second) in order to type-legalize it.
 
LegalizeTypeAction getTypeAction (LLVMContext &Context,EVT VT)const
 Return how we should legalize values of this type, either it is already legal (return 'Legal') or we need to promote it to a larger type (return 'Promote'), or we need to expand it into multiple registers of smaller integer type (return 'Expand').
 
LegalizeTypeAction getTypeAction (MVT VT)const
 
virtualEVT getTypeToTransformTo (LLVMContext &Context,EVT VT)const
 For types supported by the target, this is an identity function.
 
EVT getTypeToExpandTo (LLVMContext &Context,EVT VT)const
 For types supported by the target, this is an identity function.
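
 A small sketch of how these type-legalization queries are typically combined; willBeWidened is a hypothetical helper, not part of the API:

  // Returns true if an illegal vector type will be legalized by widening.
  static bool willBeWidened(const TargetLoweringBase &TLI, LLVMContext &Ctx,
                            EVT VT) {
    if (TLI.isTypeLegal(VT))
      return false;
    return TLI.getTypeAction(Ctx, VT) == TargetLoweringBase::TypeWidenVector &&
           TLI.getTypeToTransformTo(Ctx, VT).isVector();
  }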
 
unsigned getVectorTypeBreakdown (LLVMContext &Context,EVT VT,EVT &IntermediateVT,unsigned &NumIntermediates,MVT &RegisterVT)const
 Vector types are broken down into some number of legal first class types.
 
virtualunsigned getVectorTypeBreakdownForCallingConv (LLVMContext &Context,CallingConv::IDCC,EVT VT,EVT &IntermediateVT,unsigned &NumIntermediates,MVT &RegisterVT)const
 Certain targets such as MIPS require that some types such as vectors are always broken down into scalars in some contexts.
 
virtualbool getTgtMemIntrinsic (IntrinsicInfo &,constCallInst &,MachineFunction &,unsigned)const
 Given an intrinsic, checks if on the target the intrinsic will need to map to a MemIntrinsicNode (touches memory).
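
 A hedged sketch of an override; the intrinsic ID is hypothetical, and the IntrinsicInfo fields shown are the commonly populated ones:

  bool MyTargetLowering::getTgtMemIntrinsic(IntrinsicInfo &Info,
                                            const CallInst &I,
                                            MachineFunction &MF,
                                            unsigned Intrinsic) const {
    switch (Intrinsic) {
    default:
      return false;
    case Intrinsic::mytarget_load_word: {   // hypothetical intrinsic
      Info.opc = ISD::INTRINSIC_W_CHAIN;
      Info.memVT = MVT::i32;
      Info.ptrVal = I.getArgOperand(0);     // the pointer the intrinsic reads
      Info.offset = 0;
      Info.align = Align(4);
      Info.flags = MachineMemOperand::MOLoad;
      return true;
    }
    }
  }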
 
virtualbool isFPImmLegal (constAPFloat &,EVT,bool ForCodeSize=false)const
 Returns true if the target can instruction select the specified FP immediate natively.
 
virtualbool isShuffleMaskLegal (ArrayRef< int >,EVT)const
 Targets can use this to indicate that they only support some VECTOR_SHUFFLE operations, those with specific masks.
 
virtualbool canOpTrap (unsignedOp,EVT VT)const
 Returns true if the operation can trap for the value type.
 
virtualbool isVectorClearMaskLegal (ArrayRef< int >,EVT)const
 Similar to isShuffleMaskLegal.
 
virtualLegalizeAction getCustomOperationAction (SDNode &Op)const
 How to legalize this custom operation?
 
LegalizeAction getOperationAction (unsignedOp,EVT VT)const
 Return how this operation should be treated: either it is legal, needs to be promoted to a larger size, needs to be expanded to some other code sequence, or the target has a custom expander for it.
 
virtualbool isSupportedFixedPointOperation (unsignedOp,EVT VT,unsigned Scale)const
 Custom method defined by each target to indicate if an operation which may require a scale is supported natively by the target.
 
LegalizeAction getFixedPointOperationAction (unsignedOp,EVT VT,unsigned Scale)const
 Some fixed point operations may be natively supported by the target but only for specific scales.
 
LegalizeAction getStrictFPOperationAction (unsignedOp,EVT VT)const
 
bool isOperationLegalOrCustom (unsignedOp,EVT VT,bool LegalOnly=false)const
 Return true if the specified operation is legal on this target or can be made legal with custom lowering.
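
 A minimal sketch of the usual query pattern in a target DAG combine (tryFormSMax is a hypothetical helper; the matching logic is elided):

  static SDValue tryFormSMax(SDNode *N, SelectionDAG &DAG,
                             const TargetLowering &TLI) {
    EVT VT = N->getValueType(0);
    // Only form ISD::SMAX if the target can select or custom-lower it;
    // otherwise the later expansion would likely be worse than the input.
    if (!TLI.isOperationLegalOrCustom(ISD::SMAX, VT))
      return SDValue();
    // ... match select(setcc(a, b, setgt), a, b) and build ISD::SMAX ...
    return SDValue();
  }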
 
bool isOperationLegalOrPromote (unsignedOp,EVT VT,bool LegalOnly=false)const
 Return true if the specified operation is legal on this target or can be made legal using promotion.
 
bool isOperationLegalOrCustomOrPromote (unsignedOp,EVT VT,bool LegalOnly=false)const
 Return true if the specified operation is legal on this target or can be made legal with custom lowering or using promotion.
 
bool isOperationCustom (unsignedOp,EVT VT)const
 Return true if the operation uses custom lowering, regardless of whether the type is legal or not.
 
virtualbool areJTsAllowed (constFunction *Fn)const
 Return true if lowering to a jump table is allowed.
 
bool rangeFitsInWord (constAPInt &Low,constAPInt &High,constDataLayout &DL)const
 Check whether the range [Low,High] fits in a machine word.
 
virtualbool isSuitableForJumpTable (constSwitchInst *SI,uint64_t NumCases,uint64_tRange,ProfileSummaryInfo *PSI,BlockFrequencyInfo *BFI)const
 Return true if lowering to a jump table is suitable for a set of case clusters which may contain NumCases cases, Range range of values.
 
virtualMVT getPreferredSwitchConditionType (LLVMContext &Context,EVT ConditionVT)const
 Returns preferred type for switch condition.
 
bool isSuitableForBitTests (unsigned NumDests,unsigned NumCmps,constAPInt &Low,constAPInt &High,constDataLayout &DL)const
 Return true if lowering to a bit test is suitable for a set of case clusters which contains NumDests unique destinations, Low and High as its lowest and highest case values, and expects NumCmps case value comparisons.
 
bool isOperationExpand (unsignedOp,EVT VT)const
 Return true if the specified operation is illegal on this target or unlikely to be made legal with custom lowering.
 
bool isOperationLegal (unsignedOp,EVT VT)const
 Return true if the specified operation is legal on this target.
 
LegalizeAction getLoadExtAction (unsigned ExtType,EVT ValVT,EVT MemVT)const
 Return how this load with extension should be treated: either it is legal, needs to be promoted to a larger size, needs to be expanded to some other code sequence, or the target has a custom expander for it.
 
bool isLoadExtLegal (unsigned ExtType,EVT ValVT,EVT MemVT)const
 Return true if the specified load with extension is legal on this target.
 
bool isLoadExtLegalOrCustom (unsigned ExtType,EVT ValVT,EVT MemVT)const
 Return true if the specified load with extension is legal or custom on this target.
 
LegalizeAction getAtomicLoadExtAction (unsigned ExtType,EVT ValVT,EVT MemVT)const
 Same as getLoadExtAction, but for atomic loads.
 
bool isAtomicLoadExtLegal (unsigned ExtType,EVT ValVT,EVT MemVT)const
 Return true if the specified atomic load with extension is legal on this target.
 
LegalizeAction getTruncStoreAction (EVT ValVT,EVT MemVT)const
 Return how this store with truncation should be treated: either it is legal, needs to be promoted to a larger size, needs to be expanded to some other code sequence, or the target has a custom expander for it.
 
bool isTruncStoreLegal (EVT ValVT,EVT MemVT)const
 Return true if the specified store with truncation is legal on this target.
 
bool isTruncStoreLegalOrCustom (EVT ValVT,EVT MemVT)const
 Return true if the specified store with truncation is legal or has a custom lowering on this target.
 
virtualbool canCombineTruncStore (EVT ValVT,EVT MemVT,bool LegalOnly)const
 
LegalizeAction getIndexedLoadAction (unsigned IdxMode,MVT VT)const
 Return how the indexed load should be treated: either it is legal, needs to be promoted to a larger size, needs to be expanded to some other code sequence, or the target has a custom expander for it.
 
bool isIndexedLoadLegal (unsigned IdxMode,EVT VT)const
 Return true if the specified indexed load is legal on this target.
 
LegalizeAction getIndexedStoreAction (unsigned IdxMode,MVT VT)const
 Return how the indexed store should be treated: either it is legal, needs to be promoted to a larger size, needs to be expanded to some other code sequence, or the target has a custom expander for it.
 
bool isIndexedStoreLegal (unsigned IdxMode, EVT VT) const
 Return true if the specified indexed store is legal on this target.
 
LegalizeAction getIndexedMaskedLoadAction (unsigned IdxMode, MVT VT) const
 Return how the indexed masked load should be treated: either it is legal, needs to be promoted to a larger size, needs to be expanded to some other code sequence, or the target has a custom expander for it.
 
bool isIndexedMaskedLoadLegal (unsigned IdxMode, MVT VT) const
 Return true if the specified indexed masked load is legal on this target.
 
LegalizeAction getIndexedMaskedStoreAction (unsigned IdxMode, MVT VT) const
 Return how the indexed masked store should be treated: either it is legal, needs to be promoted to a larger size, needs to be expanded to some other code sequence, or the target has a custom expander for it.
 
bool isIndexedMaskedStoreLegal (unsigned IdxMode, MVT VT) const
 Return true if the specified indexed masked store is legal on this target.
 
virtualbool shouldExtendGSIndex (EVT VT,EVT &EltTy)const
 Returns true if the index type for a masked gather/scatter requires extending.
 
virtualbool shouldRemoveExtendFromGSIndex (SDValue Extend,EVT DataVT)const
 
virtualbool isLegalScaleForGatherScatter (uint64_t Scale,uint64_t ElemSize)const
 
LegalizeAction getCondCodeAction (ISD::CondCodeCC,MVT VT)const
 Return how the condition code should be treated: either it is legal, needs to be expanded to some other code sequence, or the target has a custom expander for it.
 
bool isCondCodeLegal (ISD::CondCodeCC,MVT VT)const
 Return true if the specified condition code is legal for a comparison of the specified types on this target.
 
bool isCondCodeLegalOrCustom (ISD::CondCodeCC,MVT VT)const
 Return true if the specified condition code is legal or custom for a comparison of the specified types on this target.
 
MVT getTypeToPromoteTo (unsignedOp,MVT VT)const
 If the action for this operation is to promote, this method returns the ValueType to promote to.
 
virtualEVT getAsmOperandValueType (constDataLayout &DL,Type *Ty,bool AllowUnknown=false)const
 
EVT getValueType (constDataLayout &DL,Type *Ty,bool AllowUnknown=false)const
 Return theEVT corresponding to this LLVM type.
 
EVT getMemValueType (constDataLayout &DL,Type *Ty,bool AllowUnknown=false)const
 
MVT getSimpleValueType (constDataLayout &DL,Type *Ty,bool AllowUnknown=false)const
 Return theMVT corresponding to this LLVM type. See getValueType.
 
virtualAlign getByValTypeAlignment (Type *Ty,constDataLayout &DL)const
 Returns the desired alignment for ByVal or InAlloca aggregate function arguments in the caller parameter area.
 
MVT getRegisterType (MVT VT)const
 Return the type of registers that this ValueType will eventually require.
 
MVT getRegisterType (LLVMContext &Context,EVT VT)const
 Return the type of registers that this ValueType will eventually require.
 
virtualunsigned getNumRegisters (LLVMContext &Context,EVT VT, std::optional<MVT > RegisterVT=std::nullopt)const
 Return the number of registers that this ValueType will eventually require.
 
virtualMVT getRegisterTypeForCallingConv (LLVMContext &Context,CallingConv::IDCC,EVT VT)const
 Certain combinations of ABIs, Targets and features require that types are legal for some operations and not for other operations.
 
virtualunsigned getNumRegistersForCallingConv (LLVMContext &Context,CallingConv::IDCC,EVT VT)const
 Certain targets require unusual breakdowns of certain types.
 
virtualAlign getABIAlignmentForCallingConv (Type *ArgTy,constDataLayout &DL)const
 Certain targets have context sensitive alignment requirements, where one type has the alignment requirement of another type.
 
virtualbool ShouldShrinkFPConstant (EVT)const
 If true, then instruction selection should seek to shrink the FP constant of the specified type to a smaller type in order to save space and / or reduce runtime.
 
virtualbool shouldReduceLoadWidth (SDNode *Load,ISD::LoadExtType ExtTy,EVT NewVT)const
 Return true if it is profitable to reduce a load to a smaller type.
 
virtualbool shouldRemoveRedundantExtend (SDValueOp)const
 Return true (the default) if it is profitable to remove a sext_inreg(x) where the sext is redundant, and use x directly.
 
bool isPaddedAtMostSignificantBitsWhenStored (EVT VT)const
 Indicates if any padding is guaranteed to go at the most significant bits when storing the type to memory and the type size isn't equal to the store size.
 
bool hasBigEndianPartOrdering (EVT VT,constDataLayout &DL)const
 When splitting a value of the specified type into parts, does the Lo or Hi part come first? This usually follows the endianness, except for ppcf128, where the Hi part always comes first.
 
bool hasTargetDAGCombine (ISD::NodeType NT)const
 If true, the target has custom DAG combine transformations that it can perform for the specified node.
 
unsigned getGatherAllAliasesMaxDepth ()const
 
virtualunsigned getVaListSizeInBits (constDataLayout &DL)const
 Returns the size of the platform's va_list object.
 
unsigned getMaxStoresPerMemset (bool OptSize)const
 Get maximum # of store operations permitted for llvm.memset.
 
unsigned getMaxStoresPerMemcpy (bool OptSize)const
 Get maximum # of store operations permitted for llvm.memcpy.
 
virtualunsigned getMaxGluedStoresPerMemcpy ()const
 Get maximum # of store operations to be glued together.
 
unsigned getMaxExpandSizeMemcmp (bool OptSize)const
 Get maximum # of load operations permitted for memcmp.
 
unsigned getMaxStoresPerMemmove (bool OptSize)const
 Get maximum # of store operations permitted for llvm.memmove.
 
virtualbool allowsMisalignedMemoryAccesses (EVT,unsigned AddrSpace=0,Align Alignment=Align(1),MachineMemOperand::Flags Flags=MachineMemOperand::MONone,unsigned *=nullptr)const
 Determine if the target supports unaligned memory accesses.
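
 A hypothetical override for a target that tolerates misaligned scalar accesses but executes them slowly (a sketch only):

  bool MyTargetLowering::allowsMisalignedMemoryAccesses(
      EVT VT, unsigned AddrSpace, Align Alignment,
      MachineMemOperand::Flags Flags, unsigned *Fast) const {
    if (!VT.isSimple() || VT.isVector())
      return false;          // this sketch only allows plain scalars
    if (Fast)
      *Fast = 0;             // permitted, but not considered fast
    return true;
  }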
 
virtualbool allowsMisalignedMemoryAccesses (LLT,unsigned AddrSpace=0,Align Alignment=Align(1),MachineMemOperand::Flags Flags=MachineMemOperand::MONone,unsigned *=nullptr)const
 LLT handling variant.
 
bool allowsMemoryAccessForAlignment (LLVMContext &Context,constDataLayout &DL,EVT VT,unsigned AddrSpace=0,Align Alignment=Align(1),MachineMemOperand::Flags Flags=MachineMemOperand::MONone,unsigned *Fast=nullptr)const
 This function returns true if the memory access is aligned or if the target allows this specific unaligned memory access.
 
bool allowsMemoryAccessForAlignment (LLVMContext &Context,constDataLayout &DL,EVT VT,constMachineMemOperand &MMO,unsigned *Fast=nullptr)const
 Return true if the memory access of this type is aligned or if the target allows this specific unaligned access for the givenMachineMemOperand.
 
virtualbool allowsMemoryAccess (LLVMContext &Context,constDataLayout &DL,EVT VT,unsigned AddrSpace=0,Align Alignment=Align(1),MachineMemOperand::Flags Flags=MachineMemOperand::MONone,unsigned *Fast=nullptr)const
 Return true if the target supports a memory access of this type for the given address space and alignment.
 
bool allowsMemoryAccess (LLVMContext &Context,constDataLayout &DL,EVT VT,constMachineMemOperand &MMO,unsigned *Fast=nullptr)const
 Return true if the target supports a memory access of this type for the givenMachineMemOperand.
 
bool allowsMemoryAccess (LLVMContext &Context,constDataLayout &DL,LLT Ty,constMachineMemOperand &MMO,unsigned *Fast=nullptr)const
 LLT handling variant.
 
virtualEVT getOptimalMemOpType (constMemOp &Op,constAttributeList &)const
 Returns the target specific optimal type for load and store operations as a result of memset, memcpy, and memmove lowering.
 
virtualLLT getOptimalMemOpLLT (constMemOp &Op,constAttributeList &)const
 LLT returning variant.
 
virtualbool isSafeMemOpType (MVT)const
 Returns true if it's safe to use load / store of the specified type to expand memcpy / memset inline.
 
virtualunsigned getMinimumJumpTableEntries ()const
 Return lower limit for number of blocks in a jump table.
 
unsigned getMinimumJumpTableDensity (bool OptForSize)const
 Return lower limit of the density in a jump table.
 
unsigned getMaximumJumpTableSize ()const
 Return upper limit for number of entries in a jump table.
 
virtualbool isJumpTableRelative ()const
 
Register getStackPointerRegisterToSaveRestore ()const
 If a physical register, this specifies the register that llvm.savestack/llvm.restorestack should save and restore.
 
virtualRegister getExceptionPointerRegister (constConstant *PersonalityFn)const
 If a physical register, this returns the register that receives the exception address on entry to an EH pad.
 
virtualRegister getExceptionSelectorRegister (constConstant *PersonalityFn)const
 If a physical register, this returns the register that receives the exception typeid on entry to a landing pad.
 
virtualbool needsFixedCatchObjects ()const
 
Align getMinStackArgumentAlignment ()const
 Return the minimum stack alignment of an argument.
 
Align getMinFunctionAlignment ()const
 Return the minimum function alignment.
 
Align getPrefFunctionAlignment ()const
 Return the preferred function alignment.
 
virtualAlign getPrefLoopAlignment (MachineLoop *ML=nullptr)const
 Return the preferred loop alignment.
 
virtualunsigned getMaxPermittedBytesForAlignment (MachineBasicBlock *MBB)const
 Return the maximum amount of bytes allowed to be emitted when padding for alignment.
 
virtualbool alignLoopsWithOptSize ()const
 Should loops be aligned even when the function is marked OptSize (but not MinSize).
 
virtual Value * getIRStackGuard (IRBuilderBase &IRB) const
 If the target has a standard location for the stack protector guard, returns the address of that location.
 
virtual void insertSSPDeclarations (Module &M)const
 Inserts necessary declarations for SSP (stack protection) purpose.
 
virtual Value * getSDagStackGuard (const Module &M) const
 Return the variable that's previously inserted by insertSSPDeclarations, if any, otherwise return nullptr.
 
virtualbool useStackGuardXorFP ()const
 If this function returns true, stack protection checks should XOR the frame pointer (or whichever pointer is used to address locals) into the stack guard value before checking it.
 
virtual Function * getSSPStackGuardCheck (const Module &M) const
 If the target has a standard stack protection check function that performs validation and error handling, returns the function.
 
virtual Value * getSafeStackPointerLocation (IRBuilderBase &IRB) const
 Returns the target-specific address of the unsafe stack pointer.
 
virtualbool hasStackProbeSymbol (constMachineFunction &MF)const
 Returns the name of the symbol used to emit stack probes or the empty string if not applicable.
 
virtualbool hasInlineStackProbe (constMachineFunction &MF)const
 
virtualStringRef getStackProbeSymbolName (constMachineFunction &MF)const
 
virtualbool isFreeAddrSpaceCast (unsigned SrcAS,unsigned DestAS)const
 Returns true if a cast from SrcAS to DestAS is "cheap", such that e.g. we are happy to sink it into basic blocks.
 
virtualbool shouldAlignPointerArgs (CallInst *,unsigned &,Align &)const
 Return true if the pointer arguments to CI should be aligned by aligning the object whose address is being passed.
 
virtual void emitAtomicCmpXchgNoStoreLLBalance (IRBuilderBase &Builder)const
 
virtualbool shouldSignExtendTypeInLibCall (Type *Ty,bool IsSigned)const
 Returns true if arguments should be sign-extended in lib calls.
 
virtualbool shouldExtendTypeInLibCall (EVTType)const
 Returns true if arguments should be extended in lib calls.
 
virtualAtomicExpansionKind shouldExpandAtomicLoadInIR (LoadInst *LI)const
 Returns how the given (atomic) load should be expanded by the IR-level AtomicExpand pass.
 
virtualAtomicExpansionKind shouldCastAtomicLoadInIR (LoadInst *LI)const
 Returns how the given (atomic) load should be cast by the IR-level AtomicExpand pass.
 
virtualAtomicExpansionKind shouldExpandAtomicStoreInIR (StoreInst *SI)const
 Returns how the given (atomic) store should be expanded by the IR-level AtomicExpand pass into.
 
virtualAtomicExpansionKind shouldCastAtomicStoreInIR (StoreInst *SI)const
 Returns how the given (atomic) store should be cast by the IR-level AtomicExpand pass into.
 
virtualAtomicExpansionKind shouldExpandAtomicCmpXchgInIR (AtomicCmpXchgInst *AI)const
 Returns how the given atomic cmpxchg should be expanded by the IR-level AtomicExpand pass.
 
virtualAtomicExpansionKind shouldExpandAtomicRMWInIR (AtomicRMWInst *RMW)const
 Returns how the IR-level AtomicExpand pass should expand the given AtomicRMW, if at all.
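
 A hedged sketch of an override for a hypothetical 32-bit LL/SC target: word-sized (or smaller) operations use LL/SC, anything wider falls back to a cmpxchg loop.

  TargetLowering::AtomicExpansionKind
  MyTargetLowering::shouldExpandAtomicRMWInIR(AtomicRMWInst *RMW) const {
    uint64_t Size = RMW->getType()->getPrimitiveSizeInBits().getFixedValue();
    if (Size <= 32)
      return AtomicExpansionKind::LLSC;   // uses emitLoadLinked/emitStoreConditional
    return AtomicExpansionKind::CmpXChg;  // expand to an atomic cmpxchg loop
  }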
 
virtualAtomicExpansionKind shouldCastAtomicRMWIInIR (AtomicRMWInst *RMWI)const
 Returns how the given atomicrmw should be cast by the IR-level AtomicExpand pass.
 
virtual LoadInst * lowerIdempotentRMWIntoFencedLoad (AtomicRMWInst *RMWI) const
 On some platforms, an AtomicRMW that never actually modifies the value (such as fetch_add of 0) can be turned into a fence followed by an atomic load.
 
virtualISD::NodeType getExtendForAtomicOps ()const
 Returns how the platform's atomic operations are extended (ZERO_EXTEND, SIGN_EXTEND, or ANY_EXTEND).
 
virtualISD::NodeType getExtendForAtomicCmpSwapArg ()const
 Returns how the platform's atomic compare and swap expects its comparison value to be extended (ZERO_EXTEND, SIGN_EXTEND, or ANY_EXTEND).
 
virtualbool shouldNormalizeToSelectSequence (LLVMContext &Context,EVT VT)const
 Returns true if we should normalize select(N0&N1, X, Y) => select(N0, select(N1, X, Y), Y) and select(N0|N1, X, Y) => select(N0, X, select(N1, X, Y)) if it is likely that it saves us from materializing N0 and N1 in an integer register.
 
virtualbool isProfitableToCombineMinNumMaxNum (EVT VT)const
 
virtualbool convertSelectOfConstantsToMath (EVT VT)const
 Return true if a select of constants (select Cond, C1, C2) should be transformed into simple math ops with the condition value.
 
virtualbool decomposeMulByConstant (LLVMContext &Context,EVT VT,SDValueC)const
 Return true if it is profitable to transform an integer multiplication-by-constant into simpler operations like shifts and adds.
 
virtualbool isMulAddWithConstProfitable (SDValue AddNode,SDValue ConstNode)const
 Return true if it may be profitable to transform (mul (add x, c1), c2) -> (add (mul x, c2), c1*c2).
 
virtualbool shouldUseStrictFP_TO_INT (EVT FpVT,EVT IntVT,bool IsSigned)const
 Return true if it is more correct/profitable to use strict FP_TO_INT conversion operations - canonicalizing the FP source value instead of converting all cases and then selecting based on value.
 
bool isBeneficialToExpandPowI (int64_tExponent,bool OptForSize)const
 Return true if it is beneficial to expand an @llvm.powi.
 
virtualbool getAddrModeArguments (constIntrinsicInst *,SmallVectorImpl<Value * > &,Type *&)const
 CodeGenPrepare sinks address calculations into the same BB as Load/Store instructions reading the address.
 
virtualbool isLegalAddressingMode (constDataLayout &DL,constAddrMode &AM,Type *Ty,unsigned AddrSpace,Instruction *I=nullptr)const
 Return true if the addressing mode represented by AM is legal for this target, for a load/store of the specified type.
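
 A hypothetical override for a RISC-style target whose memory operands are limited to a base register plus a signed 12-bit offset (sketch only; MyTargetLowering is not a real class):

  bool MyTargetLowering::isLegalAddressingMode(const DataLayout &DL,
                                               const AddrMode &AM, Type *Ty,
                                               unsigned AS,
                                               Instruction *I) const {
    if (AM.BaseGV)                  // no globals folded into the address
      return false;
    if (!isInt<12>(AM.BaseOffs))    // offset must fit the 12-bit immediate
      return false;
    return AM.Scale == 0;           // no scaled-index (reg+reg) forms
  }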
 
virtualbool addressingModeSupportsTLS (constGlobalValue &)const
 Returns true if the targets addressing mode can target thread local storage (TLS).
 
virtual int64_t getPreferredLargeGEPBaseOffset (int64_t MinOffset, int64_t MaxOffset)const
 Return the preferred common base offset.
 
virtualbool isLegalICmpImmediate (int64_t)const
 Return true if the specified immediate is legal icmp immediate, that is the target has icmp instructions which can compare a register against the immediate without having to materialize the immediate into a register.
 
virtualbool isLegalAddImmediate (int64_t)const
 Return true if the specified immediate is legal add immediate, that is the target has add instructions which can add a register with the immediate without having to materialize the immediate into a register.
 
virtualbool isLegalAddScalableImmediate (int64_t)const
 Return true if adding the specified scalable immediate is legal, that is the target has add instructions which can add a register with the immediate (multiplied by vscale) without having to materialize the immediate into a register.
 
virtualbool isLegalStoreImmediate (int64_tValue)const
 Return true if the specified immediate is legal for the value input of a store instruction.
 
virtual Type * shouldConvertSplatType (ShuffleVectorInst *SVI) const
 Given a shuffle vector SVI representing a vector splat, return a new scalar type of size equal to SVI's scalar type if the new type is more profitable.
 
virtualbool shouldConvertPhiType (Type *From,Type *To)const
 Given a set of interconnected phis of type 'From' that are loaded/stored or bitcast to type 'To', return true if the set should be converted to 'To'.
 
virtualbool isCommutativeBinOp (unsigned Opcode)const
 Returns true if the opcode is a commutative binary operation.
 
virtualbool isBinOp (unsigned Opcode)const
 Return true if the node is a math/logic binary operator.
 
virtualbool isTruncateFree (Type *FromTy,Type *ToTy)const
 Return true if it's free to truncate a value of type FromTy to type ToTy.
 
virtualbool allowTruncateForTailCall (Type *FromTy,Type *ToTy)const
 Return true if a truncation from FromTy to ToTy is permitted when deciding whether a call is in tail position.
 
virtualbool isTruncateFree (EVT FromVT,EVT ToVT)const
 
virtualbool isTruncateFree (LLT FromTy,LLT ToTy,LLVMContext &Ctx)const
 
virtualbool isTruncateFree (SDValue Val,EVT VT2)const
 Return true if truncating the specific node Val to type VT2 is free.
 
virtualbool isProfitableToHoist (Instruction *I)const
 
bool isExtFree (constInstruction *I)const
 Return true if the extension represented by I is free.
 
bool isExtLoad (constLoadInst *Load,constInstruction *Ext,constDataLayout &DL)const
 Return true if Load and Ext can form an ExtLoad.
 
virtualbool isZExtFree (Type *FromTy,Type *ToTy)const
 Return true if any actual instruction that defines a value of type FromTy implicitly zero-extends the value to ToTy in the result register.
 
virtualbool isZExtFree (EVT FromTy,EVT ToTy)const
 
virtualbool isZExtFree (LLT FromTy,LLT ToTy,LLVMContext &Ctx)const
 
virtualbool isZExtFree (SDValue Val,EVT VT2)const
 Return true if zero-extending the specific node Val to type VT2 is free (either because it's implicitly zero-extended such asARM ldrb / ldrh or because it's folded such asX86 zero-extending loads).
 
virtualbool isSExtCheaperThanZExt (EVT FromTy,EVT ToTy)const
 Return true if sign-extension from FromTy to ToTy is cheaper than zero-extension.
 
virtualbool signExtendConstant (constConstantInt *C)const
 Return true if this constant should be sign extended when promoting to a larger type.
 
virtualbool optimizeExtendOrTruncateConversion (Instruction *I,Loop *L,constTargetTransformInfo &TTI)const
 Try to optimize extending or truncating conversion instructions (like zext, trunc, fptoui, uitofp) for the target.
 
virtualbool hasPairedLoad (EVT,Align &)const
 Return true if the target supplies and combines to a paired load two loaded values of type LoadedType next to each other in memory.
 
virtualbool hasVectorBlend ()const
 Return true if the target has a vector blend instruction.
 
virtualunsigned getMaxSupportedInterleaveFactor ()const
 Get the maximum supported factor for interleaved memory accesses.
 
virtualbool lowerInterleavedLoad (LoadInst *LI,ArrayRef<ShuffleVectorInst * > Shuffles,ArrayRef<unsigned > Indices,unsigned Factor)const
 Lower an interleaved load to target specific intrinsics.
 
virtualbool lowerInterleavedStore (StoreInst *SI,ShuffleVectorInst *SVI,unsigned Factor)const
 Lower an interleaved store to target specific intrinsics.
 
virtualbool lowerDeinterleaveIntrinsicToLoad (LoadInst *LI,ArrayRef<Value * > DeinterleaveValues)const
 Lower a deinterleave intrinsic to a target specific load intrinsic.
 
virtualbool lowerInterleaveIntrinsicToStore (StoreInst *SI,ArrayRef<Value * > InterleaveValues)const
 Lower an interleave intrinsic to a target specific store intrinsic.
 
virtualbool isFPExtFree (EVT DestVT,EVT SrcVT)const
 Return true if an fpext operation is free (for instance, because single-precision floating-point numbers are implicitly extended to double-precision).
 
virtualbool isFPExtFoldable (constMachineInstr &MI,unsigned Opcode,LLT DestTy,LLT SrcTy)const
 Return true if an fpext operation input to anOpcode operation is free (for instance, because half-precision floating-point numbers are implicitly extended to float-precision) for an FMA instruction.
 
virtualbool isFPExtFoldable (constSelectionDAG &DAG,unsigned Opcode,EVT DestVT,EVT SrcVT)const
 Return true if an fpext operation input to anOpcode operation is free (for instance, because half-precision floating-point numbers are implicitly extended to float-precision) for an FMA instruction.
 
virtualbool isVectorLoadExtDesirable (SDValue ExtVal)const
 Return true if folding a vector load into ExtVal (a sign, zero, or any extend node) is profitable.
 
virtualbool isFNegFree (EVT VT)const
 Return true if an fneg operation is free to the point where it is never worthwhile to replace it with a bitwise operation.
 
virtualbool isFAbsFree (EVT VT)const
 Return true if an fabs operation is free to the point where it is never worthwhile to replace it with a bitwise operation.
 
virtualbool isFMAFasterThanFMulAndFAdd (constMachineFunction &MF,EVT)const
 Return true if an FMA operation is faster than a pair of fmul and fadd instructions.
 
virtualbool isFMAFasterThanFMulAndFAdd (constMachineFunction &MF,LLT)const
 Return true if an FMA operation is faster than a pair of fmul and fadd instructions.
 
virtualbool isFMAFasterThanFMulAndFAdd (constFunction &F,Type *)const
 IR version.
 
virtualbool isFMADLegal (constMachineInstr &MI,LLT Ty)const
 Returns true ifMI can be combined with another instruction to form TargetOpcode::G_FMAD.
 
virtualbool isFMADLegal (constSelectionDAG &DAG,constSDNode *N)const
 Returns true if N can be combined with another node to form an ISD::FMAD.
 
virtualbool generateFMAsInMachineCombiner (EVT VT,CodeGenOptLevel OptLevel)const
 
virtualbool isNarrowingProfitable (SDNode *N,EVT SrcVT,EVT DestVT)const
 Return true if it's profitable to narrow operations of type SrcVT to DestVT.
 
virtualbool shouldFoldSelectWithIdentityConstant (unsigned BinOpcode,EVT VT)const
 Return true if pulling a binary operation into a select with an identity constant is profitable.
 
virtualbool shouldConvertConstantLoadToIntImm (constAPInt &Imm,Type *Ty)const
 Return true if it is beneficial to convert a load of a constant to just the constant itself.
 
virtualbool isExtractSubvectorCheap (EVT ResVT,EVT SrcVT,unsignedIndex)const
 Return true if EXTRACT_SUBVECTOR is cheap for extracting this result type from this source type with this index.
 
virtualbool shouldScalarizeBinop (SDValue VecOp)const
 Try to convert an extract element of a vector binary operation into an extract element followed by a scalar operation.
 
virtualbool isExtractVecEltCheap (EVT VT,unsignedIndex)const
 Return true if extraction of a scalar element from the given vector type at the given index is cheap.
 
virtualbool shouldFormOverflowOp (unsigned Opcode,EVT VT,bool MathUsed)const
 Try to convert math with an overflow comparison into the corresponding DAG node operation.
 
virtualbool aggressivelyPreferBuildVectorSources (EVT VecVT)const
 
virtualbool shouldConsiderGEPOffsetSplit ()const
 
virtualbool shouldAvoidTransformToShift (EVT VT,unsigned Amount)const
 Return true if creating a shift of the type by the given amount is not profitable.
 
virtualbool shouldFoldSelectWithSingleBitTest (EVT VT,constAPInt &AndMask)const
 
virtualbool shouldKeepZExtForFP16Conv ()const
 Does this target require the clearing of high-order bits in a register passed to the fp16 to fp conversion library function.
 
virtualbool shouldConvertFpToSat (unsignedOp,EVT FPVT,EVT VT)const
 Should we generate fp_to_si_sat and fp_to_ui_sat from type FPVT to type VT from min(max(fptoi)) saturation patterns.
 
virtualbool shouldExpandCmpUsingSelects (EVT VT)const
 Should we expand [US]CMP nodes using two selects and two compares, or by doing arithmetic on boolean types.
 
virtualbool isComplexDeinterleavingSupported ()const
 Does this target support complex deinterleaving.
 
virtualbool isComplexDeinterleavingOperationSupported (ComplexDeinterleavingOperationOperation,Type *Ty)const
 Does this target support complex deinterleaving with the given operation and type.
 
virtual Value * createComplexDeinterleavingIR (IRBuilderBase &B, ComplexDeinterleavingOperation OperationType, ComplexDeinterleavingRotation Rotation, Value *InputA, Value *InputB, Value *Accumulator=nullptr) const
 Create the IR node for the given complex deinterleaving operation.
 
void setLibcallName (RTLIB::Libcall Call,constchar *Name)
 Rename the default libcall routine name for the specified libcall.
 
void setLibcallName (ArrayRef<RTLIB::Libcall > Calls,constchar *Name)
 
const char * getLibcallName (RTLIB::Libcall Call) const
 Get the libcall routine name for the specified libcall.
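
 A small sketch of how a hypothetical target constructor adjusts the libcall table; the "__sincosf" symbol name is an assumption about the platform runtime, not a documented default:

  // Inside MyTargetLowering's constructor (hypothetical):
  setLibcallName(RTLIB::MULO_I64, nullptr);       // assumed: runtime lacks this helper
  setLibcallName(RTLIB::SINCOS_F32, "__sincosf"); // assumed runtime symbol
  assert(getLibcallName(RTLIB::SINCOS_F32) && "libcall was renamed, not removed");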
 
void setCmpLibcallCC (RTLIB::Libcall Call,ISD::CondCodeCC)
 Override the default CondCode to be used to test the result of the comparison libcall against zero.
 
ISD::CondCode getCmpLibcallCC (RTLIB::Libcall Call)const
 Get the CondCode that's to be used to test the result of the comparison libcall against zero.
 
void setLibcallCallingConv (RTLIB::Libcall Call,CallingConv::IDCC)
 Set theCallingConv that should be used for the specified libcall.
 
CallingConv::ID getLibcallCallingConv (RTLIB::Libcall Call)const
 Get theCallingConv that should be used for the specified libcall.
 
virtual void finalizeLowering (MachineFunction &MF)const
 Execute target specific actions to finalize target lowering.
 
virtualbool shouldLocalize (constMachineInstr &MI,constTargetTransformInfo *TTI)const
 Check whether or notMI needs to be moved close to its uses.
 
int InstructionOpcodeToISD (unsigned Opcode)const
 Get the ISD node that corresponds to the Instruction class opcode.
 
int IntrinsicIDToISD (Intrinsic::ID ID) const
 Get the ISD node that corresponds to the Intrinsic ID.
 
unsigned getMaxAtomicSizeInBitsSupported ()const
 Returns the maximum atomic operation size (in bits) supported by the backend.
 
unsigned getMaxDivRemBitWidthSupported ()const
 Returns the size in bits of the maximum div/rem the backend supports.
 
unsigned getMaxLargeFPConvertBitWidthSupported ()const
 Returns the size in bits of the maximum FP conversion the backend supports.
 
unsigned getMinCmpXchgSizeInBits ()const
 Returns the size of the smallest cmpxchg or ll/sc instruction the backend supports.
 
bool supportsUnalignedAtomics ()const
 Whether the target supports unaligned atomic operations.
 
virtualbool shouldInsertTrailingFenceForAtomicStore (constInstruction *I)const
 Whether AtomicExpandPass should automatically insert a trailing fence without reducing the ordering for this atomic.
 
virtual Value * emitLoadLinked (IRBuilderBase &Builder, Type *ValueTy, Value *Addr, AtomicOrdering Ord) const
 Perform a load-linked operation on Addr, returning a "Value *" with the corresponding pointee type.
 
virtual Value * emitStoreConditional (IRBuilderBase &Builder, Value *Val, Value *Addr, AtomicOrdering Ord) const
 Perform a store-conditional operation to Addr.
 
virtual void emitExpandAtomicRMW (AtomicRMWInst *AI)const
 Perform an atomicrmw expansion in a target-specific way.
 
virtual void emitExpandAtomicCmpXchg (AtomicCmpXchgInst *CI)const
 Perform a cmpxchg expansion using a target-specific method.
 
virtual void emitBitTestAtomicRMWIntrinsic (AtomicRMWInst *AI)const
 Perform a bit test atomicrmw using a target-specific intrinsic.
 
virtual void emitCmpArithAtomicRMWIntrinsic (AtomicRMWInst *AI)const
 Perform an atomicrmw whose result is only used by comparison, using a target-specific intrinsic.
 

Static Public Member Functions

static RISCVII::VLMUL getLMUL (MVT VT)
 
static unsigned computeVLMAX (unsigned VectorBits, unsigned EltSize, unsigned MinSize)
 
static std::pair< unsigned, unsigned > computeVLMAXBounds (MVT ContainerVT, const RISCVSubtarget &Subtarget)
 
static unsigned getRegClassIDForLMUL (RISCVII::VLMUL LMul)
 
static unsigned getSubregIndexByMVT (MVT VT, unsigned Index)
 
static unsigned getRegClassIDForVecVT (MVT VT)
 
static std::pair< unsigned, unsigned > decomposeSubvectorInsertExtractToSubRegs (MVT VecVT, MVT SubVecVT, unsigned InsertExtractIdx, const RISCVRegisterInfo *TRI)
 
- Static Public Member Functions inherited from llvm::TargetLoweringBase
static ISD::NodeType getExtendForContent (BooleanContent Content)
 

Additional Inherited Members

- Public Types inherited from llvm::TargetLowering
enum  ConstraintType {
  C_Register,C_RegisterClass,C_Memory,C_Address,
  C_Immediate,C_Other,C_Unknown
}
 
enum  ConstraintWeight {
  CW_Invalid = -1,CW_Okay = 0,CW_Good = 1,CW_Better = 2,
  CW_Best = 3,CW_SpecificReg = CW_Okay,CW_Register = CW_Good,CW_Memory = CW_Better,
  CW_Constant = CW_Best,CW_Default = CW_Okay
}
 
using AsmOperandInfoVector = std::vector<AsmOperandInfo >
 
using ConstraintPair = std::pair<StringRef,TargetLowering::ConstraintType >
 
using ConstraintGroup =SmallVector<ConstraintPair >
 
- Public Types inherited from llvm::TargetLoweringBase
enum  LegalizeAction : uint8_t {
  Legal,Promote,Expand,LibCall,
  Custom
}
 This enum indicates whether operations are valid for a target, and if not, what action should be used to make them valid.More...
 
enum  LegalizeTypeAction : uint8_t {
  TypeLegal,TypePromoteInteger,TypeExpandInteger,TypeSoftenFloat,
  TypeExpandFloat,TypeScalarizeVector,TypeSplitVector,TypeWidenVector,
  TypePromoteFloat,TypeSoftPromoteHalf,TypeScalarizeScalableVector
}
 This enum indicates whether types are legal for a target, and if not, what action should be used to make them valid.More...
 
enum  BooleanContent {UndefinedBooleanContent,ZeroOrOneBooleanContent,ZeroOrNegativeOneBooleanContent }
 Enum that describes how the target represents true/false values.More...
 
enum  SelectSupportKind {ScalarValSelect,ScalarCondVectorVal,VectorMaskSelect }
 Enum that describes what type of support for selects the target has.More...
 
enum class  AtomicExpansionKind {
  None,CastToInteger,LLSC,LLOnly,
  CmpXChg,MaskedIntrinsic,BitTestIntrinsic,CmpArithIntrinsic,
  Expand,NotAtomic
}
 Enum that specifies what an atomic load/AtomicRMWInst is expanded to, if at all.More...
 
enum class  MulExpansionKind {Always,OnlyLegalOrCustom }
 Enum that specifies when a multiplication should be expanded.More...
 
enum class  NegatibleCost {Cheaper = 0,Neutral = 1,Expensive = 2 }
 Enum that specifies when a float negation is beneficial.More...
 
enum  AndOrSETCCFoldKind : uint8_t {None = 0,AddAnd = 1,NotAnd = 2,ABS = 4 }
 Enum of different potentially desirable ways to fold (and/or (setcc ...), (setcc ...)).More...
 
enum  ReciprocalEstimate : int {Unspecified = -1,Disabled = 0,Enabled = 1 }
 Reciprocal estimate status values used by the functions below.More...
 
enum class  ShiftLegalizationStrategy {ExpandToParts,ExpandThroughStack,LowerToLibcall }
 Return the preferred strategy to legalize this SHIFT instruction, with ExpansionFactor being the recursion depth - how many expansions are needed.More...
 
using LegalizeKind = std::pair<LegalizeTypeAction,EVT >
 LegalizeKind holds the legalization kind that needs to happen to EVT in order to type-legalize it.
 
using ArgListTy = std::vector<ArgListEntry >
 
- Protected Member Functions inherited from llvm::TargetLoweringBase
void initActions ()
 Initialize all of the actions to default values.
 
Value * getDefaultSafeStackPointerLocation (IRBuilderBase &IRB, bool UseTLS) const
 
void setBooleanContents (BooleanContent Ty)
 Specify how the target extends the result of integer and floating point boolean values from i1 to a wider type.
 
void setBooleanContents (BooleanContent IntTy,BooleanContent FloatTy)
 Specify how the target extends the result of integer and floating point boolean values from i1 to a wider type.
 
void setBooleanVectorContents (BooleanContent Ty)
 Specify how the target extends the result of a vector boolean value from a vector of i1 to a wider type.
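
 A sketch of the usual pairing inside a hypothetical target constructor: scalar booleans are zero/one in a GPR, while vector compare results are all-ones masks.

  // Inside MyTargetLowering's constructor (hypothetical):
  setBooleanContents(ZeroOrOneBooleanContent);
  setBooleanVectorContents(ZeroOrNegativeOneBooleanContent);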
 
void setSchedulingPreference (Sched::Preference Pref)
 Specify the target scheduling preference.
 
void setMinimumJumpTableEntries (unsigned Val)
 Indicate the minimum number of blocks to generate jump tables.
 
void setMaximumJumpTableSize (unsigned)
 Indicate the maximum number of entries in jump tables.
 
void setStackPointerRegisterToSaveRestore (Register R)
 If set to a physical register, this specifies the register that llvm.savestack/llvm.restorestack should save and restore.
 
void setHasMultipleConditionRegisters (bool hasManyRegs=true)
 Tells the code generator that the target has multiple (allocatable) condition registers that can be used to store the results of comparisons for use by selects and conditional branches.
 
void setHasExtractBitsInsn (bool hasExtractInsn=true)
 Tells the code generator that the target has BitExtract instructions.
 
void setJumpIsExpensive (bool isExpensive=true)
 Tells the code generator not to expand logic operations on comparison predicates into separate sequences that increase the amount of flow control.
 
void addBypassSlowDiv (unsigned int SlowBitWidth,unsigned int FastBitWidth)
 Tells the code generator which bitwidths to bypass.
 
void addRegisterClass (MVT VT,constTargetRegisterClass *RC)
 Add the specified register class as an available regclass for the specified value type.
 
virtual std::pair< const TargetRegisterClass *, uint8_t > findRepresentativeClass (const TargetRegisterInfo *TRI, MVT VT) const
 Return the largest legal super-reg register class of the register class for the specified type and its associated "cost".
 
void computeRegisterProperties (constTargetRegisterInfo *TRI)
 Once all of the register classes are added, this allows us to compute derived properties we expose.
 
void setOperationAction (unsignedOp,MVT VT,LegalizeAction Action)
 Indicate that the specified operation does not work with the specified type and indicate what to do about it.
 
void setOperationAction (ArrayRef<unsigned > Ops,MVT VT,LegalizeAction Action)
 
void setOperationAction (ArrayRef<unsigned > Ops,ArrayRef<MVT > VTs,LegalizeAction Action)
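
 A minimal constructor sketch tying these hooks together (MyTargetLowering, MySubtarget, and the GPR register class are hypothetical):

  MyTargetLowering::MyTargetLowering(const TargetMachine &TM,
                                     const MySubtarget &STI)
      : TargetLowering(TM) {
    addRegisterClass(MVT::i32, &MyTarget::GPRRegClass);          // hypothetical regclass
    setOperationAction(ISD::SELECT_CC, MVT::i32, Expand);        // no select_cc
    setOperationAction(ISD::GlobalAddress, MVT::i32, Custom);    // handled in LowerOperation
    setOperationAction({ISD::SMIN, ISD::SMAX, ISD::UMIN, ISD::UMAX}, MVT::i32,
                       Expand);
    computeRegisterProperties(STI.getRegisterInfo());
  }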
 
void setLoadExtAction (unsigned ExtType,MVT ValVT,MVT MemVT,LegalizeAction Action)
 Indicate that the specified load with extension does not work with the specified type and indicate what to do about it.
 
void setLoadExtAction (ArrayRef<unsigned > ExtTypes,MVT ValVT,MVT MemVT,LegalizeAction Action)
 
void setLoadExtAction (ArrayRef<unsigned > ExtTypes,MVT ValVT,ArrayRef<MVT > MemVTs,LegalizeAction Action)
 
void setAtomicLoadExtAction (unsigned ExtType,MVT ValVT,MVT MemVT,LegalizeAction Action)
 Let target indicate that an extending atomic load of the specified type is legal.
 
void setAtomicLoadExtAction (ArrayRef<unsigned > ExtTypes,MVT ValVT,MVT MemVT,LegalizeAction Action)
 
void setAtomicLoadExtAction (ArrayRef<unsigned > ExtTypes,MVT ValVT,ArrayRef<MVT > MemVTs,LegalizeAction Action)
 
void setTruncStoreAction (MVT ValVT,MVT MemVT,LegalizeAction Action)
 Indicate that the specified truncating store does not work with the specified type and indicate what to do about it.
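
 Continuing the hypothetical constructor sketch: promote i1 extending loads and expand a truncating FP store the imagined hardware lacks (assumptions, not a real target's configuration):

  for (MVT VT : MVT::integer_valuetypes()) {
    setLoadExtAction(ISD::SEXTLOAD, VT, MVT::i1, Promote);  // no native i1 loads
    setLoadExtAction(ISD::ZEXTLOAD, VT, MVT::i1, Promote);
    setLoadExtAction(ISD::EXTLOAD,  VT, MVT::i1, Promote);
  }
  setTruncStoreAction(MVT::f64, MVT::f32, Expand);          // no f64->f32 store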
 
void setIndexedLoadAction (ArrayRef<unsigned > IdxModes,MVT VT,LegalizeAction Action)
 Indicate that the specified indexed load does or does not work with the specified type and indicate what to do about it.
 
void setIndexedLoadAction (ArrayRef<unsigned > IdxModes,ArrayRef<MVT > VTs,LegalizeAction Action)
 
void setIndexedStoreAction (ArrayRef<unsigned > IdxModes,MVT VT,LegalizeAction Action)
 Indicate that the specified indexed store does or does not work with the specified type and indicate what to do about it.
 
void setIndexedStoreAction (ArrayRef<unsigned > IdxModes,ArrayRef<MVT > VTs,LegalizeAction Action)
 
void setIndexedMaskedLoadAction (unsigned IdxMode,MVT VT,LegalizeAction Action)
 Indicate that the specified indexed masked load does or does not work with the specified type and indicate what to do about it.
 
void setIndexedMaskedStoreAction (unsigned IdxMode,MVT VT,LegalizeAction Action)
 Indicate that the specified indexed masked store does or does not work with the specified type and indicate what to do about it.
 
void setCondCodeAction (ArrayRef<ISD::CondCode > CCs,MVT VT,LegalizeAction Action)
 Indicate that the specified condition code is or isn't supported on the target and indicate what to do about it.
 
void setCondCodeAction (ArrayRef<ISD::CondCode > CCs,ArrayRef<MVT > VTs,LegalizeAction Action)
 
void AddPromotedToType (unsigned Opc,MVT OrigVT,MVT DestVT)
 If Opc/OrigVT is specified as being promoted, the promotion code defaults to trying a larger integer/fp until it can find one that works.
 
void setOperationPromotedToType (unsigned Opc,MVT OrigVT,MVT DestVT)
 Convenience method to set an operation to Promote and specify the type in a single call.
 
void setOperationPromotedToType (ArrayRef<unsigned > Ops,MVT OrigVT,MVT DestVT)
 
void setTargetDAGCombine (ArrayRef<ISD::NodeType > NTs)
 Targets should invoke this method for each target independent node that they want to provide a custom DAG combiner for by implementing the PerformDAGCombine virtual method.
 
void setMinFunctionAlignment (Align Alignment)
 Set the target's minimum function alignment.
 
void setPrefFunctionAlignment (Align Alignment)
 Set the target's preferred function alignment.
 
void setPrefLoopAlignment (Align Alignment)
 Set the target's preferred loop alignment.
 
void setMaxBytesForAlignment (unsigned MaxBytes)
 
void setMinStackArgumentAlignment (Align Alignment)
 Set the minimum stack alignment of an argument.
 
void setMaxAtomicSizeInBitsSupported (unsigned SizeInBits)
 Set the maximum atomic operation size supported by the backend.
 
void setMaxDivRemBitWidthSupported (unsigned SizeInBits)
 Set the size in bits of the maximum div/rem the backend supports.
 
void setMaxLargeFPConvertBitWidthSupported (unsigned SizeInBits)
 Set the size in bits of the maximum fp convert the backend supports.
 
void setMinCmpXchgSizeInBits (unsigned SizeInBits)
 Sets the minimum cmpxchg or ll/sc size supported by the backend.
 
void setSupportsUnalignedAtomics (bool UnalignedSupported)
 Sets whether unaligned atomic operations are supported.
 
virtualbool isExtFreeImpl (constInstruction *I)const
 Return true if the extension represented by I is free.
 
bool isLegalRC (constTargetRegisterInfo &TRI,constTargetRegisterClass &RC)const
 Return true if the value types that can be represented by the specified register class are all legal.
 
MachineBasicBlock * emitPatchPoint (MachineInstr &MI,MachineBasicBlock *MBB)const
 Replace/modify any TargetFrameIndex operands with a target-dependent sequence of memory operands that is recognized by PrologEpilogInserter.
 
- Protected Attributes inherited fromllvm::TargetLoweringBase
unsigned GatherAllAliasesMaxDepth
 Depth that GatherAllAliases should continue looking for chain dependencies when trying to find a more preferable chain.
 
unsigned MaxStoresPerMemset
 Specify maximum number of store instructions per memset call.
 
unsigned MaxStoresPerMemsetOptSize
 Likewise for functions with the OptSize attribute.
 
unsigned MaxStoresPerMemcpy
 Specify maximum number of store instructions per memcpy call.
 
unsigned MaxStoresPerMemcpyOptSize
 Likewise for functions with the OptSize attribute.
 
unsigned MaxGluedStoresPerMemcpy = 0
 Specify max number of store instructions to glue in inlined memcpy.
 
unsigned MaxLoadsPerMemcmp
 Specify maximum number of load instructions per memcmp call.
 
unsigned MaxLoadsPerMemcmpOptSize
 Likewise for functions with the OptSize attribute.
 
unsigned MaxStoresPerMemmove
 Specify maximum number of store instructions per memmove call.
 
unsigned MaxStoresPerMemmoveOptSize
 Likewise for functions with the OptSize attribute.
 
bool PredictableSelectIsExpensive
 Tells the code generator that select is more expensive than a branch if the branch is usually predicted right.
 
bool EnableExtLdPromotion
 
bool IsStrictFPEnabled
 

Detailed Description

Definition at line510 of fileRISCVISelLowering.h.

Constructor & Destructor Documentation

◆ RISCVTargetLowering()

RISCVTargetLowering::RISCVTargetLowering (const TargetMachine &TM, const RISCVSubtarget &STI)
explicit

Definition at line81 of fileRISCVISelLowering.cpp.

Referencesllvm::ISD::ABDS,llvm::ISD::ABDU,llvm::RISCVABI::ABI_ILP32,llvm::RISCVABI::ABI_ILP32D,llvm::RISCVABI::ABI_ILP32E,llvm::RISCVABI::ABI_ILP32F,llvm::RISCVABI::ABI_LP64,llvm::RISCVABI::ABI_LP64D,llvm::RISCVABI::ABI_LP64E,llvm::RISCVABI::ABI_LP64F,llvm::RISCVABI::ABI_Unknown,llvm::ISD::ABS,llvm::ISD::ADD,llvm::TargetLoweringBase::addRegisterClass(),llvm::ISD::ADJUST_TRAMPOLINE,llvm::ISD::AND,llvm::ISD::ANY_EXTEND,assert(),llvm::ISD::ATOMIC_CMP_SWAP,llvm::ISD::ATOMIC_FENCE,llvm::ISD::ATOMIC_LOAD_ADD,llvm::ISD::ATOMIC_LOAD_AND,llvm::ISD::ATOMIC_LOAD_MAX,llvm::ISD::ATOMIC_LOAD_MIN,llvm::ISD::ATOMIC_LOAD_NAND,llvm::ISD::ATOMIC_LOAD_OR,llvm::ISD::ATOMIC_LOAD_SUB,llvm::ISD::ATOMIC_LOAD_UMAX,llvm::ISD::ATOMIC_LOAD_UMIN,llvm::ISD::ATOMIC_LOAD_XOR,llvm::ISD::ATOMIC_SWAP,llvm::ISD::AVGCEILS,llvm::ISD::AVGCEILU,llvm::ISD::AVGFLOORS,llvm::ISD::AVGFLOORU,llvm::ISD::BF16_TO_FP,llvm::ISD::BITCAST,llvm::ISD::BITREVERSE,llvm::ISD::BlockAddress,llvm::ISD::BR_CC,llvm::ISD::BR_JT,llvm::ISD::BRCOND,llvm::ISD::BSWAP,llvm::ISD::BUILD_VECTOR,llvm::ISD::BUILTIN_OP_END,llvm::ISD::CLEAR_CACHE,llvm::TargetLoweringBase::computeRegisterProperties(),llvm::ISD::CONCAT_VECTORS,llvm::ISD::Constant,llvm::ISD::ConstantFP,llvm::ISD::ConstantPool,llvm::ISD::CTLZ,llvm::ISD::CTLZ_ZERO_UNDEF,llvm::ISD::CTPOP,llvm::ISD::CTTZ,llvm::ISD::CTTZ_ZERO_UNDEF,llvm::TargetLoweringBase::Custom,llvm::ISD::DEBUGTRAP,llvm::ISD::DYNAMIC_STACKALLOC,llvm::ISD::EH_DWARF_CFA,llvm::TargetLoweringBase::EnableExtLdPromotion,llvm::errs(),llvm::TargetLoweringBase::Expand,llvm::ISD::EXTLOAD,llvm::ISD::EXTRACT_SUBVECTOR,llvm::ISD::EXTRACT_VECTOR_ELT,llvm::ISD::FABS,llvm::ISD::FADD,llvm::ISD::FCANONICALIZE,llvm::ISD::FCEIL,llvm::ISD::FCOPYSIGN,llvm::ISD::FCOS,llvm::ISD::FDIV,llvm::ISD::FEXP,llvm::ISD::FEXP10,llvm::ISD::FEXP2,llvm::ISD::FFLOOR,llvm::ISD::FFREXP,llvm::ISD::FLDEXP,llvm::ISD::FLOG,llvm::ISD::FLOG10,llvm::ISD::FLOG2,llvm::ISD::FMA,llvm::ISD::FMAXIMUM,llvm::ISD::FMAXIMUMNUM,llvm::ISD::FMAXNUM,llvm::ISD::FMINIMUM,llvm::ISD::FMINIMUMNUM,llvm::ISD::FMINNUM,llvm::ISD::FMUL,llvm::ISD::FNEARBYINT,llvm::ISD::FNEG,llvm::ISD::FP16_TO_FP,llvm::ISD::FP_EXTEND,llvm::MVT::fp_fixedlen_vector_valuetypes(),llvm::ISD::FP_ROUND,llvm::ISD::FP_TO_BF16,llvm::ISD::FP_TO_FP16,llvm::ISD::FP_TO_SINT,llvm::ISD::FP_TO_SINT_SAT,llvm::ISD::FP_TO_UINT,llvm::ISD::FP_TO_UINT_SAT,llvm::ISD::FPOW,llvm::ISD::FPOWI,llvm::ISD::FREM,llvm::ISD::FRINT,llvm::ISD::FROUND,llvm::ISD::FROUNDEVEN,llvm::ISD::FSIN,llvm::ISD::FSINCOS,llvm::ISD::FSQRT,llvm::ISD::FSUB,llvm::ISD::FTRUNC,llvm::ISD::GET_ROUNDING,getContainerForFixedLengthVector(),llvm::RISCVSubtarget::getELen(),getLMUL(),llvm::RISCVSubtarget::getMaxGluedStoresPerMemcpy(),llvm::RISCVSubtarget::getMaxLoadsPerMemcmp(),llvm::RISCVSubtarget::getMaxStoresPerMemcpy(),llvm::RISCVSubtarget::getMaxStoresPerMemmove(),llvm::RISCVSubtarget::getMaxStoresPerMemset(),llvm::RISCVSubtarget::getPrefFunctionAlignment(),llvm::RISCVSubtarget::getPrefLoopAlignment(),getRegClassIDForVecVT(),llvm::RISCVSubtarget::getRegisterInfo(),llvm::RISCVSubtarget::getTargetABI(),llvm::TargetLoweringBase::getTargetMachine(),llvm::MVT::getVectorElementCount(),llvm::MVT::getVectorElementType(),llvm::MVT::getVectorVT(),llvm::RISCVSubtarget::getXLen(),llvm::RISCVSubtarget::getXLenVT(),llvm::ISD::GlobalAddress,llvm::ISD::GlobalTLSAddress,llvm::RISCVSubtarget::hasStdExtCOrZca(),llvm::RISCVSubtarget::hasStdExtDOrZdinx(),llvm::RISCVSubtarget::hasStdExtFOrZfinx(),llvm::RISCVSubtarget::hasStdExtZfhminOrZhinxmin(),llvm::RISCVSubtarget::hasStdExtZfhOrZhinx(),llvm::RISCVSubtarge
t::hasVInstructions(),llvm::RISCVSubtarget::hasVInstructionsBF16Minimal(),llvm::RISCVSubtarget::hasVInstructionsF16(),llvm::RISCVSubtarget::hasVInstructionsF16Minimal(),llvm::RISCVSubtarget::hasVInstructionsF32(),llvm::RISCVSubtarget::hasVInstructionsF64(),llvm::RISCVSubtarget::hasVInstructionsI64(),im,llvm::ISD::INIT_TRAMPOLINE,llvm::ISD::INSERT_SUBVECTOR,llvm::ISD::INSERT_VECTOR_ELT,llvm::MVT::integer_fixedlen_vector_valuetypes(),llvm::MVT::integer_scalable_vector_valuetypes(),llvm::ISD::INTRINSIC_VOID,llvm::ISD::INTRINSIC_W_CHAIN,llvm::ISD::INTRINSIC_WO_CHAIN,llvm::RISCVSubtarget::is64Bit(),llvm::ISD::IS_FPCLASS,llvm::RISCVSubtarget::isSoftFPABI(),llvm::TargetLoweringBase::IsStrictFPEnabled,llvm::TargetLoweringBase::isTypeLegal(),llvm::ISD::JumpTable,llvm::IRSimilarity::Legal,llvm::TargetLoweringBase::LibCall,llvm::ISD::LLRINT,llvm::ISD::LLROUND,llvm_unreachable,llvm::RISCVII::LMUL_8,llvm::ISD::LOAD,llvm::ISD::LRINT,llvm::ISD::LROUND,llvm::TargetLoweringBase::MaxGluedStoresPerMemcpy,llvm::TargetLoweringBase::MaxLoadsPerMemcmp,llvm::TargetLoweringBase::MaxLoadsPerMemcmpOptSize,llvm::TargetLoweringBase::MaxStoresPerMemcpy,llvm::TargetLoweringBase::MaxStoresPerMemcpyOptSize,llvm::TargetLoweringBase::MaxStoresPerMemmove,llvm::TargetLoweringBase::MaxStoresPerMemmoveOptSize,llvm::TargetLoweringBase::MaxStoresPerMemset,llvm::TargetLoweringBase::MaxStoresPerMemsetOptSize,llvm::ISD::MGATHER,llvm::ISD::MLOAD,llvm::ISD::MSCATTER,llvm::ISD::MSTORE,llvm::ISD::MUL,llvm::ISD::MULHS,llvm::ISD::MULHU,llvm::ISD::OR,llvm::ISD::POST_INC,llvm::ISD::PRE_INC,llvm::TargetLoweringBase::PredictableSelectIsExpensive,llvm::ISD::PREFETCH,llvm::TargetLoweringBase::Promote,llvm::ISD::READCYCLECOUNTER,llvm::ISD::READSTEADYCOUNTER,llvm::report_fatal_error(),llvm::ISD::ROTL,llvm::ISD::ROTR,llvm::RISCV::RVVBitsPerBlock,llvm::ISD::SADDO,llvm::ISD::SADDSAT,llvm::ISD::SCALAR_TO_VECTOR,llvm::ISD::SDIV,llvm::ISD::SDIVREM,llvm::ISD::SELECT,llvm::ISD::SELECT_CC,llvm::ISD::SET_ROUNDING,llvm::TargetLoweringBase::setBooleanContents(),llvm::TargetLoweringBase::setBooleanVectorContents(),llvm::ISD::SETCC,llvm::TargetLoweringBase::setCondCodeAction(),llvm::ISD::SETGE,llvm::ISD::SETGT,llvm::TargetLoweringBase::setIndexedLoadAction(),llvm::TargetLoweringBase::setIndexedStoreAction(),llvm::ISD::SETLE,llvm::TargetLoweringBase::setLibcallName(),llvm::TargetLoweringBase::setLoadExtAction(),llvm::TargetLoweringBase::setMaxAtomicSizeInBitsSupported(),llvm::TargetLoweringBase::setMinCmpXchgSizeInBits(),llvm::TargetLoweringBase::setMinFunctionAlignment(),llvm::ISD::SETNE,llvm::ISD::SETO,llvm::ISD::SETOGE,llvm::ISD::SETOGT,llvm::ISD::SETONE,llvm::TargetLoweringBase::setOperationAction(),llvm::TargetLoweringBase::setOperationPromotedToType(),llvm::TargetLoweringBase::setPrefFunctionAlignment(),llvm::TargetLoweringBase::setPrefLoopAlignment(),llvm::TargetLoweringBase::setStackPointerRegisterToSaveRestore(),llvm::TargetLoweringBase::setTargetDAGCombine(),llvm::TargetLoweringBase::setTruncStoreAction(),llvm::ISD::SETUEQ,llvm::ISD::SETUGE,llvm::ISD::SETUGT,llvm::ISD::SETULE,llvm::ISD::SETULT,llvm::ISD::SETUNE,llvm::ISD::SETUO,llvm::ISD::SEXTLOAD,llvm::ISD::SHL,llvm::ISD::SHL_PARTS,llvm::ISD::SIGN_EXTEND,llvm::ISD::SIGN_EXTEND_INREG,llvm::ISD::SINT_TO_FP,Size,llvm::ISD::SMAX,llvm::ISD::SMIN,llvm::ISD::SMUL_LOHI,llvm::ISD::SPLAT_VECTOR,llvm::ISD::SPLAT_VECTOR_PARTS,llvm::ISD::SRA,llvm::ISD::SRA_PARTS,llvm::ISD::SREM,llvm::ISD::SRL,llvm::ISD::SRL_PARTS,llvm::ISD::SSUBSAT,llvm::ISD::STACKRESTORE,llvm::ISD::STACKSAVE,llvm::ISD::STEP_VECTOR,llvm::ISD::STOR
E,llvm::ISD::STRICT_FADD,llvm::ISD::STRICT_FCEIL,llvm::ISD::STRICT_FDIV,llvm::ISD::STRICT_FFLOOR,llvm::ISD::STRICT_FLDEXP,llvm::ISD::STRICT_FMA,llvm::ISD::STRICT_FMUL,llvm::ISD::STRICT_FNEARBYINT,llvm::ISD::STRICT_FP16_TO_FP,llvm::ISD::STRICT_FP_EXTEND,llvm::ISD::STRICT_FP_ROUND,llvm::ISD::STRICT_FP_TO_FP16,llvm::ISD::STRICT_FP_TO_SINT,llvm::ISD::STRICT_FP_TO_UINT,llvm::ISD::STRICT_FRINT,llvm::ISD::STRICT_FROUND,llvm::ISD::STRICT_FROUNDEVEN,llvm::ISD::STRICT_FSETCC,llvm::ISD::STRICT_FSETCCS,llvm::ISD::STRICT_FSQRT,llvm::ISD::STRICT_FSUB,llvm::ISD::STRICT_FTRUNC,llvm::ISD::STRICT_LLRINT,llvm::ISD::STRICT_LLROUND,llvm::ISD::STRICT_LRINT,llvm::ISD::STRICT_LROUND,llvm::ISD::STRICT_SINT_TO_FP,llvm::ISD::STRICT_UINT_TO_FP,llvm::ISD::SUB,llvm::ISD::TRAP,TRI,llvm::ISD::TRUNCATE,llvm::ISD::TRUNCATE_SSAT_S,llvm::ISD::TRUNCATE_USAT_U,llvm::ISD::UADDO,llvm::ISD::UADDSAT,llvm::ISD::UDIV,llvm::ISD::UDIVREM,llvm::ISD::UINT_TO_FP,llvm::ISD::UMAX,llvm::ISD::UMIN,llvm::ISD::UMUL_LOHI,llvm::ISD::UNDEF,llvm::ISD::UREM,llvm::RISCVSubtarget::useCCMovInsn(),llvm::RISCVSubtarget::useRVVForFixedLengthVectors(),llvm::ISD::USUBO,llvm::ISD::USUBSAT,llvm::ISD::VAARG,llvm::ISD::VACOPY,llvm::ISD::VAEND,llvm::ISD::VASTART,llvm::ISD::VECREDUCE_ADD,llvm::ISD::VECREDUCE_AND,llvm::ISD::VECREDUCE_FADD,llvm::ISD::VECREDUCE_FMAX,llvm::ISD::VECREDUCE_FMAXIMUM,llvm::ISD::VECREDUCE_FMIN,llvm::ISD::VECREDUCE_FMINIMUM,llvm::ISD::VECREDUCE_OR,llvm::ISD::VECREDUCE_SEQ_FADD,llvm::ISD::VECREDUCE_SMAX,llvm::ISD::VECREDUCE_SMIN,llvm::ISD::VECREDUCE_UMAX,llvm::ISD::VECREDUCE_UMIN,llvm::ISD::VECREDUCE_XOR,llvm::ISD::VECTOR_COMPRESS,llvm::ISD::VECTOR_DEINTERLEAVE,llvm::ISD::VECTOR_INTERLEAVE,llvm::ISD::VECTOR_REVERSE,llvm::ISD::VECTOR_SHUFFLE,llvm::ISD::VECTOR_SPLICE,llvm::ISD::VSCALE,llvm::ISD::VSELECT,llvm::ISD::XOR,llvm::ISD::ZERO_EXTEND,llvm::TargetLoweringBase::ZeroOrOneBooleanContent, andllvm::ISD::ZEXTLOAD.
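
The constructor is where a lowering object declares the subtarget's legal register classes, per-operation legalization actions, and DAG-combine hooks to TargetLoweringBase. The fragment below is an illustrative sketch only, not the actual constructor body: it assumes a hypothetical ExampleTargetLowering class in the RISC-V backend that derives from llvm::TargetLowering and stores the subtarget in a Subtarget member, so the protected configuration hooks listed above are accessible.

// Illustrative sketch only; the real constructor is far larger.
// Assumes a hypothetical ExampleTargetLowering class deriving from
// llvm::TargetLowering, with the usual RISC-V backend headers included
// (RISCVSubtarget.h and the generated register info providing RISCV::GPRRegClass).
ExampleTargetLowering::ExampleTargetLowering(const TargetMachine &TM,
                                             const RISCVSubtarget &STI)
    : TargetLowering(TM), Subtarget(STI) {
  // Make the native XLEN integer type available in the GPR register class.
  addRegisterClass(Subtarget.getXLenVT(), &RISCV::GPRRegClass);

  // Record the register used by llvm.stacksave/llvm.stackrestore (sp is x2).
  setStackPointerRegisterToSaveRestore(RISCV::X2);

  // Declare how individual generic ISD opcodes are legalized for a type.
  setOperationAction(ISD::GlobalAddress, Subtarget.getXLenVT(), Custom);
  setOperationAction({ISD::ROTL, ISD::ROTR}, Subtarget.getXLenVT(), Expand);

  // Ask DAGCombine to call PerformDAGCombine for these generic nodes.
  setTargetDAGCombine({ISD::ADD, ISD::AND, ISD::OR});

  // Derive register properties once every register class has been added.
  computeRegisterProperties(Subtarget.getRegisterInfo());
}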

Member Function Documentation

◆ AdjustInstrPostInstrSelection()

void RISCVTargetLowering::AdjustInstrPostInstrSelection (MachineInstr &MI, SDNode *Node) const
override virtual

This method should be implemented by targets that mark instructions with the 'hasPostISelHook' flag.

These instructions must be adjusted after instruction selection by target hooks, e.g. to fill in optional defs for ARM 's'-setting instructions.

Reimplemented fromllvm::TargetLowering.

Definition at line19926 of fileRISCVISelLowering.cpp.

Referencesllvm::MachineOperand::CreateReg(),llvm::RISCVFPRndMode::DYN,llvm::RISCVII::getFRMOpNum(),llvm::RISCV::getNamedOperandIdx(),Idx, andMI.

◆ allowsMisalignedMemoryAccesses()

bool RISCVTargetLowering::allowsMisalignedMemoryAccesses (EVT VT, unsigned AddrSpace = 0, Align Alignment = Align(1), MachineMemOperand::Flags Flags = MachineMemOperand::MONone, unsigned *Fast = nullptr) const
override virtual

Returns true if the target allows unaligned memory accesses of the specified type.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line21985 of fileRISCVISelLowering.cpp.

Referencesllvm::CallingConv::Fast,llvm::EVT::getStoreSize(),llvm::EVT::getVectorElementType(), andllvm::EVT::isVector().

◆ areTwoSDNodeTargetMMOFlagsMergeable()

bool RISCVTargetLowering::areTwoSDNodeTargetMMOFlagsMergeable (const MemSDNode &NodeX, const MemSDNode &NodeY) const
override virtual

Return true if it is valid to merge the TargetMMOFlags in two SDNodes.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line22693 of fileRISCVISelLowering.cpp.

ReferencesgetTargetMMOFlags().

◆ canCreateUndefOrPoisonForTargetNode()

bool RISCVTargetLowering::canCreateUndefOrPoisonForTargetNode (SDValue Op, const APInt &DemandedElts, const SelectionDAG &DAG, bool PoisonOnly, bool ConsiderFlags, unsigned Depth) const
override virtual

Return true if Op can create undef or poison from non-undef & non-poison operands.

The DemandedElts argument limits the check to the requested vector elements.

Reimplemented fromllvm::TargetLowering.

Definition at line19116 of fileRISCVISelLowering.cpp.

Referencesllvm::TargetLowering::canCreateUndefOrPoisonForTargetNode(),llvm::Depth,PoisonOnly, andllvm::RISCVISD::SELECT_CC.

◆ CanLowerReturn()

bool RISCVTargetLowering::CanLowerReturn (CallingConv::ID, MachineFunction &, bool, const SmallVectorImpl<ISD::OutputArg> &, LLVMContext &, const Type *RetTy) const
override virtual

This hook should be implemented to check whether the return values described by the Outs array can fit into the return registers.

If false is returned, an sret-demotion is performed.

Reimplemented fromllvm::TargetLowering.

Definition at line20702 of fileRISCVISelLowering.cpp.

Referencesllvm::CC_RISCV(),llvm::CCValAssign::Full, andllvm::SmallVectorBase< Size_T >::size().

◆ computeKnownBitsForTargetNode()

void RISCVTargetLowering::computeKnownBitsForTargetNode (const SDValue Op, KnownBits &Known, const APInt &DemandedElts, const SelectionDAG &DAG, unsigned Depth) const
override virtual

Determine which of the bits specified in Mask are known to be either zero or one and return them in the KnownZero/KnownOne bitsets.

Determine which of the bits specified in Mask are known to be either zero or one and return them in the Known.

The DemandedElts argument allows us to only collect the known bits that are shared by the requested vector elements.

Reimplemented fromllvm::TargetLowering.

Definition at line18889 of fileRISCVISelLowering.cpp.

Referencesassert(),llvm::bit_width(),llvm::BitWidth,llvm::RISCVISD::BREV8,llvm::ISD::BUILTIN_OP_END,llvm::APInt::clearAllBits(),llvm::RISCVISD::CLZW,computeGREVOrGORC(),llvm::SelectionDAG::computeKnownBits(),llvm::KnownBits::countMaxLeadingZeros(),llvm::KnownBits::countMaxTrailingZeros(),llvm::RISCVISD::CTZW,llvm::RISCVISD::CZERO_EQZ,llvm::RISCVISD::CZERO_NEZ,llvm::RISCVVType::decodeVLMUL(),llvm::RISCVVType::decodeVSEW(),llvm::Depth,llvm::RISCVISD::DIVUW,llvm::RISCVISD::FCLASS,llvm::KnownBits::getBitWidth(),llvm::RISCVSubtarget::getRealMaxVLen(),llvm::RISCVSubtarget::getRealMinVLen(),llvm::APInt::getZExtValue(),llvm::KnownBits::intersectWith(),llvm::ISD::INTRINSIC_VOID,llvm::ISD::INTRINSIC_W_CHAIN,llvm::ISD::INTRINSIC_WO_CHAIN,llvm::KnownBits::isUnknown(),llvm::Log2_32(),llvm::KnownBits::One,llvm::RISCVISD::ORC_B,llvm::RISCVISD::READ_VLENB,llvm::RISCVISD::REMUW,llvm::KnownBits::resetAll(),llvm::RISCVISD::SELECT_CC,llvm::APInt::setBit(),llvm::APInt::setBitsFrom(),llvm::APInt::setLowBits(),llvm::KnownBits::sext(),llvm::KnownBits::shl(),llvm::RISCVISD::SLLW,llvm::KnownBits::trunc(),llvm::KnownBits::udiv(),llvm::KnownBits::urem(),llvm::KnownBits::Zero, andllvm::KnownBits::zext().

◆ ComputeNumSignBitsForTargetNode()

unsigned RISCVTargetLowering::ComputeNumSignBitsForTargetNode (SDValue Op, const APInt &DemandedElts, const SelectionDAG &DAG, unsigned Depth) const
override virtual

This method can be implemented by targets that want to expose additional information about sign bits to the DAGCombiner.

The DemandedElts argument allows us to only collect the minimum sign bits that are shared by the requested vector elements.

Reimplemented fromllvm::TargetLowering.

Definition at line19030 of fileRISCVISelLowering.cpp.

Referencesllvm::RISCVISD::ABSW,assert(),llvm::SelectionDAG::ComputeNumSignBits(),llvm::RISCVISD::CZERO_EQZ,llvm::RISCVISD::CZERO_NEZ,llvm::Depth,llvm::RISCVISD::DIVUW,llvm::RISCVISD::DIVW,llvm::RISCVISD::FCVT_W_RV64,llvm::RISCVISD::FCVT_WU_RV64,llvm::TargetLoweringBase::getMinCmpXchgSizeInBits(),llvm::RISCVSubtarget::getXLen(),llvm::ISD::INTRINSIC_W_CHAIN,llvm::RISCVISD::REMUW,llvm::RISCVISD::ROLW,llvm::RISCVISD::RORW,llvm::RISCVISD::SELECT_CC,llvm::RISCVISD::SLLW,llvm::RISCVISD::SRAW,llvm::RISCVISD::SRLW,llvm::RISCVISD::STRICT_FCVT_W_RV64,llvm::RISCVISD::STRICT_FCVT_WU_RV64, andllvm::RISCVISD::VMV_X_S.

◆ computeVLMax()

SDValue RISCVTargetLowering::computeVLMax (MVT VecVT, const SDLoc &DL, SelectionDAG &DAG) const

Definition at line2787 of fileRISCVISelLowering.cpp.

Referencesassert(),DL,llvm::SelectionDAG::getElementCount(),llvm::MVT::getVectorElementCount(),llvm::RISCVSubtarget::getXLenVT(), andllvm::MVT::isScalableVector().

◆ computeVLMAX()

static unsigned llvm::RISCVTargetLowering::computeVLMAX (unsigned VectorBits, unsigned EltSize, unsigned MinSize)
inline static

Definition at line827 of fileRISCVISelLowering.h.

Referencesllvm::RISCV::RVVBitsPerBlock.

Referenced bycomputeVLMAXBounds().
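
As a stand-alone illustration of what such a helper computes, the sketch below applies the usual RVV relationship VLMAX = (VLEN / SEW) * LMUL, with LMUL derived from the container type's known-minimum size and RVVBitsPerBlock (64). The parameter names mirror the prototype above, but the function is an assumed reconstruction for illustration, not the LLVM source.

#include <cassert>

// VLMAX = (VLEN / SEW) * LMUL, where LMUL = MinSize / RVVBitsPerBlock.
constexpr unsigned RVVBitsPerBlock = 64;

unsigned exampleComputeVLMAX(unsigned VectorBits, // VLEN in bits
                             unsigned EltSize,    // SEW in bits
                             unsigned MinSize) {  // container known-min bits
  return (VectorBits / EltSize) * MinSize / RVVBitsPerBlock;
}

int main() {
  // VLEN=128, SEW=32, nxv2i32 container (64 known-min bits, LMUL=1): VLMAX=4.
  assert(exampleComputeVLMAX(128, 32, 64) == 4);
  // VLEN=256, SEW=64, nxv4i64 container (256 known-min bits, LMUL=4): VLMAX=16.
  assert(exampleComputeVLMAX(256, 64, 256) == 16);
  return 0;
}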

◆ computeVLMAXBounds()

std::pair<unsigned, unsigned> RISCVTargetLowering::computeVLMAXBounds (MVT ContainerVT, const RISCVSubtarget &Subtarget)
static

Definition at line2795 of fileRISCVISelLowering.cpp.

Referencesassert(),computeVLMAX(),llvm::details::FixedOrScalableQuantity< LeafTy, ValueTy >::getKnownMinValue(),llvm::RISCVSubtarget::getRealMaxVLen(),llvm::RISCVSubtarget::getRealMinVLen(),llvm::MVT::getScalarSizeInBits(),llvm::MVT::getSizeInBits(), andllvm::MVT::isScalableVector().

Referenced bylowerVectorIntrinsicScalars().

◆ convertSelectOfConstantsToMath()

bool llvm::RISCVTargetLowering::convertSelectOfConstantsToMath(EVT VT) const
inlineoverridevirtual

Return true if a select of constants (select Cond, C1, C2) should be transformed into simple math ops with the condition value.

For example: select Cond, C1, C1-1 --> add (zext Cond), C1-1

Reimplemented fromllvm::TargetLoweringBase.

Definition at line698 of fileRISCVISelLowering.h.
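
The equivalence behind that example can be checked in ordinary C++; the two hypothetical helpers below always agree, and the second form avoids the select entirely.

#include <cassert>
#include <cstdint>

// "select Cond, C1, C1-1" versus "add (zext Cond), C1-1" at the C level.
uint32_t selectForm(bool Cond, uint32_t C1) { return Cond ? C1 : C1 - 1; }
uint32_t mathForm(bool Cond, uint32_t C1) {
  return static_cast<uint32_t>(Cond) + (C1 - 1); // zext(Cond) + (C1 - 1)
}

int main() {
  for (uint32_t C1 : {1u, 7u, 42u})
    for (bool Cond : {false, true})
      assert(selectForm(Cond, C1) == mathForm(Cond, C1));
  return 0;
}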

◆ convertSetCCLogicToBitwiseLogic()

bool llvm::RISCVTargetLowering::convertSetCCLogicToBitwiseLogic(EVT VT) const
inlineoverridevirtual

Use bitwise logic to make pairs of compares more efficient.

For example: and (seteq A, B), (seteq C, D) --> seteq (or (xor A, B), (xor C, D)), 0 This should be true when it takes more than one instruction to lower setcc (cmp+set on x86 scalar), when bitwise ops are faster than logic on condition bits (crand on PowerPC), and/or when reducing cmp+br is a win.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line695 of fileRISCVISelLowering.h.

Referencesllvm::EVT::isScalarInteger().
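
A scalar C++ rendering of the documented rewrite: both hypothetical helpers below compute the same predicate, and the second needs a single comparison against zero.

#include <cassert>
#include <cstdint>

// "and (seteq A,B), (seteq C,D)" versus "seteq (or (xor A,B), (xor C,D)), 0".
bool twoCompares(uint32_t A, uint32_t B, uint32_t C, uint32_t D) {
  return (A == B) && (C == D);
}
bool bitwiseForm(uint32_t A, uint32_t B, uint32_t C, uint32_t D) {
  return ((A ^ B) | (C ^ D)) == 0;
}

int main() {
  assert(twoCompares(1, 1, 2, 2) == bitwiseForm(1, 1, 2, 2));
  assert(twoCompares(1, 1, 2, 3) == bitwiseForm(1, 1, 2, 3));
  assert(twoCompares(5, 6, 2, 2) == bitwiseForm(5, 6, 2, 2));
  return 0;
}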

◆ decomposeMulByConstant()

bool RISCVTargetLowering::decomposeMulByConstant (LLVMContext &Context, EVT VT, SDValue C) const
override virtual

Return true if it is profitable to transform an integer multiplication-by-constant into simpler operations like shifts and adds.

This may be true if the target does not directly support the multiplication operation for the specified type or the sequence of simpler ops is faster than the multiply.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line21923 of fileRISCVISelLowering.cpp.

Referencesllvm::CallingConv::C,llvm::EVT::getSizeInBits(),llvm::RISCVSubtarget::getXLen(), andllvm::EVT::isScalarInteger().
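
As an illustration of the decomposition this hook enables, multiplication by a small constant can be rewritten as shifts and adds, a pattern that is especially cheap when the Zba sh1add/sh2add/sh3add instructions are available. The helpers below are hypothetical and only demonstrate the arithmetic identity.

#include <cassert>
#include <cstdint>

uint64_t mulBy5(uint64_t X) { return (X << 2) + X; }        // 5*x = 4*x + x
uint64_t mulBy6(uint64_t X) { return (X << 2) + (X << 1); } // 6*x = 4*x + 2*x

int main() {
  for (uint64_t X : {0ull, 1ull, 17ull, 123456789ull}) {
    assert(mulBy5(X) == X * 5);
    assert(mulBy6(X) == X * 6);
  }
  return 0;
}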

◆ decomposeSubvectorInsertExtractToSubRegs()

std::pair<unsigned, unsigned> RISCVTargetLowering::decomposeSubvectorInsertExtractToSubRegs (MVT VecVT, MVT SubVecVT, unsigned InsertExtractIdx, const RISCVRegisterInfo *TRI)
static

Definition at line2498 of fileRISCVISelLowering.cpp.

Referencesassert(),llvm::MVT::getHalfNumVectorElementsVT(),llvm::details::FixedOrScalableQuantity< LeafTy, ValueTy >::getKnownMinValue(),getLMUL(),getRegClassIDForVecVT(),getSubregIndexByMVT(),llvm::MVT::getVectorElementCount(),llvm::MVT::isRISCVVectorTuple(),llvm::MVT::isScalableVector(), andTRI.

Referenced byllvm::RISCVDAGToDAGISel::Select().

◆ emitDynamicProbedAlloc()

MachineBasicBlock * RISCVTargetLowering::emitDynamicProbedAlloc (MachineInstr &MI, MachineBasicBlock *MBB) const

Definition at line22905 of fileRISCVISelLowering.cpp.

Referencesllvm::MachineInstrBuilder::addImm(),llvm::MachineInstrBuilder::addMBB(),llvm::MachineInstrBuilder::addReg(),llvm::MachineBasicBlock::addSuccessor(),llvm::MachineBasicBlock::begin(),llvm::BuildMI(),llvm::MachineFunction::CreateMachineBasicBlock(),llvm::MachineRegisterInfo::createVirtualRegister(),DL,llvm::MachineBasicBlock::end(),llvm::MachineBasicBlock::findDebugLoc(),llvm::MachineBasicBlock::getBasicBlock(),llvm::RISCVSubtarget::getFrameLowering(),llvm::MachineFunction::getInfo(),llvm::RISCVSubtarget::getInstrInfo(),llvm::ilist_node_impl< OptionsT >::getIterator(),llvm::MachineBasicBlock::getParent(),llvm::MachineFunction::getRegInfo(),llvm::TargetFrameLowering::getStackAlign(),getStackProbeSize(),llvm::RISCVSubtarget::getTargetLowering(),llvm::MachineFunction::insert(),llvm::RISCVSubtarget::is64Bit(),MBB,MBBI,MI,llvm::MachineInstr::NoFlags,llvm::MachineBasicBlock::splice(),SPReg,TII, andllvm::MachineBasicBlock::transferSuccessorsAndUpdatePHIs().

Referenced byEmitInstrWithCustomInserter().

◆ EmitInstrWithCustomInserter()

MachineBasicBlock * RISCVTargetLowering::EmitInstrWithCustomInserter (MachineInstr &MI, MachineBasicBlock *MBB) const
override virtual

This method should be implemented by targets that mark instructions with the 'usesCustomInserter' flag.

These instructions are special in various ways, which require special support to insert. The specified MachineInstr is created but not inserted into any basic blocks, and this method is called to expand it into a sequence of instructions, potentially also creating new basic blocks and control flow. As long as the returned basic block is different (i.e., we created a new one), the custom inserter is free to modify the rest of MBB.

Reimplemented fromllvm::TargetLowering.

Definition at line19829 of fileRISCVISelLowering.cpp.

Referencesassert(),llvm::MachineOperand::CreateReg(),emitBuildPairF64Pseudo(),emitDynamicProbedAlloc(),emitFROUND(),llvm::TargetLoweringBase::emitPatchPoint(),emitQuietFCMP(),emitReadCounterWidePseudo(),emitSelectPseudo(),emitSplitF64Pseudo(),emitVFROUND_NOEXCEPT_MASK(),llvm::RISCVSubtarget::is64Bit(),llvm_unreachable,MI, andllvm::report_fatal_error().

◆ EmitKCFICheck()

MachineInstr * RISCVTargetLowering::EmitKCFICheck (MachineBasicBlock &MBB, MachineBasicBlock::instr_iterator &MBBI, const TargetInstrInfo *TII) const
override virtual

Reimplemented fromllvm::TargetLoweringBase.

Definition at line22612 of fileRISCVISelLowering.cpp.

Referencesllvm::MachineInstrBuilder::addImm(),llvm::MachineInstrBuilder::addReg(),assert(),llvm::BuildMI(),llvm::MachineInstrBuilder::getInstr(),llvm::is_contained(),MBB,MBBI, andTII.

◆ emitLeadingFence()

Instruction * RISCVTargetLowering::emitLeadingFence (IRBuilderBase &Builder, Instruction *Inst, AtomicOrdering Ord) const
override virtual

Inserts in the IR a target-specific intrinsic specifying a fence.

It is called by AtomicExpandPass before expanding an AtomicRMW/AtomicCmpXchg/AtomicStore/AtomicLoad if shouldInsertFencesForAtomic returns true.

Inst is the original atomic instruction, prior to other expansions that may be performed.

This function should either return a nullptr, or a pointer to an IR-level Instruction*. Even complex fence sequences can be represented by a single Instruction* through an intrinsic to be lowered later.

The default implementation emits an IR fence before any release (or stronger) operation that stores, and after any acquire (or stronger) operation. This is generally a correct implementation, but backends may override if they wish to use alternative schemes (e.g. the PowerPC standard ABI uses a fence before a seq_cst load instead of after a seq_cst store).

Reimplemented fromllvm::TargetLoweringBase.

Definition at line21498 of fileRISCVISelLowering.cpp.

Referencesllvm::IRBuilderBase::CreateFence(),llvm::isReleaseOrStronger(),llvm::Release, andllvm::SequentiallyConsistent.
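
A minimal sketch of a leading-fence hook that follows the default scheme described above, emitting an IR fence in front of release-or-stronger stores. It is not the RISC-V implementation; it only shows the shape of the hook using the public IRBuilderBase API.

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instructions.h"
#include "llvm/Support/AtomicOrdering.h"
#include "llvm/Support/Casting.h"

using namespace llvm;

// Hypothetical hook body: fence before release-or-stronger stores only.
Instruction *emitExampleLeadingFence(IRBuilderBase &Builder, Instruction *Inst,
                                     AtomicOrdering Ord) {
  if (isa<StoreInst>(Inst) && isReleaseOrStronger(Ord))
    return Builder.CreateFence(AtomicOrdering::Release);
  return nullptr; // nothing needed before loads in this simplified scheme
}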

◆ emitMaskedAtomicCmpXchgIntrinsic()

Value * RISCVTargetLowering::emitMaskedAtomicCmpXchgIntrinsic (IRBuilderBase &Builder, AtomicCmpXchgInst *CI, Value *AlignedAddr, Value *CmpVal, Value *NewVal, Value *Mask, AtomicOrdering Ord) const
override virtual

Perform a masked cmpxchg using a target-specific intrinsic.

This represents the core LL/SC loop which will be lowered at a late stage by the backend. The target-specific intrinsic returns the loaded value and is not responsible for masking and shifting the result.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line21686 of fileRISCVISelLowering.cpp.

Referencesllvm::IRBuilderBase::CreateIntrinsic(),llvm::IRBuilderBase::CreateSExt(),llvm::IRBuilderBase::CreateTrunc(),llvm::IRBuilderBase::getInt32Ty(),llvm::IRBuilderBase::getInt64Ty(),llvm::IRBuilderBase::getIntN(),llvm::Value::getType(), andllvm::RISCVSubtarget::getXLen().

◆ emitMaskedAtomicRMWIntrinsic()

Value * RISCVTargetLowering::emitMaskedAtomicRMWIntrinsic (IRBuilderBase &Builder, AtomicRMWInst *AI, Value *AlignedAddr, Value *Incr, Value *Mask, Value *ShiftAmt, AtomicOrdering Ord) const
override virtual

Perform a masked atomicrmw using a target-specific intrinsic.

This represents the core LL/SC loop which will be lowered at a late stage by the backend. The target-specific intrinsic returns the loaded value and is not responsible for masking and shifting the result.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line21613 of fileRISCVISelLowering.cpp.

Referencesllvm::AtomicRMWInst::And,llvm::IRBuilderBase::CreateAtomicRMW(),llvm::IRBuilderBase::CreateCall(),llvm::IRBuilderBase::CreateNot(),llvm::IRBuilderBase::CreateSExt(),llvm::IRBuilderBase::CreateSub(),llvm::IRBuilderBase::CreateTrunc(),DL,llvm::AtomicRMWInst::getAlign(),llvm::Instruction::getDataLayout(),llvm::IRBuilderBase::getInt32Ty(),llvm::IRBuilderBase::getInt64Ty(),llvm::IRBuilderBase::getIntN(),getIntrinsicForMaskedAtomicRMWBinOp(),llvm::Instruction::getModule(),llvm::AtomicRMWInst::getOperation(),llvm::AtomicRMWInst::getOrdering(),llvm::Intrinsic::getOrInsertDeclaration(),llvm::Value::getType(),llvm::AtomicRMWInst::getValOperand(),llvm::RISCVSubtarget::getXLen(),llvm::ConstantInt::isMinusOne(),llvm::ConstantInt::isZero(),llvm::AtomicRMWInst::Max,llvm::AtomicRMWInst::Min,llvm::AtomicRMWInst::Or, andllvm::AtomicRMWInst::Xchg.
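
The masking and shifting the description refers to can be illustrated in plain C++: for a byte-sized atomic, the word that contains the byte is accessed through an aligned address, while Mask and ShiftAmt select the byte inside it. This is a generic sketch of the technique (assuming little-endian byte order and 32-bit words), not code taken from the RISC-V lowering.

#include <cassert>
#include <cstdint>

struct MaskedOperands {
  uintptr_t AlignedAddr; // address rounded down to the word boundary
  uint32_t ShiftAmt;     // bit offset of the element within the word
  uint32_t Mask;         // bit mask selecting the element within the word
};

MaskedOperands computeMaskedOperands(uintptr_t ByteAddr) {
  MaskedOperands MO;
  MO.AlignedAddr = ByteAddr & ~uintptr_t(3); // align to the containing word
  MO.ShiftAmt = uint32_t(ByteAddr & 3) * 8;  // little-endian byte offset
  MO.Mask = uint32_t(0xFF) << MO.ShiftAmt;   // select that byte
  return MO;
}

int main() {
  MaskedOperands MO = computeMaskedOperands(0x1003);
  assert(MO.AlignedAddr == 0x1000 && MO.ShiftAmt == 24 && MO.Mask == 0xFF000000u);
  return 0;
}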

◆ emitTrailingFence()

Instruction * RISCVTargetLowering::emitTrailingFence (IRBuilderBase &Builder, Instruction *Inst, AtomicOrdering Ord) const
override virtual

Reimplemented fromllvm::TargetLoweringBase.

Definition at line21514 of fileRISCVISelLowering.cpp.

Referencesllvm::Acquire,llvm::IRBuilderBase::CreateFence(),llvm::isAcquireOrStronger(), andllvm::SequentiallyConsistent.

◆ expandIndirectJTBranch()

SDValue RISCVTargetLowering::expandIndirectJTBranch (const SDLoc &dl, SDValue Value, SDValue Addr, int JTI, SelectionDAG &DAG) const
override virtual

Expands target specific indirect branch for the case of JumpTable expansion.

Reimplemented fromllvm::TargetLowering.

Definition at line22800 of fileRISCVISelLowering.cpp.

ReferencesAddr,llvm::TargetLowering::expandIndirectJTBranch(),llvm::SelectionDAG::getJumpTableDebugInfo(),llvm::SelectionDAG::getNode(),llvm::SelectionDAG::getTarget(),llvm::TargetMachine::getTargetTriple(),llvm::Triple::isOSBinFormatCOFF(), andllvm::RISCVISD::SW_GUARDED_BRIND.

◆ fallBackToDAGISel()

bool RISCVTargetLowering::fallBackToDAGISel (const Instruction &Inst) const
override virtual

Reimplemented fromllvm::TargetLoweringBase.

Definition at line22741 of fileRISCVISelLowering.cpp.

Referencesllvm::User::getNumOperands(),llvm::Instruction::getOpcode(),llvm::User::getOperand(),llvm::Value::getType(), andllvm::Type::isScalableTy().

◆ getConstraintType()

RISCVTargetLowering::ConstraintType RISCVTargetLowering::getConstraintType(StringRef Constraint) const
overridevirtual

getConstraintType - Given a constraint letter, return the type of constraint it is for this target.

Reimplemented fromllvm::TargetLowering.

Definition at line21148 of fileRISCVISelLowering.cpp.

Referencesllvm::TargetLowering::C_Immediate,llvm::TargetLowering::C_Memory,llvm::TargetLowering::C_Other,llvm::TargetLowering::C_RegisterClass,llvm::TargetLowering::getConstraintType(), andllvm::StringRef::size().

◆ getContainerForFixedLengthVector()

MVT RISCVTargetLowering::getContainerForFixedLengthVector(MVT VT) const

Definition at line2710 of fileRISCVISelLowering.cpp.

ReferencesgetSubtarget().

Referenced byllvm::RISCVTTIImpl::getScalarizationOverhead(),isLegalInterleavedAccessType(),LowerOperation(),ReplaceNodeResults(), andRISCVTargetLowering().

◆ getCustomCtpopCost()

unsigned RISCVTargetLowering::getCustomCtpopCost(EVT VT,
ISD::CondCode Cond 
) const
overridevirtual

Return the maximum number of "x & (x - 1)" operations that can be done instead of deferring to a custom CTPOP.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line22707 of fileRISCVISelLowering.cpp.

ReferencesisCtpopFast().
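
The "x & (x - 1)" idiom clears the lowest set bit each time it is applied, so a bounded population-count test needs only that many and/subtract pairs instead of a full CTPOP. A hypothetical stand-alone illustration:

#include <cassert>
#include <cstdint>

// Returns true if X has at most N set bits, using N applications of x & (x-1).
bool popcountAtMost(uint64_t X, unsigned N) {
  for (unsigned I = 0; I < N; ++I)
    X &= X - 1; // clear the lowest set bit (no effect once X is zero)
  return X == 0;
}

int main() {
  assert(popcountAtMost(0b1010, 2));  // two set bits
  assert(!popcountAtMost(0b1011, 2)); // three set bits
  assert(popcountAtMost(0, 0));       // zero set bits
  return 0;
}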

◆ getExceptionPointerRegister()

Register RISCVTargetLowering::getExceptionPointerRegister (const Constant *PersonalityFn) const
override virtual

If a physical register, this returns the register that receives the exception address on entry to an EH pad.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line21895 of fileRISCVISelLowering.cpp.

◆ getExceptionSelectorRegister()

Register RISCVTargetLowering::getExceptionSelectorRegister (const Constant *PersonalityFn) const
override virtual

If a physical register, this returns the register that receives the exception typeid on entry to a landing pad.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line21900 of fileRISCVISelLowering.cpp.

◆ getExtendForAtomicCmpSwapArg()

ISD::NodeType RISCVTargetLowering::getExtendForAtomicCmpSwapArg() const
overridevirtual

Returns how the platform's atomic compare and swap expects its comparison value to be extended (ZERO_EXTEND, SIGN_EXTEND, or ANY_EXTEND).

This is separate from getExtendForAtomicOps, which is concerned with the sign-extension of the instruction's output, whereas here we are concerned with the sign-extension of the input. For targets with compare-and-swap instructions (or sub-word comparisons in their LL/SC loop expansions), the input can be ANY_EXTEND, but the output will still have a specific extension.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line21890 of fileRISCVISelLowering.cpp.

Referencesllvm::ISD::ANY_EXTEND, andllvm::ISD::SIGN_EXTEND.

◆ getExtendForAtomicOps()

ISD::NodeType llvm::RISCVTargetLowering::getExtendForAtomicOps() const
inlineoverridevirtual

Returns how the platform's atomic operations are extended (ZERO_EXTEND, SIGN_EXTEND, or ANY_EXTEND).

Reimplemented fromllvm::TargetLoweringBase.

Definition at line719 of fileRISCVISelLowering.h.

Referencesllvm::ISD::SIGN_EXTEND.

◆ getIndexedAddressParts()

bool RISCVTargetLowering::getIndexedAddressParts (SDNode *Op, SDValue &Base, SDValue &Offset, ISD::MemIndexedMode &AM, SelectionDAG &DAG) const

Definition at line21763 of fileRISCVISelLowering.cpp.

Referencesllvm::ISD::ADD,llvm::sampleprof::Base,llvm::Offset,RHS, andllvm::ISD::SUB.

Referenced bygetPostIndexedAddressParts(), andgetPreIndexedAddressParts().

◆ getInlineAsmMemConstraint()

InlineAsm::ConstraintCode RISCVTargetLowering::getInlineAsmMemConstraint(StringRef ConstraintCode) const
overridevirtual

Reimplemented fromllvm::TargetLowering.

Definition at line21444 of fileRISCVISelLowering.cpp.

Referencesllvm::InlineAsm::A,llvm::TargetLowering::getInlineAsmMemConstraint(), andllvm::StringRef::size().

◆ getIRStackGuard()

Value * RISCVTargetLowering::getIRStackGuard (IRBuilderBase &IRB) const
override virtual

If the target has a standard location for the stack protector cookie, returns the address of that location.

Otherwise, returns nullptr.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line22245 of fileRISCVISelLowering.cpp.

Referencesllvm::IRBuilderBase::GetInsertBlock(),llvm::TargetLoweringBase::getIRStackGuard(),llvm::BasicBlock::getModule(),llvm::RISCVSubtarget::isTargetAndroid(),llvm::RISCVSubtarget::isTargetFuchsia(),llvm::Offset, anduseTpOffset().

◆ getJumpTableEncoding()

unsigned RISCVTargetLowering::getJumpTableEncoding() const
overridevirtual

Return the entry encoding for a jump table in the current function.

The returned value is a member of the MachineJumpTableInfo::JTEntryKind enum.

Reimplemented fromllvm::TargetLowering.

Definition at line21733 of fileRISCVISelLowering.cpp.

Referencesllvm::MachineJumpTableInfo::EK_Custom32,getCodeModel(),llvm::TargetLowering::getJumpTableEncoding(),llvm::TargetLoweringBase::getTargetMachine(),llvm::RISCVSubtarget::is64Bit(),llvm::TargetLowering::isPositionIndependent(), andllvm::CodeModel::Small.

◆ getLegalZfaFPImm()

int RISCVTargetLowering::getLegalZfaFPImm (const APFloat &Imm, EVT VT) const

Definition at line2149 of fileRISCVISelLowering.cpp.

Referencesassert(), andllvm::RISCVLoadFPImm::getLoadFPImm().

Referenced byisFPImmLegal().

◆ getLMUL()

RISCVII::VLMUL RISCVTargetLowering::getLMUL(MVT VT)
static

Definition at line2359 of fileRISCVISelLowering.cpp.

Referencesassert(),llvm::details::FixedOrScalableQuantity< LeafTy, ValueTy >::getKnownMinValue(),llvm::MVT::getSizeInBits(),llvm::MVT::getVectorElementType(),llvm::MVT::isRISCVVectorTuple(),llvm::MVT::isScalableVector(),llvm_unreachable,llvm::RISCVII::LMUL_1,llvm::RISCVII::LMUL_2,llvm::RISCVII::LMUL_4,llvm::RISCVII::LMUL_8,llvm::RISCVII::LMUL_F2,llvm::RISCVII::LMUL_F4,llvm::RISCVII::LMUL_F8, andllvm::MVT::SimpleTy.

Referenced bydecomposeSubvectorInsertExtractToSubRegs(),getLMULCost(),getRegClassIDForVecVT(),getSingleShuffleSrc(),getSubregIndexByMVT(),isLegalInterleavedAccessType(),isM1OrSmaller(),lowerBUILD_VECTOR(),lowerVectorIntrinsicScalars(),RISCVTargetLowering(),llvm::RISCVDAGToDAGISel::Select(),llvm::RISCVDAGToDAGISel::selectVLSEG(),llvm::RISCVDAGToDAGISel::selectVLSEGFF(),llvm::RISCVDAGToDAGISel::selectVLXSEG(),llvm::RISCVDAGToDAGISel::selectVSSEG(), andllvm::RISCVDAGToDAGISel::selectVSXSEG().

◆ getLMULCost()

InstructionCost RISCVTargetLowering::getLMULCost(MVT VT) const

Return the cost of LMUL for linear operations.

Definition at line2826 of fileRISCVISelLowering.cpp.

Referencesllvm::RISCVVType::decodeVLMUL(),llvm::divideCeil(),llvm::RISCVSubtarget::getDLenFactor(),llvm::InstructionCost::getInvalid(),getLMUL(),llvm::RISCVSubtarget::getRealMinVLen(),llvm::MVT::getSizeInBits(),llvm::MVT::isScalableVector(), andllvm::MVT::isVector().

Referenced byllvm::RISCVTTIImpl::getInterleavedMemoryOpCost(),llvm::RISCVTTIImpl::getMemoryOpCost(),llvm::RISCVTTIImpl::getShuffleCost(),getVRGatherVICost(),getVRGatherVVCost(),getVSlideVICost(), andgetVSlideVXCost().

◆ getMaxSupportedInterleaveFactor()

unsigned llvm::RISCVTargetLowering::getMaxSupportedInterleaveFactor() const
inlineoverridevirtual

Get the maximum supported factor for interleaved memory accesses.

Default to be the minimum interleave factor: 2.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line895 of fileRISCVISelLowering.h.

◆ getNumRegisters()

unsigned RISCVTargetLowering::getNumRegisters (LLVMContext &Context, EVT VT, std::optional<MVT> RegisterVT = std::nullopt) const
override virtual

Return the number of registers for a given MVT, for inline assembly.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line2262 of fileRISCVISelLowering.cpp.

Referencesllvm::TargetLoweringBase::getNumRegisters(), andllvm::RISCVSubtarget::is64Bit().

◆ getNumRegistersForCallingConv()

unsigned RISCVTargetLowering::getNumRegistersForCallingConv (LLVMContext &Context, CallingConv::ID CC, EVT VT) const
override virtual

Return the number of registers for a given MVT, ensuring vectors are treated as a series of gpr sized integers.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line2272 of fileRISCVISelLowering.cpp.

ReferencesCC,llvm::TargetLoweringBase::getNumRegistersForCallingConv(),llvm::RISCVSubtarget::hasStdExtFOrZfinx(), andllvm::RISCVSubtarget::hasStdExtZfhminOrZhinxmin().

◆ getOptimalMemOpType()

EVT RISCVTargetLowering::getOptimalMemOpType (const MemOp &Op, const AttributeList &) const
override virtual

Returns the target specific optimal type for load and store operations as a result of memset, memcpy, and memmove lowering.

It returns EVT::Other if the type should be determined using generic target-independent logic.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line22012 of fileRISCVISelLowering.cpp.

Referencesllvm::RISCVSubtarget::getELen(),llvm::MVT::getIntegerVT(),llvm::RISCVSubtarget::getRealMinVLen(),llvm::MVT::getStoreSize(),llvm::MVT::getVectorVT(),llvm::AttributeList::hasFnAttr(),llvm::RISCVSubtarget::hasVInstructions(),llvm::RISCV::RVVBitsPerBlock, andllvm::Align::value().

◆ getPostIndexedAddressParts()

bool RISCVTargetLowering::getPostIndexedAddressParts (SDNode *, SDNode *, SDValue &, SDValue &, ISD::MemIndexedMode &, SelectionDAG &) const
override virtual

Returns true by value, base pointer and offset pointer and addressing mode by reference if this node can be combined with a load / store to form a post-indexed load / store.

Reimplemented fromllvm::TargetLowering.

Definition at line21821 of fileRISCVISelLowering.cpp.

Referencesllvm::ISD::ADD,llvm::sampleprof::Base,getIndexedAddressParts(),llvm::RISCVSubtarget::is64Bit(),N,llvm::Offset,llvm::ISD::POST_INC, andPtr.

◆ getPreIndexedAddressParts()

bool RISCVTargetLowering::getPreIndexedAddressParts (SDNode *, SDValue &, SDValue &, ISD::MemIndexedMode &, SelectionDAG &) const
override virtual

Returns true by value, base pointer and offset pointer and addressing mode by reference if the node's address can be legally represented as pre-indexed load / store address.

Reimplemented fromllvm::TargetLowering.

Definition at line21799 of fileRISCVISelLowering.cpp.

Referencesllvm::sampleprof::Base,getIndexedAddressParts(),N,llvm::Offset,llvm::ISD::PRE_INC, andPtr.

◆ getRegClassIDForLMUL()

unsigned RISCVTargetLowering::getRegClassIDForLMUL(RISCVII::VLMUL LMul)
static

Definition at line2406 of fileRISCVISelLowering.cpp.

Referencesllvm_unreachable,llvm::RISCVII::LMUL_1,llvm::RISCVII::LMUL_2,llvm::RISCVII::LMUL_4,llvm::RISCVII::LMUL_8,llvm::RISCVII::LMUL_F2,llvm::RISCVII::LMUL_F4, andllvm::RISCVII::LMUL_F8.

Referenced bygetRegClassIDForVecVT().

◆ getRegClassIDForVecVT()

unsigned RISCVTargetLowering::getRegClassIDForVecVT(MVT VT)
static

Definition at line2447 of fileRISCVISelLowering.cpp.

Referencesassert(),llvm::details::FixedOrScalableQuantity< LeafTy, ValueTy >::getKnownMinValue(),getLMUL(),getRegClassIDForLMUL(),llvm::MVT::getRISCVVectorTupleNumFields(),llvm::MVT::getSizeInBits(),llvm::MVT::getVectorElementType(),llvm::MVT::isRISCVVectorTuple(),llvm_unreachable, andllvm::RISCV::RVVBitsPerBlock.

Referenced bydecomposeSubvectorInsertExtractToSubRegs(),RISCVTargetLowering(), andllvm::RISCVDAGToDAGISel::Select().

◆ getRegForInlineAsmConstraint()

std::pair<unsigned, const TargetRegisterClass *> RISCVTargetLowering::getRegForInlineAsmConstraint (const TargetRegisterInfo *TRI, StringRef Constraint, MVT VT) const
override virtual

Given a physical register constraint (e.g.

{edx}), return the register number and the register class for the register.

Given a register class constraint, like 'r', if this corresponds directly to an LLVM register class, return a register of 0 and the register class pointer.

This should only be used for C_Register constraints. On error, this returns a register number of 0 and a null register class pointer.

Reimplemented fromllvm::TargetLowering.

Definition at line21176 of fileRISCVISelLowering.cpp.

Referencesassert(),llvm::StringSwitch< T, R >::Case(),llvm::StringSwitch< T, R >::Cases(),llvm::StringSwitch< T, R >::Default(),llvm::TargetLowering::getRegForInlineAsmConstraint(),llvm::RISCVSubtarget::hasVInstructions(),llvm::RISCVSubtarget::is64Bit(),llvm::MVT::isVector(),llvm::StringRef::lower(),llvm::MVT::SimpleTy,llvm::StringRef::size(), andTRI.

◆ getRegisterByName()

Register RISCVTargetLowering::getRegisterByName (const char *RegName, LLT VT, const MachineFunction &MF) const
override virtual

Returns the register with the specified architectural or ABI name.

This method is necessary to lower the llvm.read_register.* and llvm.write_register.* intrinsics. Allocatable registers must be reserved with the clang -ffixed-xX flag for access to be allowed.

Reimplemented fromllvm::TargetLowering.

Definition at line22633 of fileRISCVISelLowering.cpp.

Referencesllvm::RISCVSubtarget::getRegisterInfo(),llvm::RISCVRegisterInfo::getReservedRegs(),llvm::RISCVSubtarget::isRegisterReservedByUser(),MatchRegisterAltName(),MatchRegisterName(),RegName,llvm::report_fatal_error(), andllvm::BitVector::test().

◆ getRegisterTypeForCallingConv()

MVT RISCVTargetLowering::getRegisterTypeForCallingConv (LLVMContext &Context, CallingConv::ID CC, EVT VT) const
override virtual

Return the register type for a given MVT, ensuring vectors are treated as a series of gpr sized integers.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line2247 of fileRISCVISelLowering.cpp.

ReferencesCC,llvm::TargetLoweringBase::getRegisterTypeForCallingConv(),llvm::RISCVSubtarget::hasStdExtFOrZfinx(), andllvm::RISCVSubtarget::hasStdExtZfhminOrZhinxmin().

◆ getSetCCResultType()

EVT RISCVTargetLowering::getSetCCResultType (const DataLayout &DL, LLVMContext &Context, EVT VT) const
override virtual

Return the ValueType of the result of SETCC operations.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line1574 of fileRISCVISelLowering.cpp.

Referencesllvm::EVT::changeVectorElementTypeToInteger(),DL,llvm::TargetLoweringBase::getPointerTy(),llvm::EVT::getVectorElementCount(),llvm::EVT::getVectorVT(),llvm::RISCVSubtarget::hasVInstructions(),llvm::EVT::isScalableVector(),llvm::EVT::isVector(), andllvm::RISCVSubtarget::useRVVForFixedLengthVectors().

Referenced byLowerOperation().

◆ getStackProbeSize()

unsigned RISCVTargetLowering::getStackProbeSize (const MachineFunction &MF, Align StackAlign) const

Definition at line22863 of fileRISCVISelLowering.cpp.

Referencesllvm::alignDown(),llvm::Function::getFnAttributeAsParsedInteger(),llvm::MachineFunction::getFunction(), andllvm::Align::value().

Referenced byemitDynamicProbedAlloc(), andemitStackProbeInline().

◆ getSubregIndexByMVT()

unsigned RISCVTargetLowering::getSubregIndexByMVT(MVT VT,
unsigned Index 
)
static

Definition at line2424 of fileRISCVISelLowering.cpp.

ReferencesgetLMUL(),llvm_unreachable,llvm::RISCVII::LMUL_1,llvm::RISCVII::LMUL_2,llvm::RISCVII::LMUL_4,llvm::RISCVII::LMUL_F2,llvm::RISCVII::LMUL_F4, andllvm::RISCVII::LMUL_F8.

Referenced bydecomposeSubvectorInsertExtractToSubRegs().

◆ getSubtarget()

constRISCVSubtarget & llvm::RISCVTargetLowering::getSubtarget() const
inline

Definition at line517 of fileRISCVISelLowering.h.

Referenced bygetContainerForFixedLengthVector(), andunpackFromRegLoc().

◆ getTargetConstantFromLoad()

const Constant * RISCVTargetLowering::getTargetConstantFromLoad (LoadSDNode *LD) const
override virtual

This method returns the constant pool value that will be loaded by LD.

NOTE: You must check for implicit extensions of the constant by LD.

Reimplemented fromllvm::TargetLowering.

Definition at line19134 of fileRISCVISelLowering.cpp.

Referencesllvm::RISCVISD::ADD_LO,assert(),llvm::LoadSDNode::getBasePtr(),llvm::RISCVISD::HI,llvm::ISD::isNormalLoad(),llvm::RISCVISD::LLA,llvm::RISCVII::MO_HI,llvm::RISCVII::MO_LO, andPtr.

◆ getTargetMMOFlags()[1/2]

MachineMemOperand::Flags RISCVTargetLowering::getTargetMMOFlags (const Instruction &I) const
override virtual

This callback is used to inspect load/store instructions and add target-specific MachineMemOperand flags to them.

The default implementation does nothing.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line22649 of fileRISCVISelLowering.cpp.

Referencesassert(),llvm::MDNode::getOperand(),I,llvm::MachineMemOperand::MONone,llvm::MONontemporalBit0, andllvm::MONontemporalBit1.

Referenced byareTwoSDNodeTargetMMOFlagsMergeable(), andgetTgtMemIntrinsic().

◆ getTargetMMOFlags()[2/2]

MachineMemOperand::Flags RISCVTargetLowering::getTargetMMOFlags (const MemSDNode &Node) const
override virtual

This callback is used to inspect load/store SDNodes.

The default implementation does nothing.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line22684 of fileRISCVISelLowering.cpp.

Referencesllvm::MachineMemOperand::MONone,llvm::MONontemporalBit0, andllvm::MONontemporalBit1.

◆ getTargetNodeName()

const char * RISCVTargetLowering::getTargetNodeName (unsigned Opcode) const
override virtual

This method returns the name of a target specific DAG node.

Reimplemented fromllvm::TargetLowering.

Definition at line20877 of fileRISCVISelLowering.cpp.

Referencesllvm::RISCVISD::FIRST_NUMBER, andNODE_NAME_CASE.

◆ getTgtMemIntrinsic()

bool RISCVTargetLowering::getTgtMemIntrinsic (IntrinsicInfo &, const CallInst &, MachineFunction &, unsigned) const
override virtual

Given an intrinsic, checks if on the target the intrinsic will need to map to a MemIntrinsicNode (touches memory).

If this is the case, it returns true and store the intrinsic information into the IntrinsicInfo that was passed to the function.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line1621 of fileRISCVISelLowering.cpp.

ReferencesDL,llvm::Type::getContext(),llvm::Type::getIntNTy(),getName(),llvm::Type::getScalarType(),llvm::Type::getStructElementType(),getTargetMMOFlags(),llvm::TargetLoweringBase::getValueType(),I,Info,llvm::ISD::INTRINSIC_VOID,llvm::ISD::INTRINSIC_W_CHAIN,llvm::Type::isStructTy(),llvm::Type::isTargetExtTy(),llvm::MachineMemOperand::MOLoad,llvm::MachineMemOperand::MONonTemporal,llvm::MachineMemOperand::MOStore,llvm::MachineMemOperand::MOVolatile, andllvm::MemoryLocation::UnknownSize.

◆ getVectorTypeBreakdownForCallingConv()

unsigned RISCVTargetLowering::getVectorTypeBreakdownForCallingConv (LLVMContext &Context, CallingConv::ID CC, EVT VT, EVT &IntermediateVT, unsigned &NumIntermediates, MVT &RegisterVT) const
override virtual

Certain targets such as MIPS require that some types such as vectors are always broken down into scalars in some contexts.

This occurs even if the vector type is legal.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line2284 of fileRISCVISelLowering.cpp.

ReferencesCC, andllvm::TargetLoweringBase::getVectorTypeBreakdownForCallingConv().

◆ getVRGatherVICost()

InstructionCost RISCVTargetLowering::getVRGatherVICost(MVT VT) const

Return the cost of a vrgather.vi (or vx) instruction for the type VT.

vrgather.vi/vx may be linear in the number of vregs implied by LMUL, or may track the vrgather.vv cost. It is implementation-dependent.

Definition at line2859 of fileRISCVISelLowering.cpp.

ReferencesgetLMULCost().

◆ getVRGatherVVCost()

InstructionCost RISCVTargetLowering::getVRGatherVVCost(MVT VT) const

Return the cost of a vrgather.vv instruction for the type VT.

vrgather.vv is generally quadratic in the number of vregs implied by LMUL. Note that the operands (index and possibly mask) are handled separately.

Definition at line2852 of fileRISCVISelLowering.cpp.

ReferencesgetLMULCost().

◆ getVSlideVICost()

InstructionCost RISCVTargetLowering::getVSlideVICost(MVT VT) const

Return the cost of a vslidedown.vi or vslideup.vi instruction for the type VT.

(This does not cover the vslide1up or vslide1down variants.) Slides may be linear in the number of vregs implied by LMUL, or may track the vrgather.vv cost. It is implementation-dependent.

Definition at line2875 of fileRISCVISelLowering.cpp.

ReferencesgetLMULCost().

◆ getVSlideVXCost()

InstructionCost RISCVTargetLowering::getVSlideVXCost(MVT VT) const

Return the cost of a vslidedown.vx or vslideup.vx instruction for the type VT.

(This does not cover the vslide1up or vslide1down variants.) Slides may be linear in the number of vregs implied by LMUL, or may track the vrgather.vv cost. It is implementation-dependent.

Definition at line2867 of fileRISCVISelLowering.cpp.

ReferencesgetLMULCost().

◆ hasAndNotCompare()

bool RISCVTargetLowering::hasAndNotCompare(SDValue Y) const
overridevirtual

Return true if the target should transform: (X & Y) == Y —> (~X & Y) == 0 (X & Y) != Y —> (~X & Y) != 0.

This may be profitable if the target has a bitwise and-not operation that sets comparison flags. A target may want to limit the transformation based on the type of Y or if Y is a constant.

Note that the transform will not occur if Y is known to be a power-of-2 because a mask and compare of a single bit can be handled by inverting the predicate, for example: (X & 8) == 8 —> (X & 8) != 0

Reimplemented fromllvm::TargetLoweringBase.

Definition at line2027 of fileRISCVISelLowering.cpp.

Referencesllvm::EVT::isVector(), andY.
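
The rewrite is an identity: (X & Y) == Y holds exactly when no bit of Y is missing from X, which is the same as (~X & Y) == 0 and maps to a single and-not (for example Zbb's andn) plus a compare against zero. A plain C++ check of the equivalence, with made-up helper names:

#include <cassert>
#include <cstdint>

bool maskedEq(uint32_t X, uint32_t Y) { return (X & Y) == Y; }
bool andNotEq(uint32_t X, uint32_t Y) { return (~X & Y) == 0; }

int main() {
  for (uint32_t X : {0u, 0xF0u, 0xFFu, 0x123u})
    for (uint32_t Y : {0u, 0x0Fu, 0xF0u, 0x103u})
      assert(maskedEq(X, Y) == andNotEq(X, Y));
  return 0;
}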

◆ hasBitTest()

bool RISCVTargetLowering::hasBitTest(SDValue X,
SDValue Y 
) const
overridevirtual

Return true if the target has a bit-test instruction: (X & (1 << Y)) ==/!= 0 This knowledge can be used to prevent breaking the pattern, or creating it if it could be recognized.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line2038 of fileRISCVISelLowering.cpp.

Referencesllvm::CallingConv::C,X, andY.

◆ hasInlineStackProbe()

bool RISCVTargetLowering::hasInlineStackProbe (const MachineFunction &MF) const
override virtual

True if stack clash protection is enabled for this function.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line22853 of fileRISCVISelLowering.cpp.

Referencesllvm::Function::getFnAttribute(),llvm::MachineFunction::getFunction(),llvm::Attribute::getValueAsString(), andllvm::Function::hasFnAttribute().

Referenced byllvm::RISCVFrameLowering::emitPrologue().

◆ isCheapToSpeculateCtlz()

bool RISCVTargetLowering::isCheapToSpeculateCtlz (Type *Ty) const
override virtual

Return true if it is cheap to speculate a call to intrinsic ctlz.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line2006 of fileRISCVISelLowering.cpp.

Referencesllvm::RISCVSubtarget::is64Bit().

◆ isCheapToSpeculateCttz()

bool RISCVTargetLowering::isCheapToSpeculateCttz (Type *Ty) const
override virtual

Return true if it is cheap to speculate a call to intrinsic cttz.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line2001 of fileRISCVISelLowering.cpp.

Referencesllvm::RISCVSubtarget::is64Bit().

◆ isCtpopFast()

bool RISCVTargetLowering::isCtpopFast(EVT VT) const
overridevirtual

Return true if ctpop instruction is fast.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line22698 of fileRISCVISelLowering.cpp.

Referencesllvm::EVT::isFixedLengthVector(),llvm::EVT::isScalableVector(), andllvm::TargetLoweringBase::isTypeLegal().

Referenced bygetCustomCtpopCost().

◆ isDesirableToCommuteWithShift()

bool RISCVTargetLowering::isDesirableToCommuteWithShift (const SDNode *N, CombineLevel Level) const
override virtual

Return true if it is profitable to move this shift by a constant amount through its operand, adjusting any immediate operands as necessary to preserve semantics.

This transformation may not be desirable if it disrupts a particularly auspicious target-specific tree (e.g. bitfield extraction in AArch64). By default, it returns true.

Parameters
N - the shift node
Level - the current DAGCombine legalization level.

Reimplemented fromllvm::TargetLowering.

Definition at line18696 of fileRISCVISelLowering.cpp.

Referencesllvm::ISD::ADD,assert(),llvm::RISCVMatInt::getIntMatCost(),llvm::SDValue::getNode(),llvm::SDValue::getOpcode(),llvm::SDNode::getOpcode(),llvm::SDNode::getOperand(),llvm::APInt::getSExtValue(),llvm::APInt::getSignificantBits(),llvm::EVT::getSizeInBits(),llvm::SDValue::getValueType(),llvm::SDNode::hasOneUse(),isLegalAddImmediate(),llvm::EVT::isScalarInteger(),N,llvm::ISD::OR,llvm::ISD::SELECT,llvm::ISD::SHL,llvm::ISD::SIGN_EXTEND,llvm::ISD::SRA,llvm::ISD::SRL, andX.
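
The commutation this hook gates can be written out in scalar form: shifting an add-with-constant is the same as adding the pre-shifted constant, so the combine only pays off while the new immediate remains cheap to materialize (consistent with the getIntMatCost and isLegalAddImmediate references above). A hypothetical stand-alone illustration of the identity:

#include <cassert>
#include <cstdint>

uint64_t shiftOfAdd(uint64_t X, uint64_t C1, unsigned C2) { return (X + C1) << C2; }
uint64_t addOfShift(uint64_t X, uint64_t C1, unsigned C2) {
  return (X << C2) + (C1 << C2);
}

int main() {
  for (uint64_t X : {0ull, 3ull, 0xDEADBEEFull})
    assert(shiftOfAdd(X, 12, 4) == addOfShift(X, 12, 4));
  return 0;
}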

◆ isExtractSubvectorCheap()

bool RISCVTargetLowering::isExtractSubvectorCheap(EVT ResVT,
EVT SrcVT,
unsigned Index 
) const
overridevirtual

Return true if EXTRACT_SUBVECTOR is cheap for extracting this result type from this source type with this index.

This is needed because EXTRACT_SUBVECTOR usually has custom lowering that depends on the index of the first element, and only the target knows which lowering is cheap.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line2208 of fileRISCVISelLowering.cpp.

Referencesassert(),llvm::ISD::EXTRACT_SUBVECTOR,llvm::RISCVSubtarget::getRealMinVLen(),llvm::EVT::getSizeInBits(),llvm::EVT::getVectorElementType(),llvm::EVT::getVectorNumElements(),llvm::TargetLoweringBase::isOperationLegalOrCustom(), andllvm::EVT::isScalableVector().

◆ isFMAFasterThanFMulAndFAdd()

bool RISCVTargetLowering::isFMAFasterThanFMulAndFAdd (const MachineFunction &MF, EVT) const
override virtual

Return true if an FMA operation is faster than a pair of fmul and fadd instructions.

fmuladd intrinsics will be expanded to FMAs when this method returns true, otherwise fmuladd is expanded to fmul + fadd.

NOTE: This may be called before legalization on types for which FMAs are not legal, but should return true if those types will eventually legalize to types that support FMAs. After legalization, it will only be called on types that support FMAs (via Legal or Custom actions)

Targets that care about soft float support should return false when soft float code is being generated (i.e. use-soft-float).

Reimplemented fromllvm::TargetLoweringBase.

Definition at line21868 of fileRISCVISelLowering.cpp.

Referencesllvm::EVT::getScalarType(),llvm::EVT::getSimpleVT(),llvm::RISCVSubtarget::hasStdExtDOrZdinx(),llvm::RISCVSubtarget::hasStdExtFOrZfinx(),llvm::RISCVSubtarget::hasStdExtZfhOrZhinx(),llvm::RISCVSubtarget::hasVInstructionsF16(),llvm::EVT::isSimple(),llvm::EVT::isVector(), andllvm::MVT::SimpleTy.

◆ isFPImmLegal()

bool RISCVTargetLowering::isFPImmLegal (const APFloat &, EVT, bool ForCodeSize) const
override virtual

Returns true if the target can instruction select the specified FP immediate natively.

If false, the legalizer will materialize the FP immediate as a load from a constant pool.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line2169 of fileRISCVISelLowering.cpp.

ReferencesFPImmCost,llvm::RISCVMatInt::getIntMatCost(),getLegalZfaFPImm(),llvm::EVT::getScalarSizeInBits(),llvm::RISCVSubtarget::getXLen(),llvm::RISCVSubtarget::hasStdExtDOrZdinx(),llvm::RISCVSubtarget::hasStdExtFOrZfinx(), andllvm::RISCVSubtarget::hasStdExtZfhminOrZhinxmin().

◆ isIntDivCheap()

bool RISCVTargetLowering::isIntDivCheap(EVT VT, AttributeList Attr) const
override virtual

Return true if integer divide is usually cheaper than a sequence of several shifts, adds, and multiplies for this target.

The definition of "cheaper" may depend on whether we're optimizing for speed or for size.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line22220 of fileRISCVISelLowering.cpp.

Referencesllvm::AttributeList::hasFnAttr(), andllvm::EVT::isVector().

Referenced by ReplaceNodeResults().

◆ isLegalAddImmediate()

bool RISCVTargetLowering::isLegalAddImmediate(int64_t) const
override virtual

Return true if the specified immediate is legal add immediate, that is the target has add instructions which can add a register with the immediate without having to materialize the immediate into a register.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line1935 of fileRISCVISelLowering.cpp.

Referenced by isDesirableToCommuteWithShift().

◆ isLegalAddressingMode()

bool RISCVTargetLowering::isLegalAddressingMode(const DataLayout &DL, const AddrMode &AM, Type *Ty, unsigned AddrSpace, Instruction *I = nullptr) const
override virtual

Return true if the addressing mode represented by AM is legal for this target, for a load/store of the specified type.

isLegalAddressingMode - Return true if the addressing mode represented by AM is legal for this target, for a load/store of the specified type.

The type may be VoidTy, in which case only return true if the addressing mode is legal for a load/store of any legal type. TODO: Handle pre/postinc as well.

If the address space cannot be determined, it will be -1.

TODO: Remove default argument
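For example, an illustrative query only (TLI, DL and Ty stand for an already-constructed RISCVTargetLowering, DataLayout and Type; they are assumptions of this sketch): a pass can ask whether a reg+imm form is legal before folding an offset into an address.

TargetLoweringBase::AddrMode AM;
AM.HasBaseReg = true;   // base register present
AM.BaseOffs   = 2040;   // constant displacement; must fit a signed 12-bit field on RISC-V
AM.Scale      = 0;      // no scaled index register
bool Legal = TLI.isLegalAddressingMode(DL, AM, Ty, /*AddrSpace=*/0);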

Reimplemented fromllvm::TargetLoweringBase.

Definition at line1897 of fileRISCVISelLowering.cpp.

Referencesllvm::TargetLoweringBase::AddrMode::BaseGV,llvm::TargetLoweringBase::AddrMode::BaseOffs,llvm::TargetLoweringBase::AddrMode::HasBaseReg,llvm::RISCVSubtarget::hasVInstructions(),llvm::TargetLoweringBase::AddrMode::ScalableOffset, andllvm::TargetLoweringBase::AddrMode::Scale.

◆ isLegalElementTypeForRVV()

bool RISCVTargetLowering::isLegalElementTypeForRVV(EVT ScalarTy) const

Definition at line2550 of fileRISCVISelLowering.cpp.

Referencesllvm::EVT::getSimpleVT(),llvm::RISCVSubtarget::hasVInstructionsBF16Minimal(),llvm::RISCVSubtarget::hasVInstructionsF16Minimal(),llvm::RISCVSubtarget::hasVInstructionsF32(),llvm::RISCVSubtarget::hasVInstructionsF64(),llvm::RISCVSubtarget::hasVInstructionsI64(),llvm::RISCVSubtarget::is64Bit(),llvm::EVT::isSimple(), andllvm::MVT::SimpleTy.

Referenced byllvm::RISCVTTIImpl::isElementTypeLegalForScalableVector(),isLegalInterleavedAccessType(),llvm::RISCVTTIImpl::isLegalMaskedGatherScatter(),llvm::RISCVTTIImpl::isLegalMaskedLoadStore(),isLegalStridedLoadStore(), andllvm::RISCVTTIImpl::isLegalToVectorizeReduction().

◆ isLegalICmpImmediate()

bool RISCVTargetLowering::isLegalICmpImmediate(int64_t) const
override virtual

Return true if the specified immediate is legal icmp immediate, that is the target has icmp instructions which can compare a register against the immediate without having to materialize the immediate into a register.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line1931 of fileRISCVISelLowering.cpp.

◆ isLegalInterleavedAccessType()

bool RISCVTargetLowering::isLegalInterleavedAccessType(VectorType *VTy, unsigned Factor, Align Alignment, unsigned AddrSpace, const DataLayout &DL) const

Returns whether or not generating an interleaved load/store intrinsic for this type will be legal.

Definition at line 22268 of file RISCVISelLowering.cpp.

Referencesllvm::TargetLoweringBase::allowsMemoryAccessForAlignment(),llvm::RISCVVType::decodeVLMUL(),DL,getContainerForFixedLengthVector(),llvm::Type::getContext(),getLMUL(),llvm::EVT::getScalarType(),llvm::EVT::getSimpleVT(),llvm::TargetLoweringBase::getValueType(),isLegalElementTypeForRVV(),llvm::TargetLoweringBase::isTypeLegal(), andllvm::RISCVSubtarget::useRVVForFixedLengthVectors().

Referenced byllvm::RISCVTTIImpl::getInterleavedMemoryOpCost(),llvm::RISCVTTIImpl::isLegalInterleavedAccessType(),lowerDeinterleaveIntrinsicToLoad(),lowerInterleavedLoad(),lowerInterleavedStore(), andlowerInterleaveIntrinsicToStore().

◆ isLegalScaleForGatherScatter()

bool llvm::RISCVTargetLowering::isLegalScaleForGatherScatter(uint64_t Scale, uint64_t ElemSize) const
inline override virtual

Reimplemented fromllvm::TargetLoweringBase.

Definition at line875 of fileRISCVISelLowering.h.

◆ isLegalStridedLoadStore()

bool RISCVTargetLowering::isLegalStridedLoadStore(EVT DataType, Align Alignment) const

Return true if a strided load/store of the given result type and alignment is legal.

Definition at line 22306 of file RISCVISelLowering.cpp.

Referencesllvm::EVT::getScalarType(),llvm::EVT::getStoreSize(),llvm::RISCVSubtarget::hasVInstructions(),llvm::EVT::isFixedLengthVector(),isLegalElementTypeForRVV(), andllvm::RISCVSubtarget::useRVVForFixedLengthVectors().

Referenced byllvm::RISCVTTIImpl::isLegalStridedLoadStore(), andperformCONCAT_VECTORSCombine().

◆ isMaskAndCmp0FoldingBeneficial()

bool RISCVTargetLowering::isMaskAndCmp0FoldingBeneficial(const Instruction &AndI) const
override virtual

Return if the target supports combining a chain like:

%andResult = and %val1, #mask
%icmpResult = icmp %andResult, 0

into a single machine instruction of a form like:

cc = test %register, #mask
modulo schedule test

Reimplemented fromllvm::TargetLoweringBase.

Definition at line2011 of fileRISCVISelLowering.cpp.

References llvm::User::getOperand().

◆ isMulAddWithConstProfitable()

bool RISCVTargetLowering::isMulAddWithConstProfitable(SDValue AddNode, SDValue ConstNode) const
override virtual

Return true if it may be profitable to transform (mul (add x, c1), c2) -> (add (mul x, c2), c1*c2).

This may not be true if c1 and c2 can be represented as immediates but c1*c2 cannot, for example. The target should check if c1, c2 and c1*c2 can be represented as immediates, or have to be materialized into registers. If it is not sure about some cases, a default true can be returned to let the DAGCombiner decide. AddNode is (add x, c1), and ConstNode is c2.
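Concretely, an illustrative sketch (not the in-tree check; the helper name productStillEncodable is hypothetical): the transform is usually only profitable when the folded product still fits the target's add-immediate range, which on RISC-V is a signed 12-bit field.

// (mul (add x, c1), c2) -> (add (mul x, c2), c1*c2)
// e.g. (x + 10) * 3 -> x*3 + 30: fine, 30 fits a simm12;
//      (x + 2047) * 4096 would need the immediate 8384512, which does not.
static bool productStillEncodable(int64_t C1, int64_t C2) {
  return isInt<12>(C1 * C2);  // llvm::isInt, from llvm/Support/MathExtras.h
                              // (overflow of C1*C2 is ignored for brevity in this sketch)
}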

Reimplemented fromllvm::TargetLoweringBase.

Definition at line21962 of fileRISCVISelLowering.cpp.

Referencesllvm::ConstantSDNode::getAPIntValue(),llvm::SDValue::getOperand(),llvm::EVT::getScalarSizeInBits(),llvm::SDValue::getValueType(),llvm::RISCVSubtarget::getXLen(),llvm::APInt::isSignedIntN(), andllvm::EVT::isVector().

◆ isMultiStoresCheaperThanBitsMerge()

bool llvm::RISCVTargetLowering::isMultiStoresCheaperThanBitsMerge(EVT LTy, EVT HTy) const
inline override virtual

Return true if it is cheaper to split the store of a merged int val from a pair of smaller values into multiple stores.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line585 of fileRISCVISelLowering.h.

References llvm::EVT::isFloatingPoint(), and llvm::EVT::isInteger().

◆ isOffsetFoldingLegal()

bool RISCVTargetLowering::isOffsetFoldingLegal(const GlobalAddressSDNode *GA) const
override virtual

Return true if folding a constant offset with the given GlobalAddress is legal.

It is frequently not legal in PIC relocation models.

Reimplemented fromllvm::TargetLowering.

Definition at line2138 of fileRISCVISelLowering.cpp.

◆ isSExtCheaperThanZExt()

bool RISCVTargetLowering::isSExtCheaperThanZExt(EVT FromTy, EVT ToTy) const
override virtual

Return true if sign-extension from FromTy to ToTy is cheaper than zero-extension.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line1993 of fileRISCVISelLowering.cpp.

References llvm::RISCVSubtarget::is64Bit().

◆ isShuffleMaskLegal()

bool RISCVTargetLowering::isShuffleMaskLegal(ArrayRef< int > M, EVT VT) const
override virtual

Return true if the given shuffle mask can be codegen'd directly, or if it should be stack expanded.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line5751 of fileRISCVISelLowering.cpp.

Referencesllvm::MVT::getScalarType(),llvm::EVT::getSimpleVT(),isElementRotate(),isInterleaveShuffle(),llvm::ShuffleVectorSDNode::isSplatMask(), andllvm::TargetLoweringBase::isTypeLegal().

Referenced byperformVECTOR_SHUFFLECombine().

◆ isTruncateFree()[1/3]

bool RISCVTargetLowering::isTruncateFree(EVT SrcVT, EVT DstVT) const
override virtual

Reimplemented fromllvm::TargetLoweringBase.

Definition at line1952 of fileRISCVISelLowering.cpp.

Referencesllvm::EVT::getSizeInBits(),llvm::EVT::isInteger(), andllvm::EVT::isVector().

◆ isTruncateFree()[2/3]

bool RISCVTargetLowering::isTruncateFree(SDValue Val, EVT VT2) const
override virtual

Return true if truncating the specific node Val to type VT2 is free.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line1963 of fileRISCVISelLowering.cpp.

Referencesllvm::SDValue::getOpcode(),llvm::EVT::getSizeInBits(),llvm::SDValue::getValueType(),llvm::EVT::getVectorElementType(),llvm::RISCVSubtarget::hasVInstructions(),llvm::TargetLoweringBase::isTruncateFree(),llvm::EVT::isVector(),llvm::ISD::SRA, andllvm::ISD::SRL.

◆ isTruncateFree()[3/3]

bool RISCVTargetLowering::isTruncateFree(Type *FromTy, Type *ToTy) const
override virtual

Return true if it's free to truncate a value of type FromTy to type ToTy.

e.g. On x86 it's free to truncate an i32 value in register EAX to i16 by referencing its sub-register AX. Targets must return false when FromTy <= ToTy.
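A hedged usage sketch (TLI and Ctx stand for an existing RISCVTargetLowering and LLVMContext; both are assumptions of this example): on RV64 an i64-to-i32 truncation is expected to be reported as free, because 32-bit values live sign-extended in 64-bit GPRs.

bool Free = TLI.isTruncateFree(Type::getInt64Ty(Ctx),   // FromTy
                               Type::getInt32Ty(Ctx));  // ToTy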

Reimplemented fromllvm::TargetLoweringBase.

Definition at line1944 of fileRISCVISelLowering.cpp.

Referencesllvm::Type::getPrimitiveSizeInBits(),llvm::RISCVSubtarget::is64Bit(), andllvm::Type::isIntegerTy().

◆ isUsedByReturnOnly()

bool RISCVTargetLowering::isUsedByReturnOnly(SDNode *, SDValue &) const
override virtual

Return true if result of the specified node is used by a return node only.

It also computes and returns the input chain for the tail call.

This is used to determine whether it is possible to codegen a libcall as tail call at legalization time.

Reimplemented from llvm::TargetLowering.

Definition at line 20836 of file RISCVISelLowering.cpp.

Referencesllvm::ISD::BITCAST,llvm::ISD::CopyToReg,isUsedByReturnOnly(),N, andllvm::RISCVISD::RET_GLUE.

Referenced by isUsedByReturnOnly().

◆ isVScaleKnownToBeAPowerOfTwo()

bool RISCVTargetLowering::isVScaleKnownToBeAPowerOfTwo() const
override virtual

Return true only if vscale must be a power of two.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line21751 of fileRISCVISelLowering.cpp.

Referencesassert(),llvm::RISCVSubtarget::getRealMinVLen(), andllvm::RISCV::RVVBitsPerBlock.

Referenced by llvm::RISCVTTIImpl::isVScaleKnownToBeAPowerOfTwo().

◆ isZExtFree()

bool RISCVTargetLowering::isZExtFree(SDValue Val, EVT VT2) const
override virtual

Return true if zero-extending the specific node Val to type VT2 is free (either because it's implicitly zero-extended such asARM ldrb / ldrh or because it's folded such asX86 zero-extending loads).

Reimplemented fromllvm::TargetLoweringBase.

Definition at line1978 of fileRISCVISelLowering.cpp.

Referencesllvm::TargetLoweringBase::isZExtFree(),llvm::ISD::NON_EXTLOAD, andllvm::ISD::ZEXTLOAD.

◆ joinRegisterPartsIntoValue()

SDValue RISCVTargetLowering::joinRegisterPartsIntoValue(SelectionDAG &DAG, const SDLoc &DL, const SDValue *Parts, unsigned NumParts, MVT PartVT, EVT ValueVT, std::optional< CallingConv::ID > CC) const
override virtual

Target-specific combining of register parts into its original value.

Reimplemented fromllvm::TargetLowering.

Definition at line22153 of fileRISCVISelLowering.cpp.

Referencesassert(),llvm::ISD::BITCAST,llvm::ISD::BUILD_PAIR,CC,DL,llvm::ISD::EXTRACT_SUBVECTOR,llvm::SelectionDAG::getBitcast(),llvm::SelectionDAG::getContext(),llvm::EVT::getFixedSizeInBits(),llvm::details::FixedOrScalableQuantity< LeafTy, ValueTy >::getKnownMinValue(),llvm::SelectionDAG::getNode(),llvm::EVT::getSizeInBits(),llvm::MVT::getSizeInBits(),llvm::SDValue::getValue(),llvm::EVT::getVectorElementType(),llvm::MVT::getVectorElementType(),llvm::SelectionDAG::getVectorIdxConstant(),llvm::EVT::getVectorVT(),llvm::SelectionDAG::getVTList(),llvm::RISCVSubtarget::getXLenVT(),llvm::RISCVSubtarget::is64Bit(),llvm::EVT::isScalableVector(),llvm::MVT::isScalableVector(),llvm::RISCVISD::SplitGPRPair, andllvm::ISD::TRUNCATE.

◆ LowerAsmOperandForConstraint()

void RISCVTargetLowering::LowerAsmOperandForConstraint(SDValue Op, StringRef Constraint, std::vector< SDValue > &Ops, SelectionDAG &DAG) const
override virtual

Lower the specified operand into the Ops vector.

If it is invalid, don't add anything to Ops.

Reimplemented fromllvm::TargetLowering.

Definition at line21458 of fileRISCVISelLowering.cpp.

Referencesllvm::CallingConv::C,llvm::SelectionDAG::getSignedTargetConstant(),llvm::SelectionDAG::getTargetConstant(),llvm::RISCVSubtarget::getXLenVT(),llvm::isNullConstant(),llvm::TargetLowering::LowerAsmOperandForConstraint(), andllvm::StringRef::size().

◆ LowerCall()

SDValue RISCVTargetLowering::LowerCall(TargetLowering::CallLoweringInfo &, SmallVectorImpl< SDValue > &) const
override virtual

This hook must be implemented to lower calls into the specified DAG.

The outgoing arguments to the call are described by the Outs array, and the values to be returned by the call are described by the Ins array. The implementation should fill in the InVals array with legal-type return values from the call, and return the resulting token chain value.

Reimplemented fromllvm::TargetLowering.

Definition at line20383 of fileRISCVISelLowering.cpp.

Referencesllvm::ISD::ADD,llvm::SelectionDAG::addNoMergeSiteInfo(),llvm::Address,llvm::CCState::AnalyzeCallOperands(),assert(),llvm::RISCVISD::BuildPairF64,llvm::RISCVISD::CALL,llvm::TargetLowering::CallLoweringInfo::CallConv,llvm::TargetLowering::CallLoweringInfo::Callee,llvm::TargetLowering::CallLoweringInfo::CB,llvm::CC_RISCV(),llvm::CC_RISCV_FastCC(),llvm::CC_RISCV_GHC(),llvm::TargetLowering::CallLoweringInfo::CFIType,llvm::TargetLowering::CallLoweringInfo::Chain,convertLocVTToValVT(),convertValVTToLocVT(),llvm::MachineFrameInfo::CreateStackObject(),llvm::SelectionDAG::CreateStackTemporary(),llvm::TargetLowering::CallLoweringInfo::DAG,llvm::LLVMContext::diagnose(),llvm::TargetLowering::CallLoweringInfo::DL,DL,llvm::SmallVectorBase< Size_T >::empty(),llvm::CallingConv::Fast,llvm::SelectionDAG::getCALLSEQ_END(),llvm::SelectionDAG::getCALLSEQ_START(),getCodeModel(),llvm::SelectionDAG::getConstant(),llvm::SelectionDAG::getContext(),llvm::Function::getContext(),llvm::SelectionDAG::getCopyFromReg(),llvm::SelectionDAG::getCopyToReg(),llvm::SelectionDAG::getDataLayout(),llvm::MachinePointerInfo::getFixedStack(),llvm::SelectionDAG::getFrameIndex(),llvm::MachineFunction::getFrameInfo(),llvm::MachineFunction::getFunction(),llvm::SelectionDAG::getIntPtrConstant(),getLargeExternalSymbol(),getLargeGlobalAddress(),llvm::CCValAssign::getLocInfo(),llvm::CCValAssign::getLocMemOffset(),llvm::CCValAssign::getLocReg(),llvm::CCValAssign::getLocVT(),llvm::SelectionDAG::getMachineFunction(),llvm::SelectionDAG::getMemcpy(),llvm::SDValue::getNode(),llvm::SelectionDAG::getNode(),llvm::TargetLoweringBase::getPointerTy(),getPrefTypeAlign(),llvm::SelectionDAG::getRegister(),llvm::RISCVSubtarget::getRegisterInfo(),llvm::SelectionDAG::getRegisterMask(),llvm::MachinePointerInfo::getStack(),llvm::CCState::getStackSize(),llvm::SelectionDAG::getStore(),llvm::EVT::getStoreSize(),llvm::MachineFunction::getSubtarget(),llvm::SelectionDAG::getTargetExternalSymbol(),llvm::SelectionDAG::getTargetGlobalAddress(),llvm::TargetLoweringBase::getTargetMachine(),llvm::SDValue::getValue(),llvm::SDValue::getValueType(),llvm::CCValAssign::getValVT(),llvm::SelectionDAG::getVTList(),llvm::RISCVSubtarget::getXLenVT(),llvm::ConstantInt::getZExtValue(),llvm::CallingConv::GHC,llvm::Hi,llvm::CCValAssign::Indirect,llvm::TargetLowering::CallLoweringInfo::Ins,llvm::CallBase::isIndirectCall(),llvm::CCValAssign::isMemLoc(),llvm::CallBase::isMustTailCall(),llvm::TargetSubtargetInfo::isRegisterReservedByUser(),llvm::CCValAssign::isRegLoc(),llvm::EVT::isScalableVector(),llvm::TargetLowering::CallLoweringInfo::IsTailCall,llvm::TargetLowering::CallLoweringInfo::IsVarArg,llvm::MVT::isVector(),llvm::CodeModel::Large,llvm::Lo,llvm::RISCVII::MO_CALL,llvm::CCValAssign::needsCustom(),llvm::TargetLowering::CallLoweringInfo::NoMerge,llvm::Offset,llvm::TargetLowering::CallLoweringInfo::Outs,llvm::TargetLowering::CallLoweringInfo::OutVals,llvm::SmallVectorTemplateBase< T, bool >::push_back(),llvm::report_fatal_error(),llvm::SDNode::setCFIType(),llvm::MachineFrameInfo::setHasTailCall(),llvm::SmallVectorBase< Size_T >::size(),Size,llvm::RISCVISD::SplitF64,llvm::RISCVISD::SW_GUARDED_CALL,llvm::RISCVISD::SW_GUARDED_TAIL,llvm::RISCVISD::TAIL,llvm::ISD::TokenFactor,TRI, andllvm::ISD::VSCALE.

◆ LowerCustomJumpTableEntry()

const MCExpr * RISCVTargetLowering::LowerCustomJumpTableEntry(const MachineJumpTableInfo *MJTI, const MachineBasicBlock *MBB, unsigned uid, MCContext &Ctx) const
override virtual

Reimplemented fromllvm::TargetLowering.

Definition at line21743 of fileRISCVISelLowering.cpp.

Referencesassert(),llvm::MCSymbolRefExpr::create(),getCodeModel(),llvm::MachineBasicBlock::getSymbol(),llvm::TargetLoweringBase::getTargetMachine(),llvm::RISCVSubtarget::is64Bit(),llvm::TargetLowering::isPositionIndependent(),MBB, andllvm::CodeModel::Small.

◆ lowerDeinterleaveIntrinsicToLoad()

bool RISCVTargetLowering::lowerDeinterleaveIntrinsicToLoad(LoadInst *LI, ArrayRef< Value * > DeinterleaveValues) const
override virtual

Lower a deinterleave intrinsic to a target specific load intrinsic.

Return true on success. Currently only supports llvm.vector.deinterleave2

LI is the accompanying load instruction. DeinterleaveValues contains the deinterleaved values.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line22480 of fileRISCVISelLowering.cpp.

Referencesassert(),llvm::IRBuilderBase::CreateExtractValue(),llvm::IRBuilderBase::CreateInsertValue(),llvm::IRBuilderBase::CreateIntrinsic(),DL,llvm::enumerate(),FixedVlsegIntrIds,llvm::StructType::get(),llvm::TargetExtType::get(),llvm::ScalableVectorType::get(),llvm::PoisonValue::get(),llvm::LoadInst::getAlign(),llvm::Constant::getAllOnesValue(),llvm::Value::getContext(),llvm::Instruction::getDataLayout(),llvm::IRBuilderBase::getInt32(),llvm::Type::getInt8Ty(),llvm::Type::getIntNTy(),llvm::LoadInst::getPointerAddressSpace(),llvm::LoadInst::getPointerOperand(),llvm::LoadInst::getPointerOperandType(),getType(),llvm::RISCVSubtarget::getXLen(),Idx,isLegalInterleavedAccessType(),llvm::LoadInst::isSimple(),llvm::Log2_64(),llvm::Value::replaceAllUsesWith(), andllvm::ArrayRef< T >::size().

◆ LowerFormalArguments()

SDValue RISCVTargetLowering::LowerFormalArguments(SDValue, CallingConv::ID, bool, const SmallVectorImpl< ISD::InputArg > &, const SDLoc &, SelectionDAG &, SmallVectorImpl< SDValue > &) const
override virtual

This hook must be implemented to lower the incoming (formal) arguments, described by the Ins array, into the specified DAG.

The implementation should fill in the InVals array with legal-type argument values, and return the resulting token chain value.

Reimplemented fromllvm::TargetLowering.

Definition at line20150 of fileRISCVISelLowering.cpp.

Referencesllvm::ISD::ADD,llvm::MachineRegisterInfo::addLiveIn(),llvm::Address,llvm::CCState::AnalyzeFormalArguments(),llvm::any_of(),assert(),llvm::CallingConv::C,llvm::CC_RISCV(),llvm::CC_RISCV_FastCC(),llvm::CC_RISCV_GHC(),llvm::MachineFrameInfo::CreateFixedObject(),llvm::MachineRegisterInfo::createVirtualRegister(),DL,llvm::CallingConv::Fast,llvm::RISCV::getArgGPRs(),llvm::SelectionDAG::getContext(),llvm::SelectionDAG::getCopyFromReg(),llvm::SelectionDAG::getDataLayout(),llvm::CCState::getFirstUnallocated(),llvm::TypeSize::getFixed(),llvm::MachinePointerInfo::getFixedStack(),llvm::Function::getFnAttribute(),llvm::SelectionDAG::getFrameIndex(),llvm::MachineFunction::getFrameInfo(),llvm::MachineFunction::getFunction(),llvm::MachineFunction::getInfo(),llvm::SelectionDAG::getIntPtrConstant(),llvm::SelectionDAG::getLoad(),llvm::CCValAssign::getLocInfo(),llvm::CCValAssign::getLocVT(),llvm::SelectionDAG::getMachineFunction(),llvm::SelectionDAG::getMemBasePlusOffset(),llvm::SelectionDAG::getNode(),llvm::TargetLoweringBase::getPointerTy(),llvm::MachineFunction::getRegInfo(),llvm::CCState::getStackSize(),llvm::SelectionDAG::getStore(),llvm::RISCVSubtarget::getTargetABI(),llvm::Attribute::getValueAsString(),llvm::CCValAssign::getValVT(),llvm::RISCVSubtarget::getXLen(),llvm::RISCVSubtarget::getXLenVT(),llvm::CallingConv::GHC,llvm::CallingConv::GRAAL,llvm::RISCVSubtarget::hasStdExtDOrZdinx(),llvm::RISCVSubtarget::hasStdExtFOrZfinx(),I,Idx,llvm::CCValAssign::Indirect,llvm::CCValAssign::isRegLoc(),llvm::MVT::isScalableVector(),llvm::MVT::isVector(),llvm::CCValAssign::needsCustom(),llvm::Offset,llvm::SmallVectorTemplateBase< T, bool >::push_back(),llvm::report_fatal_error(),llvm::CallingConv::RISCV_VectorCall,llvm::RISCVMachineFunctionInfo::setVarArgsFrameIndex(),llvm::RISCVMachineFunctionInfo::setVarArgsSaveSize(),llvm::ArrayRef< T >::size(),llvm::SmallVectorBase< Size_T >::size(),llvm::CallingConv::SPIR_KERNEL,llvm::ISD::TokenFactor,unpackF64OnRV32DSoftABI(),unpackFromMemLoc(),unpackFromRegLoc(), andllvm::ISD::VSCALE.

◆ lowerInterleavedLoad()

bool RISCVTargetLowering::lowerInterleavedLoad(LoadInst *LI, ArrayRef< ShuffleVectorInst * > Shuffles, ArrayRef< unsigned > Indices, unsigned Factor) const
override virtual

Lower an interleaved load into a vlsegN intrinsic.

E.g. Lower an interleaved load (Factor = 2):
  wide.vec = load <8 x i32>, <8 x i32>* ptr
  v0 = shuffle wide.vec, undef, <0, 2, 4, 6>  ; Extract even elements
  v1 = shuffle wide.vec, undef, <1, 3, 5, 7>  ; Extract odd elements

Into:
  ld2 = { <4 x i32>, <4 x i32> } call llvm.riscv.seg2.load.v4i32.p0.i64(ptr, i64 4)
  vec0 = extractelement { <4 x i32>, <4 x i32> } ld2, i32 0
  vec1 = extractelement { <4 x i32>, <4 x i32> } ld2, i32 1

Reimplemented fromllvm::TargetLoweringBase.

Definition at line22344 of fileRISCVISelLowering.cpp.

Referencesllvm::CallBase::addParamAttr(),assert(),llvm::IRBuilderBase::CreateExtractValue(),llvm::IRBuilderBase::CreateIntrinsic(),llvm::IRBuilderBase::CreatePtrAdd(),FixedVlsegIntrIds,llvm::LoadInst::getAlign(),llvm::IRBuilderBase::getAllOnesMask(),llvm::Value::getContext(),llvm::Instruction::getDataLayout(),llvm::IRBuilderBase::getInt32(),llvm::Type::getIntNTy(),llvm::LoadInst::getPointerAddressSpace(),llvm::LoadInst::getPointerOperand(),llvm::LoadInst::getPointerOperandType(),llvm::Value::getType(),getType(),llvm::Attribute::getWithAlignment(),llvm::RISCVSubtarget::getXLen(),llvm::RISCVSubtarget::hasOptimizedSegmentLoadStore(),isLegalInterleavedAccessType(),llvm::Offset, andllvm::ArrayRef< T >::size().

◆ lowerInterleavedStore()

bool RISCVTargetLowering::lowerInterleavedStore(StoreInst *SI, ShuffleVectorInst *SVI, unsigned Factor) const
override virtual

Lower an interleaved store into a vssegN intrinsic.

E.g. Lower an interleaved store (Factor = 3):
  i.vec = shuffle <8 x i32> v0, <8 x i32> v1, <0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11>
  store <12 x i32> i.vec, <12 x i32>* ptr

Into:
  sub.v0 = shuffle <8 x i32> v0, <8 x i32> v1, <0, 1, 2, 3>
  sub.v1 = shuffle <8 x i32> v0, <8 x i32> v1, <4, 5, 6, 7>
  sub.v2 = shuffle <8 x i32> v0, <8 x i32> v1, <8, 9, 10, 11>
  call void llvm.riscv.seg3.store.v4i32.p0.i64(sub.v0, sub.v1, sub.v2, ptr, i32 4)

Note that the new shufflevectors will be removed and we'll only generate one vsseg3 instruction in CodeGen.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line22416 of fileRISCVISelLowering.cpp.

Referencesllvm::CallBase::addParamAttr(),llvm::SmallVectorImpl< T >::append(),llvm::IRBuilderBase::CreateCall(),llvm::IRBuilderBase::CreateIntrinsic(),llvm::IRBuilderBase::CreatePtrAdd(),llvm::createSequentialMask(),llvm::IRBuilderBase::CreateShuffleVector(),llvm::Data,FixedVssegIntrIds,llvm::FixedVectorType::get(),llvm::IRBuilderBase::getAllOnesMask(),llvm::Value::getContext(),llvm::IRBuilderBase::getInt32(),llvm::Type::getIntNTy(),llvm::User::getOperand(),llvm::Intrinsic::getOrInsertDeclaration(),llvm::ShuffleVectorInst::getShuffleMask(),llvm::ShuffleVectorInst::getType(),llvm::Value::getType(),llvm::Attribute::getWithAlignment(),llvm::RISCVSubtarget::getXLen(),llvm::RISCVSubtarget::hasOptimizedSegmentLoadStore(),isLegalInterleavedAccessType(),isSpreadMask(),llvm::Offset, andllvm::SmallVectorTemplateBase< T, bool >::push_back().

◆ lowerInterleaveIntrinsicToStore()

bool RISCVTargetLowering::lowerInterleaveIntrinsicToStore(StoreInst *SI, ArrayRef< Value * > InterleaveValues) const
override virtual

Lower an interleave intrinsic to a target specific store intrinsic.

Return true on success. Currently only supports llvm.vector.interleave2

SI is the accompanying store instruction. InterleaveValues contains the interleaved values.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line22549 of fileRISCVISelLowering.cpp.

Referencesllvm::SmallVectorImpl< T >::append(),assert(),llvm::ArrayRef< T >::begin(),llvm::IRBuilderBase::CreateCall(),llvm::IRBuilderBase::CreateIntrinsic(),DL,llvm::ArrayRef< T >::end(),FixedVssegIntrIds,llvm::TargetExtType::get(),llvm::ScalableVectorType::get(),llvm::PoisonValue::get(),llvm::Constant::getAllOnesValue(),llvm::Type::getInt8Ty(),llvm::Type::getIntNTy(),llvm::Intrinsic::getOrInsertDeclaration(),getType(),llvm::RISCVSubtarget::getXLen(),isLegalInterleavedAccessType(),llvm::Log2_64(), andllvm::ArrayRef< T >::size().

◆ LowerOperation()

SDValue RISCVTargetLowering::LowerOperation(SDValue Op, SelectionDAG &DAG) const
override virtual

This callback is invoked for operations that are unsupported by the target, which are registered to use 'custom' lowering, and whose defined values are all legal.

If the target has no operations that require custom lowering, it need not implement this. The default implementation of this aborts.

Reimplemented fromllvm::TargetLowering.

Definition at line6662 of fileRISCVISelLowering.cpp.

Referencesllvm::ISD::ABDS,llvm::ISD::ABDU,llvm::ISD::ABS,llvm::ISD::ADD,llvm::ISD::ADJUST_TRAMPOLINE,llvm::ISD::AND,llvm::ISD::ANY_EXTEND,assert(),llvm::ISD::ATOMIC_FENCE,llvm::ISD::AVGCEILS,llvm::ISD::AVGCEILU,llvm::ISD::AVGFLOORS,llvm::ISD::AVGFLOORU,llvm::ISD::BF16_TO_FP,llvm::ISD::BITCAST,llvm::ISD::BITREVERSE,llvm::MVT::bitsGE(),llvm::ISD::BlockAddress,llvm::ISD::BRCOND,llvm::RISCVISD::BREV8,llvm::ISD::BSWAP,llvm::ISD::BUILD_VECTOR,llvm::RISCVISD::BuildPairF64,CC,llvm::MVT::changeVectorElementType(),llvm::ISD::CLEAR_CACHE,llvm::ISD::CONCAT_VECTORS,Cond,llvm::ISD::Constant,llvm::ISD::ConstantFP,llvm::ISD::ConstantPool,convertFromScalableVector(),convertToScalableVector(),llvm::ISD::CTLZ,llvm::ISD::CTLZ_ZERO_UNDEF,llvm::ISD::CTPOP,llvm::ISD::CTTZ,llvm::ISD::CTTZ_ZERO_UNDEF,DL,llvm::ISD::DYNAMIC_STACKALLOC,llvm::ISD::EH_DWARF_CFA,llvm::enumerate(),llvm::ISD::EXTRACT_SUBVECTOR,llvm::ISD::EXTRACT_VECTOR_ELT,llvm::ISD::FABS,llvm::ISD::FADD,llvm::ISD::FCEIL,llvm::ISD::FCOPYSIGN,llvm::ISD::FDIV,llvm::ISD::FFLOOR,llvm::ISD::FMA,llvm::ISD::FMAXIMUM,llvm::ISD::FMAXNUM,llvm::ISD::FMINIMUM,llvm::ISD::FMINNUM,llvm::ISD::FMUL,llvm::RISCVISD::FMV_H_X,llvm::RISCVISD::FMV_W_X_RV64,llvm::RISCVISD::FMV_X_ANYEXTH,llvm::RISCVISD::FMV_X_ANYEXTW_RV64,llvm::ISD::FNEARBYINT,llvm::ISD::FNEG,llvm::ISD::FP16_TO_FP,llvm::ISD::FP_EXTEND,llvm::ISD::FP_ROUND,llvm::ISD::FP_TO_BF16,llvm::ISD::FP_TO_FP16,llvm::ISD::FP_TO_SINT,llvm::ISD::FP_TO_SINT_SAT,llvm::ISD::FP_TO_UINT,llvm::ISD::FP_TO_UINT_SAT,llvm::ISD::FPOWI,llvm::ISD::FRAMEADDR,llvm::ISD::FRINT,llvm::ISD::FROUND,llvm::ISD::FROUNDEVEN,llvm::ISD::FSQRT,llvm::ISD::FSUB,llvm::ISD::FTRUNC,llvm::ISD::GET_ROUNDING,llvm::SelectionDAG::getBitcast(),llvm::SelectionDAG::getConstant(),getContainerForFixedLengthVector(),llvm::SelectionDAG::getContext(),llvm::SelectionDAG::getDataLayout(),getDefaultVLOps(),llvm::SelectionDAG::getFPExtendOrRound(),llvm::RTLIB::getFPROUND(),llvm::SelectionDAG::getFreeze(),llvm::MVT::getHalfNumVectorElementsVT(),llvm::MVT::getIntegerVT(),llvm::SelectionDAG::getIntPtrConstant(),llvm::details::FixedOrScalableQuantity< LeafTy, ValueTy 
>::getKnownMinValue(),getLMUL1VT(),llvm::SelectionDAG::getLoad(),llvm::SelectionDAG::getLogicalNOT(),llvm::SelectionDAG::getMergeValues(),llvm::SelectionDAG::getNode(),llvm::DWARFExpression::Operation::getNumOperands(),llvm::RISCVSubtarget::getRealMinVLen(),llvm::EVT::getRISCVVectorTupleNumFields(),llvm::MVT::getScalableVectorVT(),llvm::SelectionDAG::getSelect(),llvm::SelectionDAG::getSetCC(),getSetCCResultType(),llvm::ISD::getSetCCSwappedOperands(),llvm::SelectionDAG::getShiftAmountConstant(),llvm::SelectionDAG::getSignedConstant(),llvm::EVT::getSizeInBits(),llvm::MVT::getSizeInBits(),llvm::SelectionDAG::getStore(),llvm::SelectionDAG::getStrictFPExtendOrRound(),llvm::TargetLoweringBase::getTargetMachine(),llvm::SelectionDAG::getUNDEF(),llvm::SDValue::getValue(),llvm::SDValue::getValueType(),llvm::MVT::getVectorElementCount(),llvm::MVT::getVectorElementType(),llvm::SelectionDAG::getVectorIdxConstant(),llvm::EVT::getVectorVT(),llvm::MVT::getVectorVT(),llvm::SelectionDAG::getVTList(),llvm::RISCVSubtarget::getXLenVT(),llvm::ISD::GlobalAddress,llvm::ISD::GlobalTLSAddress,llvm::RISCVSubtarget::hasStdExtDOrZdinx(),llvm::RISCVSubtarget::hasStdExtFOrZfinx(),llvm::RISCVSubtarget::hasStdExtZfhminOrZhinxmin(),llvm::RISCVSubtarget::hasVInstructionsF16(),llvm::RISCVSubtarget::hasVInstructionsF16Minimal(),llvm::Hi,llvm::ISD::INIT_TRAMPOLINE,llvm::ISD::INSERT_SUBVECTOR,llvm::ISD::INSERT_VECTOR_ELT,llvm::ISD::INTRINSIC_VOID,llvm::ISD::INTRINSIC_W_CHAIN,llvm::ISD::INTRINSIC_WO_CHAIN,llvm::RISCVSubtarget::is64Bit(),llvm::ISD::IS_FPCLASS,llvm::EVT::isFixedLengthVector(),llvm::MVT::isFixedLengthVector(),llvm::MVT::isFloatingPoint(),llvm::MVT::isInteger(),llvm::isPowerOf2_32(),llvm::isPowerOf2_64(),isPromotedOpNeedingSplit(),llvm::EVT::isRISCVVectorTuple(),llvm::EVT::isScalableVector(),llvm::MVT::isScalarInteger(),llvm::RISCVSubtarget::isSoftFPABI(),llvm::TargetLoweringBase::isTypeLegal(),llvm::SDValue::isUndef(),llvm::EVT::isVector(),llvm::MVT::isVector(),llvm::ISD::JumpTable,LHS,llvm::ISD::LLRINT,llvm::ISD::LLROUND,llvm_unreachable,llvm::Lo,llvm::ISD::LOAD,llvm::Log2(),llvm::Log2_64(),LowerATOMIC_FENCE(),lowerBUILD_VECTOR(),lowerConstant(),lowerFABSorFNEG(),lowerFCOPYSIGN(),lowerFMAXIMUM_FMINIMUM(),lowerFP_TO_INT(),lowerFP_TO_INT_SAT(),lowerFTRUNC_FCEIL_FFLOOR_FROUND(),lowerINT_TO_FP(),lowerVECTOR_SHUFFLE(),lowerVectorFTRUNC_FCEIL_FFLOOR_FROUND(),lowerVectorStrictFTRUNC_FCEIL_FFLOOR_FROUND(),lowerVectorXRINT(),llvm::ISD::LRINT,llvm::ISD::LROUND,llvm::TargetLowering::makeLibCall(),llvm::ISD::MGATHER,llvm::ISD::MLOAD,llvm::ISD::MSCATTER,llvm::ISD::MSTORE,llvm::ISD::MUL,llvm::ISD::MULHS,llvm::ISD::MULHU,NC,llvm::ISD::OR,llvm::SmallVectorTemplateBase< T, bool 
>::push_back(),llvm::RISCVISD::READ_VLENB,llvm::report_fatal_error(),llvm::ISD::RETURNADDR,RHS,llvm::ISD::ROTL,llvm::ISD::ROTR,llvm::RISCV::RVVBitsPerBlock,llvm::ISD::SADDSAT,llvm::ISD::SCALAR_TO_VECTOR,llvm::ISD::SDIV,llvm::ISD::SELECT,llvm::ISD::SELECT_CC,llvm::ISD::SET_ROUNDING,llvm::ISD::SETCC,llvm::ISD::SETGT,llvm::ISD::SETUGT,llvm::ISD::SHL,llvm::ISD::SHL_PARTS,llvm::ISD::SIGN_EXTEND,llvm::ISD::SINT_TO_FP,llvm::RISCVISD::SINT_TO_FP_VL,llvm::ISD::SMAX,llvm::ISD::SMIN,llvm::ISD::SPLAT_VECTOR,llvm::ISD::SPLAT_VECTOR_PARTS,llvm::SelectionDAG::SplitScalar(),SplitStrictFPVectorOp(),SplitVectorOp(),SplitVectorReductionOp(),SplitVPOp(),llvm::ISD::SRA,llvm::ISD::SRA_PARTS,llvm::ISD::SREM,llvm::ISD::SRL,llvm::ISD::SRL_PARTS,llvm::ISD::SSUBSAT,llvm::ISD::STEP_VECTOR,llvm::ISD::STORE,llvm::ISD::STRICT_FADD,llvm::ISD::STRICT_FCEIL,llvm::ISD::STRICT_FDIV,llvm::ISD::STRICT_FFLOOR,llvm::ISD::STRICT_FMA,llvm::ISD::STRICT_FMUL,llvm::ISD::STRICT_FNEARBYINT,llvm::ISD::STRICT_FP16_TO_FP,llvm::ISD::STRICT_FP_EXTEND,llvm::ISD::STRICT_FP_ROUND,llvm::ISD::STRICT_FP_TO_FP16,llvm::ISD::STRICT_FP_TO_SINT,llvm::ISD::STRICT_FP_TO_UINT,llvm::ISD::STRICT_FRINT,llvm::ISD::STRICT_FROUND,llvm::ISD::STRICT_FROUNDEVEN,llvm::ISD::STRICT_FSETCC,llvm::ISD::STRICT_FSETCCS,llvm::ISD::STRICT_FSQRT,llvm::ISD::STRICT_FSUB,llvm::ISD::STRICT_FTRUNC,llvm::ISD::STRICT_LLRINT,llvm::ISD::STRICT_LLROUND,llvm::ISD::STRICT_LRINT,llvm::ISD::STRICT_LROUND,llvm::ISD::STRICT_SINT_TO_FP,llvm::RISCVISD::STRICT_SINT_TO_FP_VL,llvm::ISD::STRICT_UINT_TO_FP,llvm::RISCVISD::STRICT_UINT_TO_FP_VL,llvm::RISCVISD::STRICT_VFCVT_RTZ_X_F_VL,llvm::RISCVISD::STRICT_VFCVT_RTZ_XU_F_VL,llvm::ISD::SUB,llvm::ISD::TokenFactor,llvm::ISD::TRUNCATE,llvm::ISD::TRUNCATE_SSAT_S,llvm::ISD::TRUNCATE_USAT_U,llvm::RISCVISD::TUPLE_EXTRACT,llvm::RISCVISD::TUPLE_INSERT,llvm::ISD::UADDSAT,llvm::ISD::UDIV,llvm::ISD::UINT_TO_FP,llvm::RISCVISD::UINT_TO_FP_VL,llvm::ISD::UMAX,llvm::ISD::UMIN,llvm::ISD::UNDEF,llvm::ISD::UREM,llvm::ISD::USUBSAT,llvm::ISD::VASTART,llvm::ISD::VECREDUCE_ADD,llvm::ISD::VECREDUCE_AND,llvm::ISD::VECREDUCE_FADD,llvm::ISD::VECREDUCE_FMAX,llvm::ISD::VECREDUCE_FMAXIMUM,llvm::ISD::VECREDUCE_FMIN,llvm::ISD::VECREDUCE_FMINIMUM,llvm::ISD::VECREDUCE_OR,llvm::ISD::VECREDUCE_SEQ_FADD,llvm::ISD::VECREDUCE_SMAX,llvm::ISD::VECREDUCE_SMIN,llvm::ISD::VECREDUCE_UMAX,llvm::ISD::VECREDUCE_UMIN,llvm::ISD::VECREDUCE_XOR,llvm::ISD::VECTOR_COMPRESS,llvm::ISD::VECTOR_DEINTERLEAVE,llvm::ISD::VECTOR_INTERLEAVE,llvm::ISD::VECTOR_REVERSE,llvm::ISD::VECTOR_SHUFFLE,llvm::ISD::VECTOR_SPLICE,llvm::RISCVISD::VFCVT_RTZ_X_F_VL,llvm::RISCVISD::VFCVT_RTZ_XU_F_VL,llvm::RISCVISD::VFMV_S_F_VL,llvm::RISCVISD::VMV_S_X_VL,llvm::ISD::VSCALE,llvm::ISD::VSELECT,llvm::RISCVISD::VSEXT_VL,llvm::RISCVISD::VZEXT_VL,llvm::ISD::XOR, andllvm::ISD::ZERO_EXTEND.

◆ LowerReturn()

SDValue RISCVTargetLowering::LowerReturn(SDValue, CallingConv::ID, bool, const SmallVectorImpl< ISD::OutputArg > &, const SmallVectorImpl< SDValue > &, const SDLoc &, SelectionDAG &) const
override virtual

This hook must be implemented to lower outgoing return values, described by the Outs array, into the specified DAG.

The implementation should return the resulting token chain value.

Reimplemented fromllvm::TargetLowering.

Definition at line20720 of fileRISCVISelLowering.cpp.

Referencesllvm::any_of(),assert(),llvm::CC_RISCV(),convertValVTToLocVT(),llvm::LLVMContext::diagnose(),DL,llvm::SelectionDAG::getContext(),llvm::Function::getContext(),llvm::SelectionDAG::getCopyToReg(),llvm::Function::getFnAttribute(),llvm::MachineFunction::getFunction(),llvm::MachineFunction::getInfo(),llvm::CCValAssign::getLocReg(),llvm::CCValAssign::getLocVT(),llvm::SelectionDAG::getMachineFunction(),llvm::SDValue::getNode(),llvm::SelectionDAG::getNode(),llvm::SelectionDAG::getRegister(),llvm::MachineFunction::getSubtarget(),llvm::SDValue::getValue(),llvm::Attribute::getValueAsString(),llvm::CCValAssign::getValVT(),llvm::SelectionDAG::getVTList(),llvm::CallingConv::GHC,llvm::Hi,llvm::RISCVSubtarget::isRegisterReservedByUser(),llvm::CCValAssign::isRegLoc(),llvm::Lo,llvm::RISCVISD::MRET_GLUE,llvm::CCValAssign::needsCustom(),llvm::SmallVectorTemplateBase< T, bool >::push_back(),llvm::report_fatal_error(),llvm::RISCVISD::RET_GLUE,llvm::RISCVISD::SplitF64, andllvm::RISCVISD::SRET_GLUE.

◆ mayBeEmittedAsTailCall()

bool RISCVTargetLowering::mayBeEmittedAsTailCall(const CallInst *) const
override virtual

Return true if the target may be able to emit the call instruction as a tail call.

This is used by optimization passes to determine if it's profitable to duplicate return instructions to enable tailcall optimization.

Reimplemented from llvm::TargetLowering.

Definition at line 20873 of file RISCVISelLowering.cpp.

References llvm::CallInst::isTailCall().

◆ PerformDAGCombine()

SDValue RISCVTargetLowering::PerformDAGCombine(SDNode *N, DAGCombinerInfo &DCI) const
override virtual

This method will be invoked for all target nodes and for any target-independent nodes that the target has registered with invoke it for.

The semantics are as follows:
  SDValue.Val == 0  - No change was made.
  SDValue.Val == N  - N was replaced, is dead, and is already handled.
  otherwise         - N should be replaced by the returned Operand.

In addition, methods provided by DAGCombinerInfo may be used to perform more complex transformations.
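A minimal skeleton of the return-value contract (for a hypothetical target class MyTargetLowering; this is not the RISC-V implementation):

SDValue MyTargetLowering::PerformDAGCombine(SDNode *N,
                                            DAGCombinerInfo &DCI) const {
  switch (N->getOpcode()) {
  case ISD::ADD:
    // A real combine would build a replacement with DCI.DAG.getNode(...)
    // and return it; returning SDValue(N, 0) would mean N itself was
    // replaced and is already handled.
    break;
  default:
    break;
  }
  return SDValue();   // SDValue() means no change was made.
}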

Reimplemented fromllvm::TargetLowering.

Definition at line17676 of fileRISCVISelLowering.cpp.

Referencesllvm::ISD::ABS,llvm::ISD::ADD,llvm::RISCVISD::ADD_VL,llvm::TargetLowering::DAGCombinerInfo::AddToWorklist(),llvm::TargetLoweringBase::allowsMemoryAccessForAlignment(),llvm::ISD::AND,assert(),llvm::ISD::BITCAST,llvm::ISD::BITREVERSE,llvm::EVT::bitsLE(),llvm::MVT::bitsLE(),llvm::MVT::bitsLT(),llvm::RISCVISD::BR_CC,llvm::ISD::BUILD_VECTOR,llvm::RISCVISD::BuildPairF64,llvm::CallingConv::C,CC,llvm::RISCVISD::CLZW,combine_CC(),combineBinOpOfExtractToReduceTree(),combineBinOpOfZExt(),combineBinOpToReduce(),combineOp_VLToVWOp_VL(),combineScalarCTPOPToVCPOP(),llvm::TargetLowering::DAGCombinerInfo::CombineTo(),combineToVWMACC(),combineTruncOfSraSext(),combineTruncToVnclip(),llvm::ISD::CONCAT_VECTORS,Cond,llvm::ISD::CTPOP,llvm::RISCVISD::CTZW,llvm::RISCVISD::CZERO_EQZ,llvm::RISCVISD::CZERO_NEZ,llvm::TargetLowering::DAGCombinerInfo::DAG,llvm::ISD::DELETED_NODE,DL,llvm::ISD::EXTLOAD,llvm::ISD::EXTRACT_SUBVECTOR,llvm::ISD::EXTRACT_VECTOR_ELT,llvm::ISD::FABS,llvm::ISD::FADD,llvm::RISCVISD::FADD_VL,llvm::ISD::FCOPYSIGN,llvm::ISD::FMAXNUM,llvm::ISD::FMINNUM,llvm::ISD::FMUL,llvm::RISCVISD::FMUL_VL,llvm::RISCVISD::FMV_H_X,llvm::RISCVISD::FMV_W_X_RV64,llvm::RISCVISD::FMV_X_ANYEXTH,llvm::RISCVISD::FMV_X_ANYEXTW_RV64,llvm::ISD::FNEG,llvm::ISD::FP_EXTEND,llvm::ISD::FP_ROUND,llvm::ISD::FP_TO_SINT,llvm::ISD::FP_TO_SINT_SAT,llvm::ISD::FP_TO_UINT,llvm::ISD::FP_TO_UINT_SAT,llvm::RISCVISD::FSGNJX,llvm::RISCVISD::FSUB_VL,llvm::SelectionDAG::getAllOnesConstant(),llvm::SelectionDAG::getBitcast(),llvm::APInt::getBitsSetFrom(),llvm::SelectionDAG::getBuildVector(),llvm::SelectionDAG::getConstant(),llvm::SDValue::getConstantOperandAPInt(),llvm::SDValue::getConstantOperandVal(),llvm::SelectionDAG::getContext(),llvm::SelectionDAG::getDataLayout(),llvm::SelectionDAG::getElementCount(),llvm::SelectionDAG::getExtLoad(),llvm::SelectionDAG::getFPExtendOrRound(),llvm::SelectionDAG::getFreeze(),llvm::SelectionDAG::getGatherVP(),llvm::MVT::getIntegerVT(),llvm::RISCVMatInt::getIntMatCost(),llvm::details::FixedOrScalableQuantity< LeafTy, ValueTy 
>::getKnownMinValue(),getLMUL1VT(),llvm::SelectionDAG::getLoad(),llvm::APInt::getLowBitsSet(),llvm::SelectionDAG::getMaskedGather(),llvm::SelectionDAG::getMaskedLoad(),llvm::SelectionDAG::getMaskedScatter(),llvm::SelectionDAG::getMaskedStore(),getMaskTypeFor(),llvm::SelectionDAG::getMergeValues(),llvm::SelectionDAG::getNegative(),llvm::SDValue::getNode(),llvm::SelectionDAG::getNode(),llvm::SDValue::getNumOperands(),llvm::SDValue::getOpcode(),llvm::SDNode::getOpcode(),llvm::SDValue::getOperand(),llvm::SDNode::getOperand(),llvm::TargetLoweringBase::getPointerTy(),llvm::SelectionDAG::getRegister(),llvm::EVT::getRISCVVectorTupleNumFields(),llvm::MVT::getScalableVectorVT(),llvm::MVT::getScalarSizeInBits(),llvm::EVT::getScalarStoreSize(),llvm::SDValue::getScalarValueSizeInBits(),llvm::SelectionDAG::getScatterVP(),llvm::SelectionDAG::getSetCC(),llvm::ISD::getSetCCInverse(),llvm::SelectionDAG::getSignedConstant(),llvm::APInt::getSignMask(),llvm::EVT::getSimpleVT(),llvm::EVT::getSizeInBits(),llvm::MVT::getSizeInBits(),llvm::SelectionDAG::getSplat(),llvm::SelectionDAG::getStore(),llvm::SelectionDAG::getStoreVP(),llvm::SelectionDAG::getStridedLoadVP(),llvm::SelectionDAG::getUNDEF(),llvm::SDValue::getValue(),llvm::SDValue::getValueSizeInBits(),llvm::SDValue::getValueType(),llvm::EVT::getVectorElementCount(),llvm::EVT::getVectorElementType(),llvm::MVT::getVectorElementType(),llvm::SelectionDAG::getVectorIdxConstant(),llvm::EVT::getVectorNumElements(),llvm::SelectionDAG::getVectorShuffle(),llvm::EVT::getVectorVT(),llvm::SelectionDAG::getVTList(),llvm::RISCVSubtarget::getXLen(),llvm::RISCVSubtarget::getXLenVT(),llvm::RISCVSubtarget::hasConditionalMoveFusion(),llvm::SDValue::hasOneUse(),llvm::SDNode::hasOneUse(),llvm::Hi,llvm::ISD::INSERT_SUBVECTOR,llvm::ISD::INSERT_VECTOR_ELT,llvm::APInt::insertBits(),llvm::ISD::INTRINSIC_VOID,llvm::ISD::INTRINSIC_W_CHAIN,llvm::ISD::INTRINSIC_WO_CHAIN,llvm::RISCVSubtarget::is64Bit(),llvm::TargetLowering::DAGCombinerInfo::isAfterLegalizeDAG(),llvm::isAllOnesConstant(),llvm::TargetLowering::DAGCombinerInfo::isBeforeLegalize(),llvm::ISD::isBuildVectorOfConstantSDNodes(),llvm::EVT::isFixedLengthVector(),llvm::ISD::isIntEqualitySetCC(),llvm::ISD::isNormalLoad(),llvm::ISD::isNormalStore(),llvm::isNullConstant(),llvm::isOneConstant(),llvm::TargetLoweringBase::isOperationLegal(),llvm::isPowerOf2_64(),llvm::EVT::isRISCVVectorTuple(),llvm::MVT::isScalableVector(),llvm::EVT::isScalarInteger(),isSimpleVIDSequence(),llvm::TargetLoweringBase::isTypeLegal(),llvm::SDValue::isUndef(),llvm::SDNode::isUndef(),llvm::EVT::isVector(),legalizeScatterGatherIndexType(),LHS,llvm::Lo,llvm::ISD::LOAD,llvm::SelectionDAG::MaskedValueIsZero(),matchIndexAsShuffle(),matchIndexAsWiderOp(),matchSplatAsGather(),llvm::ISD::MGATHER,llvm::ISD::MSCATTER,llvm::ISD::MUL,llvm::RISCVISD::MUL_VL,N,narrowIndex(),llvm::ISD::NON_EXTLOAD,llvm::ISD::OR,performADDCombine(),performANDCombine(),performBITREVERSECombine(),performBUILD_VECTORCombine(),performCONCAT_VECTORSCombine(),performFP_TO_INT_SATCombine(),performFP_TO_INTCombine(),performINSERT_VECTOR_ELTCombine(),performMemPairCombine(),performMULCombine(),performORCombine(),performSELECTCombine(),performSETCCCombine(),performSIGN_EXTEND_INREGCombine(),performSRACombine(),performSUBCombine(),performTRUNCATECombine(),performVECTOR_SHUFFLECombine(),performVFMADD_VLCombine(),performVP_REVERSECombine(),performVP_STORECombine(),performVSELECTCombine(),performVWADDSUBW_VLCombine(),performXORCombine(),llvm::SmallVectorTemplateBase< T, bool 
>::push_back(),llvm::TargetLowering::DAGCombinerInfo::recursivelyDeleteUnusedNodes(),llvm::SelectionDAG::ReplaceAllUsesOfValueWith(),RHS,llvm::RISCVISD::ROLW,llvm::RISCVISD::RORW,llvm::ISD::SDIV,llvm::ISD::SELECT,llvm::RISCVISD::SELECT_CC,llvm::ISD::SETCC,llvm::ISD::SETEQ,llvm::ISD::SETGE,llvm::ISD::SETLT,llvm::ISD::SETNE,llvm::APInt::sext(),llvm::ISD::SHL,llvm::RISCVISD::SHL_VL,llvm::ISD::SIGN_EXTEND,llvm::ISD::SIGN_EXTEND_INREG,llvm::TargetLowering::SimplifyDemandedBits(),llvm::RISCVISD::SLLW,llvm::ISD::SMAX,llvm::ISD::SMIN,llvm::Splat,llvm::ISD::SPLAT_VECTOR,llvm::RISCVISD::SPLAT_VECTOR_SPLIT_I64_VL,llvm::RISCVISD::SplitF64,llvm::ISD::SRA,llvm::RISCVISD::SRA_VL,llvm::RISCVISD::SRAW,llvm::ISD::SREM,llvm::ISD::SRL,llvm::RISCVISD::SRL_VL,llvm::RISCVISD::SRLW,llvm::ISD::STORE,llvm::ISD::STRICT_FP_TO_UINT,llvm::RISCVISD::STRICT_VFMADD_VL,llvm::RISCVISD::STRICT_VFMSUB_VL,llvm::RISCVISD::STRICT_VFNMADD_VL,llvm::RISCVISD::STRICT_VFNMSUB_VL,llvm::ISD::SUB,llvm::RISCVISD::SUB_VL,std::swap(),llvm::APInt::trunc(),llvm::ISD::TRUNCATE,llvm::RISCVISD::TRUNCATE_VECTOR_VL,llvm::RISCVISD::TUPLE_INSERT,llvm::ISD::UDIV,llvm::ISD::UMAX,llvm::ISD::UMIN,llvm::ISD::UNINDEXED,llvm::ISD::UNSIGNED_SCALED,llvm::ISD::UREM,llvm::RISCVSubtarget::useRVVForFixedLengthVectors(),llvm::ISD::VECTOR_SHUFFLE,llvm::RISCVISD::VFMADD_VL,llvm::RISCVISD::VFMSUB_VL,llvm::RISCVISD::VFMV_S_F_VL,llvm::RISCVISD::VFMV_V_F_VL,llvm::RISCVISD::VFNMADD_VL,llvm::RISCVISD::VFNMSUB_VL,llvm::RISCVISD::VFWADD_W_VL,llvm::RISCVISD::VFWSUB_W_VL,llvm::RISCVISD::VMV_S_X_VL,llvm::RISCVISD::VMV_V_X_VL,llvm::RISCVISD::VMV_X_S,llvm::ISD::VSELECT,llvm::RISCVISD::VWADD_W_VL,llvm::RISCVISD::VWADDU_W_VL,llvm::RISCVISD::VWSUB_W_VL,llvm::RISCVISD::VWSUBU_W_VL,llvm::ISD::XOR, andllvm::ISD::ZERO_EXTEND.

◆ preferredShiftLegalizationStrategy()

TargetLowering::ShiftLegalizationStrategy llvm::RISCVTargetLowering::preferredShiftLegalizationStrategy(SelectionDAG &DAG, SDNode *N, unsigned ExpansionFactor) const
inline override virtual

Reimplemented fromllvm::TargetLoweringBase.

Definition at line729 of fileRISCVISelLowering.h.

Referencesllvm::MachineFunction::getFunction(),llvm::SelectionDAG::getMachineFunction(),llvm::Function::hasMinSize(),llvm::TargetLoweringBase::LowerToLibcall,N, andllvm::TargetLoweringBase::preferredShiftLegalizationStrategy().

◆ preferScalarizeSplat()

bool RISCVTargetLowering::preferScalarizeSplat(SDNode *N) const
override virtual

Reimplemented fromllvm::TargetLoweringBase.

Definition at line22228 of fileRISCVISelLowering.cpp.

References N, llvm::ISD::SIGN_EXTEND, and llvm::ISD::ZERO_EXTEND.

◆ preferZeroCompareBranch()

bool llvm::RISCVTargetLowering::preferZeroCompareBranch() const
inline override virtual

Return true if the heuristic to prefer icmp eq zero should be used in code gen prepare.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line704 of fileRISCVISelLowering.h.

◆ ReplaceNodeResults()

void RISCVTargetLowering::ReplaceNodeResults(SDNode *, SmallVectorImpl< SDValue > &, SelectionDAG &) const
override virtual

This callback is invoked when a node result type is illegal for the target, and the operation was registered to use 'custom' lowering for that result type.

The target places new result values for the node in Results (their number and types must exactly match those of the original return values of the node), or leaves Results empty, which indicates that the node is not to be custom lowered after all.

If the target has no operations that require custom lowering, it need not implement this. The default implementation aborts.

Reimplemented fromllvm::TargetLowering.

Definition at line12898 of fileRISCVISelLowering.cpp.

Referencesllvm::ISD::ABS,llvm::RISCVISD::ABSW,llvm::ISD::ADD,llvm::ISD::ANY_EXTEND,assert(),llvm::ISD::BITCAST,llvm::EVT::bitsLT(),llvm::RISCVISD::BREV8,llvm::ISD::BUILD_PAIR,llvm::RISCVISD::CLMUL,llvm::RISCVISD::CLMULH,llvm::RISCVISD::CLMULR,llvm::RISCVISD::CLZW,llvm::SelectionDAG::ComputeNumSignBits(),llvm::ISD::Constant,convertToScalableVector(),llvm::ISD::CTLZ,llvm::ISD::CTLZ_ZERO_UNDEF,llvm::ISD::CTTZ,llvm::ISD::CTTZ_ZERO_UNDEF,llvm::RISCVISD::CTZW,customLegalizeToWOp(),customLegalizeToWOpWithSExt(),DL,llvm::TargetLowering::expandAddSubSat(),llvm::ISD::EXTRACT_VECTOR_ELT,llvm::RISCVISD::FCVT_W_RV64,llvm::RISCVISD::FCVT_WU_RV64,llvm::RISCVISD::FMV_X_ANYEXTH,llvm::RISCVISD::FMV_X_ANYEXTW_RV64,llvm::ISD::FP_EXTEND,llvm::ISD::FP_TO_SINT,llvm::ISD::FP_TO_UINT,llvm::ISD::GET_ROUNDING,llvm::Function::getAttributes(),llvm::LoadSDNode::getBasePtr(),llvm::SelectionDAG::getBitcast(),llvm::MemSDNode::getChain(),llvm::SelectionDAG::getConstant(),getContainerForFixedLengthVector(),llvm::SelectionDAG::getContext(),getDefaultVLOps(),llvm::SelectionDAG::getExtLoad(),llvm::RTLIB::getFPTOSINT(),llvm::RTLIB::getFPTOUINT(),llvm::SelectionDAG::getFreeze(),llvm::MachineFunction::getFunction(),llvm::APInt::getHighBitsSet(),llvm::SelectionDAG::getMachineFunction(),llvm::MemSDNode::getMemOperand(),llvm::MemSDNode::getMemoryVT(),llvm::SelectionDAG::getNode(),llvm::SelectionDAG::getSetCC(),llvm::SDValue::getSimpleValueType(),llvm::SelectionDAG::getTargetConstant(),llvm::TargetLoweringBase::getTypeAction(),llvm::SelectionDAG::getUNDEF(),llvm::SDValue::getValue(),llvm::SDValue::getValueType(),llvm::SelectionDAG::getValueType(),llvm::MVT::getVectorElementType(),llvm::SelectionDAG::getVectorIdxConstant(),llvm::EVT::getVectorVT(),getVSlidedown(),llvm::SelectionDAG::getVTList(),llvm::RISCVSubtarget::getXLen(),llvm::RISCVSubtarget::getXLenVT(),llvm::RISCVSubtarget::hasStdExtDOrZdinx(),llvm::RISCVSubtarget::hasStdExtFOrZfinx(),llvm::RISCVSubtarget::hasStdExtZfhminOrZhinxmin(),llvm::RISCVSubtarget::hasStdExtZfhOrZhinx(),llvm::Hi,Idx,llvm::ISD::INTRINSIC_WO_CHAIN,llvm::RISCVSubtarget::is64Bit(),llvm::isAllOnesConstant(),llvm::EVT::isFixedLengthVector(),llvm::MVT::isFixedLengthVector(),isIntDivCheap(),llvm::EVT::isInteger(),llvm::ISD::isNON_EXTLoad(),llvm::isNullConstant(),llvm::isOneConstant(),llvm::TargetLoweringBase::isTypeLegal(),llvm::EVT::isVector(),LHS,llvm_unreachable,llvm::Lo,llvm::ISD::LOAD,lowerCttzElts(),lowerGetVectorLength(),llvm::ISD::LROUND,llvm::TargetLowering::makeLibCall(),llvm::SelectionDAG::MaskedValueIsZero(),llvm::RISCVISD::MOPR,llvm::RISCVISD::MOPRR,llvm::ISD::MUL,llvm::RISCVISD::MULHSU,N,llvm::RISCVISD::ORC_B,llvm::RISCVISD::READ_COUNTER_WIDE,llvm::ISD::READCYCLECOUNTER,llvm::ISD::READSTEADYCOUNTER,Results,RHS,llvm::RISCVFPRndMode::RMM,llvm::ISD::ROTL,llvm::ISD::ROTR,llvm::RISCVFPRndMode::RTZ,llvm::ISD::SADDO,llvm::ISD::SADDSAT,llvm::ISD::SDIV,llvm::ISD::SETEQ,llvm::ISD::SETLT,llvm::ISD::SETNE,llvm::TargetLowering::MakeLibCallOptions::setTypeListBeforeSoften(),llvm::ISD::SETUGT,llvm::ISD::SETULT,llvm::ISD::SEXTLOAD,llvm::RISCVISD::SHA256SIG0,llvm::RISCVISD::SHA256SIG1,llvm::RISCVISD::SHA256SUM0,llvm::RISCVISD::SHA256SUM1,llvm::ISD::SHL,llvm::ISD::SIGN_EXTEND,llvm::ISD::SIGN_EXTEND_INREG,Size,llvm::RISCVISD::SM3P0,llvm::RISCVISD::SM3P1,llvm::RISCVISD::SM4ED,llvm::RISCVISD::SM4KS,llvm::RISCVISD::SplitF64,llvm::ISD::SRA,llvm::ISD::SRL,llvm::RISCVISD::SRL_VL,llvm::ISD::SSUBSAT,llvm::RISCVISD::STRICT_FCVT_W_RV64,llvm::RISCVISD::STRICT_FCVT_WU_RV64,llvm::ISD::STRICT_FP_EXTEND,llvm::ISD::STRICT_FP_TO_SINT,
llvm::ISD::STRICT_FP_TO_UINT,llvm::ISD::SUB,llvm::ISD::TRUNCATE,llvm::TargetLoweringBase::TypeSoftenFloat,llvm::ISD::UADDO,llvm::ISD::UADDSAT,llvm::ISD::UDIV,llvm::ISD::UREM,llvm::ISD::USUBO,llvm::ISD::USUBSAT,llvm::ISD::VECREDUCE_ADD,llvm::ISD::VECREDUCE_AND,llvm::ISD::VECREDUCE_OR,llvm::ISD::VECREDUCE_SMAX,llvm::ISD::VECREDUCE_SMIN,llvm::ISD::VECREDUCE_UMAX,llvm::ISD::VECREDUCE_UMIN,llvm::ISD::VECREDUCE_XOR,llvm::RISCVISD::VMV_V_X_VL,llvm::RISCVISD::VMV_X_S,llvm::ISD::XOR, andllvm::ISD::ZERO_EXTEND.

◆ shouldConsiderGEPOffsetSplit()

bool llvm::RISCVTargetLowering::shouldConsiderGEPOffsetSplit() const
inline override virtual

Reimplemented fromllvm::TargetLoweringBase.

Definition at line781 of fileRISCVISelLowering.h.

◆ shouldConvertConstantLoadToIntImm()

bool RISCVTargetLowering::shouldConvertConstantLoadToIntImm(const APInt &Imm, Type *Ty) const
override virtual

Return true if it is beneficial to convert a load of a constant to just the constant itself.

On some targets it might be more efficient to use a combination of arithmetic instructions to materialize the constant instead of loading it from a constant pool.
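For example, an illustrative query only (TLI and Int64Ty stand for an existing RISCVTargetLowering and the i64 Type; both are assumptions of this sketch): a pass can check whether a 64-bit constant is better rebuilt with integer instructions than loaded from the constant pool.

APInt Imm(64, 0x00ff00ff00ff00ffULL);  // a value the RISC-V materialization helpers may build inline
bool BuildInline = TLI.shouldConvertConstantLoadToIntImm(Imm, Int64Ty);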

Reimplemented fromllvm::TargetLoweringBase.

Definition at line2062 of fileRISCVISelLowering.cpp.

Referencesassert(),llvm::RISCVMatInt::generateInstSeq(),llvm::Type::getIntegerBitWidth(),llvm::RISCVSubtarget::getMaxBuildIntsCost(),llvm::RISCVSubtarget::getXLen(),llvm::Type::isIntegerTy(), andllvm::SmallVectorBase< Size_T >::size().

◆ shouldConvertFpToSat()

bool RISCVTargetLowering::shouldConvertFpToSat(unsigned Op, EVT FPVT, EVT VT) const
override virtual

Should we generate fp_to_si_sat and fp_to_ui_sat from type FPVT to type VT from min(max(fptoi)) saturation patterns.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line21716 of fileRISCVISelLowering.cpp.

Referencesllvm::EVT::getSimpleVT(),llvm::TargetLoweringBase::isOperationLegalOrCustom(),llvm::EVT::isSimple(), andllvm::MVT::SimpleTy.

◆ shouldExpandAtomicCmpXchgInIR()

TargetLowering::AtomicExpansionKind RISCVTargetLowering::shouldExpandAtomicCmpXchgInIR(AtomicCmpXchgInst *AI) const
override virtual

Returns how the given atomic cmpxchg should be expanded by the IR-level AtomicExpand pass.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line21673 of fileRISCVISelLowering.cpp.

Referencesllvm::AtomicCmpXchgInst::getCompareOperand(),llvm::Type::getPrimitiveSizeInBits(),llvm::Value::getType(),llvm::TargetLoweringBase::MaskedIntrinsic,llvm::TargetLoweringBase::None, andSize.

◆ shouldExpandAtomicRMWInIR()

TargetLowering::AtomicExpansionKind RISCVTargetLowering::shouldExpandAtomicRMWInIR(AtomicRMWInst *RMW) const
override virtual

Returns how the IR-level AtomicExpand pass should expand the given AtomicRMW, if at all.

Default is to never expand.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line21532 of fileRISCVISelLowering.cpp.

Referencesllvm::TargetLoweringBase::CmpXChg,llvm::AtomicRMWInst::getOperation(),llvm::Type::getPrimitiveSizeInBits(),llvm::Value::getType(),llvm::AtomicRMWInst::isFloatingPointOperation(),llvm::TargetLoweringBase::MaskedIntrinsic,llvm::AtomicRMWInst::Nand,llvm::TargetLoweringBase::None,Size,llvm::AtomicRMWInst::UDecWrap,llvm::AtomicRMWInst::UIncWrap,llvm::AtomicRMWInst::USubCond, andllvm::AtomicRMWInst::USubSat.

◆ shouldExpandBuildVectorWithShuffles()

bool RISCVTargetLowering::shouldExpandBuildVectorWithShuffles(EVT VT, unsigned DefinedValues) const
override virtual

Reimplemented fromllvm::TargetLoweringBase.

Definition at line2821 of fileRISCVISelLowering.cpp.

◆ shouldExpandCttzElements()

bool RISCVTargetLowering::shouldExpandCttzElements(EVT VT) const
override virtual

Return true if the @llvm.experimental.cttz.elts intrinsic should be expanded using generic code in SelectionDAGBuilder.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 1616 of file RISCVISelLowering.cpp.

Referencesllvm::EVT::getVectorElementType(),llvm::RISCVSubtarget::hasVInstructions(), andllvm::TargetLoweringBase::isTypeLegal().

Referenced by llvm::RISCVTTIImpl::getIntrinsicInstrCost().

◆ shouldExtendTypeInLibCall()

bool RISCVTargetLowering::shouldExtendTypeInLibCall(EVT Type) const
override virtual

Returns true if arguments should be extended in lib calls.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line21905 of fileRISCVISelLowering.cpp.

References llvm::RISCVSubtarget::getXLen(), and llvm::RISCVSubtarget::isSoftFPABI().

◆ shouldFoldSelectWithIdentityConstant()

bool RISCVTargetLowering::shouldFoldSelectWithIdentityConstant(unsigned BinOpcode, EVT VT) const
override virtual

Return true if pulling a binary operation into a select with an identity constant is profitable.

This is the inverse of an IR transform. Example: X + (Cond ? Y : 0) --> Cond ? (X + Y) : X
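In source terms, a toy example only (the function names are hypothetical): the hook controls whether the first form below is turned back into the second during DAG combining.

int keep_select_outside(bool Cond, int X, int Y) {
  return X + (Cond ? Y : 0);   // binop of a select with the identity constant 0
}
int fold_into_select(bool Cond, int X, int Y) {
  return Cond ? (X + Y) : X;   // profitable when a predicated/masked op exists
}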

Reimplemented fromllvm::TargetLoweringBase.

Definition at line2050 of fileRISCVISelLowering.cpp.

Referencesllvm::RISCVSubtarget::hasVInstructions(),llvm::EVT::isFixedLengthVector(),llvm::TargetLoweringBase::isTypeLegal(), andllvm::EVT::isVector().

◆ shouldFormOverflowOp()

bool llvm::RISCVTargetLowering::shouldFormOverflowOp(unsigned Opcode, EVT VT, bool MathUsed) const
inline override virtual

Try to convert math with an overflow comparison into the corresponding DAG node operation.

Targets may want to override this independently of whether the operation is legal/custom for the given type because it may obscure matching of other patterns.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line680 of fileRISCVISelLowering.h.

References llvm::TargetLoweringBase::shouldFormOverflowOp().

◆ shouldInsertFencesForAtomic()

bool RISCVTargetLowering::shouldInsertFencesForAtomic(const Instruction *I) const
override virtual

Whether AtomicExpandPass should automatically insert fences and reduce ordering for this atomic.

This should be true for most architectures with weak memory ordering. Defaults to false.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 22712 of file RISCVISelLowering.cpp.

References I, and llvm::SequentiallyConsistent.

◆ shouldProduceAndByConstByHoistingConstFromShiftsLHSOfAnd()

bool RISCVTargetLowering::shouldProduceAndByConstByHoistingConstFromShiftsLHSOfAnd(SDValue X, ConstantSDNode *XC, ConstantSDNode *CC, SDValue Y, unsigned OldShiftOpcode, unsigned NewShiftOpcode, SelectionDAG &DAG) const
override virtual

Given the pattern (X & (C l>>/<< Y)) ==/!= 0 return true if it should be transformed into: ((X <</l>> Y) & C) ==/!= 0 WARNING: if 'X' is a constant, the fold may deadlock! FIXME: we could avoid passing XC, but we can't useisConstOrConstSplat() here because it can end up being not linked in.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line2092 of fileRISCVISelLowering.cpp.

References CC, and llvm::ISD::SRL.

◆ shouldRemoveExtendFromGSIndex()

bool RISCVTargetLowering::shouldRemoveExtendFromGSIndex(SDValue Extend, EVT DataVT) const
override virtual

Reimplemented fromllvm::TargetLoweringBase.

Definition at line21706 of fileRISCVISelLowering.cpp.

Referencesllvm::SDValue::getOpcode(),llvm::SDValue::getOperand(),llvm::SDValue::getValueType(),llvm::EVT::getVectorElementType(),llvm::TargetLoweringBase::isTypeLegal(), andllvm::ISD::ZERO_EXTEND.

◆ shouldScalarizeBinop()

bool RISCVTargetLowering::shouldScalarizeBinop(SDValue VecOp) const
override virtual

Try to convert an extract element of a vector binary operation into an extract element followed by a scalar operation.

Reimplemented fromllvm::TargetLoweringBase.

Definition at line2116 of fileRISCVISelLowering.cpp.

Referencesllvm::ISD::BUILTIN_OP_END,llvm::SDValue::getOpcode(),llvm::EVT::getScalarType(),llvm::SDValue::getValueType(),llvm::TargetLoweringBase::isBinOp(),llvm::TargetLoweringBase::isOperationCustom(), andllvm::TargetLoweringBase::isOperationLegalOrCustomOrPromote().

◆ shouldSignExtendTypeInLibCall()

bool RISCVTargetLowering::shouldSignExtendTypeInLibCall(Type *Ty, bool IsSigned) const [override, virtual]

Returns true if arguments should be sign-extended in lib calls.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 21915 of file RISCVISelLowering.cpp.

References llvm::RISCVSubtarget::is64Bit(), and llvm::Type::isIntegerTy().
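
A minimal sketch, assuming a hypothetical MyTargetLowering subclass with an Is64Bit flag (not the in-tree RISC-V implementation), of why a 64-bit RISC-V-style target answers "yes" for 32-bit integers: the LP64 calling conventions pass 32-bit integer arguments sign-extended to 64 bits regardless of their signedness, so libcall arguments of type i32 should be sign- rather than zero-extended.

  bool MyTargetLowering::shouldSignExtendTypeInLibCall(Type *Ty, bool IsSigned) const {
    // On a 64-bit target, 32-bit integers are always passed sign-extended.
    if (Is64Bit && Ty->isIntegerTy(32))
      return true;
    return IsSigned;
  }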

◆ shouldTransformSignedTruncationCheck()

bool RISCVTargetLowering::shouldTransformSignedTruncationCheck(EVT XVT, unsigned KeptBits) const [override, virtual]

Should we transform the IR-optimal check for whether the given truncation down to KeptBits would be truncating or not: (add x, (1 << (KeptBits-1))) srccond (1 << KeptBits) into its more traditional form: ((x << C) a>> C) dstcond x. Return true if we should transform.

Return false if there is no preference.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 18677 of file RISCVISelLowering.cpp.

References llvm::RISCVSubtarget::is64Bit(), and llvm::EVT::isVector().
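
A worked instance with KeptBits = 8 and a 32-bit x, purely to make the two forms concrete (this illustrates the generic hook, not RISC-V-specific policy):

  #include <cstdint>

  // "IR-optimal" range check: x fits in 8 signed bits iff
  //   (x + (1 << 7)) u< (1 << 8)
  bool fitsInI8_rangeCheck(int32_t x) { return (uint32_t)x + 128u < 256u; }

  // "Traditional" shift form with C = 32 - KeptBits = 24:
  //   ((x << 24) a>> 24) == x, i.e. sign-extending the low 8 bits gives x back.
  bool fitsInI8_shiftCheck(int32_t x) {
    return ((int32_t)((uint32_t)x << 24) >> 24) == x;
  }

Returning true from the hook asks the combiner to prefer the shift-based form, which a target can often match with a single sign-extension instruction (e.g. sext.b).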

◆ signExtendConstant()

bool RISCVTargetLowering::signExtendConstant(const ConstantInt *CI) const [override, virtual]

Return true if this constant should be sign extended when promoting to a larger type.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 1997 of file RISCVISelLowering.cpp.

References llvm::Value::getType(), llvm::RISCVSubtarget::is64Bit(), and llvm::Type::isIntegerTy().
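
A minimal sketch, assuming a hypothetical MyTargetLowering subclass with an Is64Bit flag (not necessarily the in-tree RISC-V implementation), of the rationale: on a 64-bit RISC-V-style target, 32-bit values live in registers in sign-extended form (addiw, sext.w), so promoting a 32-bit constant by sign extension keeps it consistent with the values it is combined or compared with.

  bool MyTargetLowering::signExtendConstant(const ConstantInt *CI) const {
    return Is64Bit && CI->getType()->isIntegerTy(32);
  }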

◆ softPromoteHalfType()

bool llvm::RISCVTargetLowering::softPromoteHalfType() const [inline, override, virtual]

Reimplemented from llvm::TargetLoweringBase.

Definition at line 554 of file RISCVISelLowering.h.

◆ splitValueIntoRegisterParts()

bool RISCVTargetLowering::splitValueIntoRegisterParts(SelectionDAG &DAG, const SDLoc &DL, SDValue Val, SDValue *Parts, unsigned NumParts, MVT PartVT, std::optional<CallingConv::ID> CC) const [override, virtual]

Target-specific splitting of values into parts that fit a register storing a legal type.

Reimplemented from llvm::TargetLowering.

Definition at line 22059 of file RISCVISelLowering.cpp.

References llvm::ISD::ANY_EXTEND, assert(), llvm::ISD::BITCAST, llvm::RISCVISD::BuildGPRPair, CC, llvm::divideCeil(), DL, llvm::SelectionDAG::getBitcast(), llvm::SelectionDAG::getConstant(), llvm::SelectionDAG::getContext(), llvm::EVT::getFixedSizeInBits(), llvm::details::FixedOrScalableQuantity< LeafTy, ValueTy >::getKnownMinValue(), llvm::SelectionDAG::getNode(), llvm::EVT::getRISCVVectorTupleNumFields(), llvm::MVT::getRISCVVectorTupleNumFields(), llvm::EVT::getSizeInBits(), llvm::MVT::getSizeInBits(), llvm::SelectionDAG::getUNDEF(), llvm::SDValue::getValueType(), llvm::EVT::getVectorElementType(), llvm::MVT::getVectorElementType(), llvm::SelectionDAG::getVectorIdxConstant(), llvm::EVT::getVectorVT(), llvm::RISCVSubtarget::getXLenVT(), llvm::Hi, llvm::ISD::INSERT_SUBVECTOR, llvm::RISCVSubtarget::is64Bit(), llvm::EVT::isRISCVVectorTuple(), llvm::MVT::isRISCVVectorTuple(), llvm::EVT::isScalableVector(), llvm::MVT::isScalableVector(), llvm::Lo, llvm::ISD::OR, llvm::RISCV::RVVBitsPerBlock, llvm::SelectionDAG::SplitScalar(), and llvm::RISCVISD::TUPLE_INSERT.
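
A minimal sketch, assuming a hypothetical MyTargetLowering subclass (not the in-tree RISC-V implementation), of the simplest case such a hook handles on a 32-bit target: splitting an i64 value into two i32 register parts. Returning false falls back to the generic splitting logic.

  bool MyTargetLowering::splitValueIntoRegisterParts(
      SelectionDAG &DAG, const SDLoc &DL, SDValue Val, SDValue *Parts,
      unsigned NumParts, MVT PartVT, std::optional<CallingConv::ID> CC) const {
    EVT ValueVT = Val.getValueType();
    if (ValueVT == MVT::i64 && PartVT == MVT::i32 && NumParts == 2) {
      // Emit the low and high 32-bit halves as the two register parts.
      auto [Lo, Hi] = DAG.SplitScalar(Val, DL, MVT::i32, MVT::i32);
      Parts[0] = Lo;
      Parts[1] = Hi;
      return true;
    }
    return false;
  }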

◆ storeOfVectorConstantIsCheap()

bool llvm::RISCVTargetLowering::storeOfVectorConstantIsCheap(bool IsZero, EVT MemVT, unsigned NumElem, unsigned AddrSpace) const [inline, override, virtual]

Return true if it is expected to be cheaper to do a store of vector constant with the given size and type for the address space than to store the individual scalar element constants.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 688 of file RISCVISelLowering.h.
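
A minimal sketch, assuming a hypothetical MyTargetLowering subclass (not the in-tree RISC-V implementation), of how a target with native vector stores might answer: one vector store of a constant is usually cheaper than NumElem scalar stores whenever the vector type is legal.

  bool MyTargetLowering::storeOfVectorConstantIsCheap(bool IsZero, EVT MemVT,
                                                      unsigned NumElem,
                                                      unsigned AddrSpace) const {
    // Prefer a single vector store over a run of scalar stores when the
    // memory type can be handled natively.
    return isTypeLegal(MemVT);
  }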

◆ supportKCFIBundles()

bool llvm::RISCVTargetLowering::supportKCFIBundles() const [inline, override, virtual]

Return true if the target supports kcfi operand bundles.

Reimplemented from llvm::TargetLowering.

Definition at line 913 of file RISCVISelLowering.h.

◆ targetShrinkDemandedConstant()

bool RISCVTargetLowering::targetShrinkDemandedConstant(SDValue Op, const APInt &DemandedBits, const APInt &DemandedElts, TargetLoweringOpt &TLO) const [override, virtual]

Reimplemented from llvm::TargetLowering.

Definition at line 18784 of file RISCVISelLowering.cpp.

References llvm::ISD::AND, assert(), llvm::CallingConv::C, llvm::TargetLowering::TargetLoweringOpt::CombineTo(), llvm::TargetLowering::TargetLoweringOpt::DAG, DL, llvm::SelectionDAG::getConstant(), llvm::SelectionDAG::getNode(), llvm::APInt::getSignificantBits(), llvm::APInt::isNegative(), llvm::APInt::isSignedIntN(), llvm::APInt::isSubsetOf(), llvm::EVT::isVector(), llvm::TargetLowering::TargetLoweringOpt::LegalOps, llvm::ISD::OR, llvm::APInt::setBitsFrom(), and llvm::ISD::XOR.


The documentation for this class was generated from the following files:

RISCVISelLowering.h
RISCVISelLowering.cpp

Generated on Sun Jul 20 2025 19:49:27 for LLVM by doxygen 1.9.6