The following choices of name are recognized for all targets:
auto-profile-bbs ¶ If non-zero and used together with -fauto-profile, the auto-profile is used to determine the basic block profile. If zero, then only the function-level profile is read.
auto-profile-reorder-only ¶ Enable only function reordering with auto-profile.
phiopt-factor-max-stmts-live ¶ When factoring statements out of if/then/else, this is the maximum number of statements after the defining statement allowed to extend the lifetime of a name.
predictable-branch-outcome ¶ When a branch is predicted to be taken with probability lower than this threshold (as a percentage), it is considered well-predictable.
max-rtl-if-conversion-insns ¶ RTL if-conversion tries to remove conditional branches around a block and replace them with conditionally executed instructions. This parameter gives the maximum number of instructions in a block that should be considered for if-conversion. The compiler also uses other heuristics to decide whether if-conversion is likely to be profitable.
file-cache-files ¶ The maximum number of files in the file cache. The file cache is used to print source lines in diagnostics and to do some source checks like -Wmisleading-indentation.
file-cache-lines ¶ Maximum number of lines to index into the file cache. When zero, this is sized automatically. The file cache is used to print source lines in diagnostics and to do some source checks like -Wmisleading-indentation.
max-rtl-if-conversion-predictable-cost ¶ RTL if-conversion tries to remove conditional branches around a block and replace them with conditionally executed instructions. These parameters give the maximum permissible cost for the sequence that would be generated by if-conversion, depending on whether the branch is statically determined to be predictable or not. The units for this parameter are the same as those for the GCC internal seq_cost metric. The compiler tries to provide a reasonable default for this parameter using the BRANCH_COST target macro.
max-crossjump-edges ¶ The maximum number of incoming edges to consider for cross-jumping. The algorithm used by -fcrossjumping is O(N^2) in the number of edges incoming to each block. Increasing values mean more aggressive optimization, making the compilation time increase with probably small improvement in executable size.
min-crossjump-insns ¶ The minimum number of instructions that must be matched at the end of two blocks before cross-jumping is performed on them. This value is ignored in the case where all instructions in the block being cross-jumped from are matched.
max-grow-copy-bb-insns ¶ The maximum code size expansion factor when copying basic blocks instead of jumping. The expansion is relative to a jump instruction.
max-goto-duplication-insns ¶ The maximum number of instructions to duplicate to a block that jumps to a computed goto. To avoid O(N^2) behavior in a number of passes, GCC factors computed gotos early in the compilation process, and unfactors them as late as possible. Only computed jumps at the end of basic blocks with no more than max-goto-duplication-insns instructions are unfactored.
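A computed goto is the kind of jump this parameter governs; as a hedged sketch (the dispatcher and its labels are hypothetical, using the GNU labels-as-values extension), GCC routes all such jumps through a single factored block during most of compilation and duplicates the dispatch code back into small predecessor blocks at the end:

     /* Hypothetical byte-code dispatcher; each "goto *..." is a
        computed goto that GCC factors and later unfactors.  */
     void dispatch (const unsigned char *ops, int n)
     {
       void *table[] = { &&op_nop, &&op_halt };
       int i = 0;
     next:
       if (i >= n)
         return;
       goto *table[ops[i++]];   /* the computed goto */
     op_nop:
       goto next;
     op_halt:
       return;
     }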
max-delay-slot-insn-search ¶ The maximum number of instructions to consider when looking for an instruction to fill a delay slot. If more than this arbitrary number of instructions are searched, the time savings from filling the delay slot are minimal, so stop searching. Increasing values mean more aggressive optimization, making the compilation time increase with probably small improvement in execution time.
max-delay-slot-live-search ¶ When trying to fill delay slots, the maximum number of instructions to consider when searching for a block with valid live register information. Increasing this arbitrarily-chosen value means more aggressive optimization, increasing the compilation time. This parameter should be removed when the delay slot code is rewritten to maintain the control-flow graph.
max-devirt-targets ¶ This limits the number of functions to which a virtual call may be speculatively devirtualized using static analysis (without profile feedback).
max-gcse-memory ¶ The approximate maximum amount of memory in kB that can be allocated in order to perform the global common subexpression elimination optimization. If more memory than specified is required, the optimization is not done.
max-gcse-insertion-ratio ¶ If the ratio of expression insertions to deletions is larger than this value for any expression, then RTL PRE inserts or removes the expression and thus leaves partially redundant computations in the instruction stream.
max-pending-list-length ¶ The maximum number of pending dependencies scheduling allows before flushing the current state and starting over. Large functions with few branches or calls can create excessively large lists which needlessly consume memory and resources.
max-modulo-backtrack-attempts ¶ The maximum number of backtrack attempts the scheduler should make when modulo scheduling a loop. Larger values can exponentially increase compilation time.
max-inline-functions-called-once-loop-depth ¶ Maximal loop depth of a call considered by inline heuristics that try to inline all functions called once.
max-inline-functions-called-once-insns ¶ Maximal estimated size of functions produced while inlining functions called once.
max-inline-insns-single ¶ Several parameters control the tree inliner used in GCC. This number sets the maximum number of instructions (counted in GCC’s internal representation) in a single function that the tree inliner considers for inlining. This only affects functions declared inline and methods implemented in a class declaration (C++).
max-inline-insns-auto ¶ When you use -finline-functions (included in -O3), a lot of functions that would otherwise not be considered for inlining by the compiler are investigated. To those functions, a different (more restrictive) limit compared to functions declared inline can be applied (--param max-inline-insns-auto).
max-inline-insns-small ¶ This is the bound applied to calls that are considered relevant with -finline-small-functions.
max-inline-insns-size ¶ This is the bound applied to calls that are optimized for size. Small growth may be desirable to anticipate optimization opportunities exposed by inlining.
uninlined-function-insns ¶ Number of instructions accounted by the inliner for function overhead such as the function prologue and epilogue.
uninlined-function-time ¶ Extra time accounted by the inliner for function overhead such as the time needed to execute the function prologue and epilogue.
inline-heuristics-hint-percent ¶ The scale (as a percentage) applied to inline-insns-single, inline-insns-single-O2, and inline-insns-auto when inline heuristics hint that inlining is very profitable. It enables later optimizations.
uninlined-thunk-insns ¶ uninlined-thunk-time ¶ Same as --param uninlined-function-insns and --param uninlined-function-time, but applied to function thunks.
inline-min-speedup ¶ When the estimated performance improvement of caller + callee runtime exceeds this threshold (as a percentage), the function can be inlined regardless of the limit on --param max-inline-insns-single and --param max-inline-insns-auto.
large-function-insns ¶ The limit specifying really large functions. For functions larger than this limit after inlining, inlining is constrained by --param large-function-growth. This parameter is useful primarily to avoid extreme compilation time caused by non-linear algorithms used by the back end.
large-function-growth ¶ Specifies maximal growth of large functions caused by inlining, as a percentage. For example, a parameter value of 100 limits large function growth to 2.0 times the original size.
large-unit-insns ¶ The limit specifying a large translation unit. Growth caused by inlining of units larger than this limit is limited by --param inline-unit-growth. For small units this might be too tight. For example, consider a unit consisting of function A that is inline and B that just calls A three times. If B is small relative to A, the growth of the unit is 300% and yet such inlining is very sane. For very large units consisting of small inlineable functions, however, the overall unit growth limit is needed to avoid exponential explosion of code size. Thus for smaller units, the size is increased to --param large-unit-insns before applying --param inline-unit-growth.
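A minimal sketch of the A/B situation described above (the function bodies are illustrative): inlining the three calls roughly triples this tiny unit, which is why the small-unit allowance exists:

     /* A is declared inline and B calls it three times; inlining all
        three calls grows this small translation unit by about 300%.  */
     static inline int A (int x) { return x * x + 1; }

     int B (int a, int b, int c)
     {
       return A (a) + A (b) + A (c);
     }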
lazy-modules ¶ Maximum number of concurrently open C++ module files when lazy loading.
inline-unit-growth ¶ Specifies maximal overall growth of the compilation unit caused by inlining. For example, parameter value 20 limits unit growth to 1.2 times the original size. Cold functions (either marked cold via an attribute or by profile feedback) are not accounted into the unit size.
ipa-cp-unit-growth ¶ Specifies maximal overall growth of the compilation unit caused by interprocedural constant propagation. For example, parameter value 10 limits unit growth to 1.1 times the original size.
ipa-cp-large-unit-insns ¶ The size of translation unit that the IPA-CP pass considers large.
large-stack-frame ¶ The limit specifying large stack frames. While inlining, the algorithm tries to not grow past this limit too much.
large-stack-frame-growth ¶ Specifies maximal growth of large stack frames caused by inlining, as a percentage of the original size. For example, parameter value 1000 limits large stack frame growth to 11 times the original size.
max-inline-insns-recursive ¶ max-inline-insns-recursive-auto ¶ Specifies the maximum number of instructions an out-of-line copy of a self-recursive inline function can grow into by performing recursive inlining.
--param max-inline-insns-recursive applies to functions declared inline. For functions not declared inline, recursive inlining happens only when -finline-functions (included in -O3) is enabled; --param max-inline-insns-recursive-auto applies instead.
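As a hedged illustration (a toy function, not taken from the GCC sources), recursive inlining turns a few levels of a self-recursive inline function into straight-line code in its out-of-line copy, up to the bounds above:

     /* A self-recursive function declared inline;
        --param max-inline-insns-recursive bounds how large its
        out-of-line copy may grow when the recursive calls are inlined.  */
     static inline unsigned fib (unsigned n)
     {
       return n < 2 ? n : fib (n - 1) + fib (n - 2);
     }

     unsigned fib10 (void) { return fib (10); }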
max-inline-recursive-depth ¶ max-inline-recursive-depth-auto ¶ Specifies the maximum recursion depth used for recursive inlining.
--param max-inline-recursive-depth applies to functions declared inline. For functions not declared inline, recursive inlining happens only when -finline-functions (included in -O3) is enabled; --param max-inline-recursive-depth-auto applies instead.
min-inline-recursive-probability ¶ Recursive inlining is profitable only for functions having deep recursion on average and can hurt for functions having little recursion depth by increasing the prologue size or the complexity of the function body to other optimizers.
When profile feedback is available (see -fprofile-generate), the actual recursion depth can be guessed from the probability that the function recurses via a given call expression. This parameter limits inlining only to call expressions whose probability exceeds the given threshold (as a percentage).
early-inlining-insns ¶ Specify growth that the early inliner can make. In effect it increases the amount of inlining for code having a large abstraction penalty.
max-early-inliner-iterations ¶ Limit of iterations of the early inliner. This basically bounds the number of nested indirect calls the early inliner can resolve. Deeper chains are still handled by late inlining.
comdat-sharing-probability ¶ Probability (as a percentage) that C++ inline functions with comdat visibility are shared across multiple compilation units.
modref-max-bases ¶ modref-max-refs ¶ modref-max-accesses ¶ Specifies the maximal number of base pointers, references and accesses stored for a single function by mod/ref analysis.
modref-max-tests ¶ Specifies the maximal number of tests the alias oracle can perform to disambiguate memory locations using the mod/ref information. This parameter ought to be bigger than --param modref-max-bases and --param modref-max-refs.
modref-max-depth ¶ Specifies the maximum depth of the DFS walk used by modref escape analysis. Setting to 0 disables the analysis completely.
modref-max-escape-points ¶ Specifies the maximum number of escape points tracked by modref per SSA name.
modref-max-adjustments ¶ Specifies the maximum number of times the access range is enlarged during modref dataflow analysis.
profile-func-internal-id ¶ A parameter to control whether to use the function internal id in profile database lookup. If the value is 0, the compiler uses an id that is based on the function assembler name and filename, which makes old profile data more tolerant to source changes such as function reordering.
min-vect-loop-bound ¶ The minimum number of iterations under which loops are not vectorized when -ftree-vectorize is used. The number of iterations after vectorization needs to be greater than the value specified by this option to allow vectorization.
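For illustration (a generic loop, not from the manual), this is the kind of loop the bound applies to; with -ftree-vectorize, GCC only vectorizes it when the iteration count remaining after vectorization exceeds min-vect-loop-bound:

     /* Candidate for loop vectorization; for small n, whether it is
        vectorized also depends on min-vect-loop-bound.  */
     void saxpy (float *__restrict y, const float *__restrict x,
                 float a, int n)
     {
       for (int i = 0; i < n; i++)
         y[i] += a * x[i];
     }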
gcse-cost-distance-ratio ¶ Scaling factor in the calculation of the maximum distance an expression can be moved by GCSE optimizations. This is currently supported only in the code hoisting pass. The bigger the ratio, the more aggressive code hoisting is with simple expressions, i.e., the expressions that have cost less than gcse-unrestricted-cost. Specifying 0 disables hoisting of simple expressions.
gcse-unrestricted-cost ¶ Cost, roughly measured as the cost of a single typical machine instruction, at which GCSE optimizations do not constrain the distance an expression can travel. This is currently supported only in the code hoisting pass. The lesser the cost, the more aggressive code hoisting is. Specifying 0 allows all expressions to travel unrestricted distances.
max-hoist-depth ¶ The depth of search in the dominator tree for expressions to hoist. This is used to avoid quadratic behavior in the hoisting algorithm. A value of 0 does not limit the search, but may slow down compilation of huge functions.
max-tail-merge-comparisons ¶ The maximum number of similar bbs to compare a bb with. This is used to avoid quadratic behavior in tree tail merging.
max-tail-merge-iterations ¶ The maximum number of iterations of the tree tail merging pass over a function. This is used to limit compilation time in this pass.
store-merging-allow-unaligned ¶ Allow the store merging pass to introduce unaligned stores if it is legal to do so.
max-stores-to-merge ¶ The maximum number of stores to attempt to merge into wider stores in the store merging pass.
max-store-chains-to-track ¶ The maximum number of store chains to track at the same time in the attempt to merge them into wider stores in the store merging pass.
max-stores-to-track ¶ The maximum number of stores to track at the same time in the attempt to merge them into wider stores in the store merging pass.
max-unrolled-insns ¶ The maximum number of instructions that a loop may have to be unrolled. If a loop is unrolled, this parameter also determines how many times the loop code is unrolled.
max-average-unrolled-insns ¶ The maximum number of instructions biased by probabilities of their execution that a loop may have to be unrolled. If a loop is unrolled, this parameter also determines how many times the loop code is unrolled.
max-unroll-times ¶ The maximum number of unrollings of a single loop.
max-peeled-insns ¶ The maximum number of instructions that a loop may have to be peeled. If a loop is peeled, this parameter also determines how many times the loop code is peeled.
max-peel-times ¶ The maximum number of peelings of a single loop.
max-peel-branches ¶ The maximum number of branches on the hot path through the peeled sequence.
max-completely-peeled-insns ¶ The maximum number of insns of a completely peeled loop.
max-completely-peel-times ¶ The maximum number of iterations of a loop to be suitable for complete peeling.
max-completely-peel-loop-nest-depth ¶ The maximum depth of a loop nest suitable for complete peeling.
max-unswitch-insns ¶ The maximum number of insns of an unswitched loop.
max-unswitch-depth ¶ The maximum depth of a loop nest to be unswitched.
lim-expensive ¶ The minimum cost of an expensive expression in the loop invariant motion pass.
min-loop-cond-split-prob ¶ When FDO profile information is available, min-loop-cond-split-prob specifies the minimum threshold for the probability of a semi-invariant condition statement to trigger loop splitting. The value is a percentage.
iv-consider-all-candidates-bound ¶ Bound on the number of candidates for induction variables, below which all candidates are considered for each use in induction variable optimizations. If there are more candidates than this, only the most relevant ones are considered to avoid quadratic time complexity.
iv-max-considered-uses ¶ The induction variable optimizations give up on loops that contain more induction variable uses than this limit.
iv-always-prune-cand-set-bound ¶ This parameter is used by induction variable optimization. If the number of candidates in the iv set is larger than this value, always try to remove unnecessary ivs from the set when adding a new one.
avg-loop-niter ¶ Average number of iterations of a loop.
dse-max-object-size ¶ Maximum size (in bytes) of objects tracked bytewise by dead store elimination. Larger values may result in larger compilation times.
dse-max-alias-queries-per-store ¶ Maximum number of queries into the alias oracle per store. Larger values result in larger compilation times and may result in more removed dead stores.
scev-max-expr-size ¶ Bound on the size of expressions used in the scalar evolutions analyzer. Large expressions slow the analyzer.
scev-max-expr-complexity ¶ Bound on the complexity of the expressions in the scalar evolutions analyzer. Complex expressions slow the analyzer.
max-tree-if-conversion-phi-args ¶ Maximum number of arguments in a PHI supported by TREE if-conversion unless the loop is marked with a simd pragma.
vect-max-layout-candidates ¶ The maximum number of possible vector layouts (such as permutations) to consider when optimizing to-be-vectorized code.
vect-max-version-for-alignment-checks ¶ The maximum number of run-time checks that can be performed when doing loop versioning for alignment in the vectorizer.
vect-max-version-for-alias-checks ¶ The maximum number of run-time checks that can be performed when doing loop versioning for alias in the vectorizer.
vect-max-peeling-for-alignment ¶ The maximum number of loop peels to enhance access alignment for the vectorizer. A value of -1 means no limit.
max-iterations-to-track ¶ The maximum number of iterations of a loop the brute-force algorithm for analysis of the number of iterations of the loop tries to evaluate.
hot-bb-count-fraction ¶ The denominator n of the fraction 1/n of the maximal execution count of a basic block in the entire program that a basic block needs to at least have in order to be considered hot. The default is 10000, which means that a basic block is considered hot if its execution count is greater than 1/10000 of the maximal execution count. 0 means that it is never considered hot. Used in non-LTO mode.
hot-bb-count-ws-permille ¶ The number of most executed permilles, ranging from 0 to 1000, of the profiled execution of the entire program to which the execution count of a basic block must be part of in order to be considered hot. The default is 990, which means that a basic block is considered hot if its execution count contributes to the upper 990 permilles, or 99.0%, of the profiled execution of the entire program. 0 means that it is never considered hot. Used in LTO mode.
hot-bb-frequency-fraction ¶ The denominator n of the fraction 1/n of the execution frequency of the entry block of a function that a basic block of this function needs to at least have in order to be considered hot. The default is 1000, which means that a basic block is considered hot in a function if it is executed more frequently than 1/1000 of the frequency of the entry block of the function. 0 means that it is never considered hot.
unlikely-bb-count-fraction ¶ The denominator n of the fraction 1/n of the number of profiled runs of the entire program below which the execution count of a basic block must be in order for the basic block to be considered unlikely executed. The default is 20, which means that a basic block is considered unlikely executed if it is executed in fewer than 1/20, or 5%, of the runs of the program. 0 means that it is always considered unlikely executed.
max-predicted-iterations ¶ The maximum number of loop iterations we predict statically. This is useful in cases where a function contains a single loop with known bound and another loop with unknown bound. The known number of iterations is predicted correctly, while the unknown number of iterations average to roughly 10. This means that the loop without bounds appears artificially cold relative to the other one.
builtin-expect-probability ¶ Control the probability of the expression having the specified value, as a percentage.
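This parameter applies to __builtin_expect; a minimal sketch of the kind of annotation it governs (the function and its error path are hypothetical):

     /* builtin-expect-probability controls how strongly GCC trusts this
        hint, i.e. the assumed probability that the condition is false.  */
     int consume (int *p)
     {
       if (__builtin_expect (p == 0, 0))
         return -1;            /* predicted-unlikely error path */
       return *p;
     }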
builtin-string-cmp-inline-length ¶ The maximum length of a constant string for a builtin strcmp or memcmp call to be eligible for inlining.
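For example (a generic snippet), a comparison against a short string literal like this may be expanded inline instead of calling the library routine when the literal's length is within this limit:

     #include <cstring>

     /* With a short constant argument, GCC may expand the strcmp inline;
        builtin-string-cmp-inline-length bounds the constant's length.  */
     bool is_yes (const char *s)
     {
       return std::strcmp (s, "yes") == 0;
     }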
align-threshold ¶ Select the fraction of the maximal frequency of executions of a basic block in a function to align the basic block.
align-loop-iterations ¶ A loop expected to iterate at least the selected number of iterations is aligned.
tracer-dynamic-coverage ¶ tracer-dynamic-coverage-feedback ¶ This value is used to limit superblock formation once the given percentage of executed instructions is covered. This limits unnecessary code size expansion.
The tracer-dynamic-coverage-feedback parameter is used only when profile feedback is available. The real profiles (as opposed to statically estimated ones) are much less balanced, allowing the threshold to be a larger value.
tracer-max-code-growth ¶ Stop tail duplication once code growth has reached the given percentage. This is a rather artificial limit, as most of the duplicates are eliminated later in cross jumping, so it may be set to much higher values than is the desired code growth.
tracer-min-branch-ratio ¶ Stop reverse growth when the reverse probability of the best edge is less than this threshold (as a percentage).
tracer-min-branch-probability ¶ tracer-min-branch-probability-feedback ¶ Stop forward growth if the best edge has probability lower than this threshold.
Similarly to tracer-dynamic-coverage, two parameters are provided. tracer-min-branch-probability-feedback is used for compilation with profile feedback and tracer-min-branch-probability for compilation without. The value for compilation with profile feedback needs to be more conservative (higher) in order to make tracer effective.
stack-clash-protection-guard-size ¶ Specify the size of the operating system provided stack guard as 2 raised to num bytes. Higher values may reduce the number of explicit probes, but a value larger than the guard provided by the operating system leaves code vulnerable to stack clash style attacks.
stack-clash-protection-probe-interval ¶ Stack clash protection involves probing stack space as it is allocated. This parameter controls the maximum distance between probes into the stack as 2 raised to num bytes. Higher values may reduce the number of explicit probes, but a value larger than the guard provided by the operating system leaves code vulnerable to stack clash style attacks.
max-cse-path-length ¶ The maximum number of basic blocks on a path that CSE considers.
max-cse-insns ¶ The maximum number of instructions CSE processes before flushing.
ggc-min-expand ¶ GCC uses a garbage collector to manage its own memory allocation. This parameter specifies the minimum percentage by which the garbage collector’s heap should be allowed to expand between collections. Tuning this may improve compilation speed; it has no effect on code generation.
The default is 30% + 70% * (RAM/1GB) with an upper bound of 100% when RAM >= 1GB. If getrlimit is available, the notion of “RAM” is the smallest of actual RAM and RLIMIT_DATA or RLIMIT_AS. If GCC is not able to calculate RAM on a particular platform, the lower bound of 30% is used. Setting this parameter and ggc-min-heapsize to zero causes a full collection to occur at every opportunity. This is extremely slow, but can be useful for debugging.
ggc-min-heapsize ¶ Minimum size of the garbage collector’s heap before it begins bothering to collect garbage. The first collection occurs after the heap expands by ggc-min-expand% beyond ggc-min-heapsize. Again, tuning this may improve compilation speed, and has no effect on code generation.
The default is the smaller of RAM/8, RLIMIT_RSS, or a limit that tries to ensure that RLIMIT_DATA or RLIMIT_AS are not exceeded, but with a lower bound of 4096 (four megabytes) and an upper bound of 131072 (128 megabytes). If GCC is not able to calculate RAM on a particular platform, the lower bound is used. Setting this parameter very large effectively disables garbage collection. Setting this parameter and ggc-min-expand to zero causes a full collection to occur at every opportunity.
max-reload-search-insns ¶ The maximum number of instructions reload should look backward for an equivalent register. Increasing values mean more aggressive optimization, making the compilation time increase with probably slightly better performance.
max-cselib-memory-locations ¶ The maximum number of memory locations cselib should take into account. Increasing values mean more aggressive optimization, making the compilation time increase with probably slightly better performance.
max-sched-ready-insns ¶ The maximum number of instructions ready to be issued the scheduler should consider at any given time during the first scheduling pass. Increasing values mean more thorough searches, making the compilation time increase with probably little benefit.
max-sched-region-blocks ¶ The maximum number of blocks in a region to be considered for interblock scheduling.
max-pipeline-region-blocks ¶ The maximum number of blocks in a region to be considered for pipelining in the selective scheduler.
max-sched-region-insns ¶ The maximum number of insns in a region to be considered for interblock scheduling.
max-pipeline-region-insns ¶ The maximum number of insns in a region to be considered for pipelining in the selective scheduler.
min-spec-prob ¶ The minimum probability (as a percentage) of reaching a source block for interblock speculative scheduling.
max-sched-extend-regions-iters ¶ The maximum number of iterations through the CFG to extend regions. A value of 0 disables region extensions.
max-sched-insn-conflict-delay ¶ The maximum conflict delay for an insn to be considered for speculative motion.
sched-spec-prob-cutoff ¶ The minimal probability of speculation success (as a percentage), so that speculative insns are scheduled.
sched-state-edge-prob-cutoff ¶ The minimum probability an edge must have for the scheduler to save its state across it.
sched-mem-true-dep-cost ¶ Minimal distance (in CPU cycles) between store and load targeting the same memory location.
selsched-max-lookahead ¶ The maximum size of the lookahead window of selective scheduling. It is a depth of search for available instructions.
selsched-max-sched-times ¶ The maximum number of times that an instruction is scheduled during selective scheduling. This is the limit on the number of iterations through which the instruction may be pipelined.
selsched-insns-to-rename ¶ The maximum number of best instructions in the ready list that are considered for renaming in the selective scheduler.
sms-min-sc ¶ The minimum value of stage count that the swing modulo scheduler generates.
max-last-value-rtl ¶ The maximum size, measured as number of RTLs, that can be recorded in an expression in the combiner for a pseudo-register as the last known value of that register.
max-combine-insns ¶ The maximum number of instructions the RTL combiner tries to combine.
max-combine-search-insns ¶ The maximum number of instructions that the RTL combiner searches in order to find the next use of a given register definition. If this limit is reached without finding such a use, the combiner stops trying to optimize the definition.
Currently this limit only applies after certain successful combination attempts, but it could be extended to other cases in future.
integer-share-limit ¶ Small integer constants can use a shared data structure, reducing the compiler’s memory usage and increasing its speed. This sets the maximum value of a shared integer constant.
ssp-buffer-size ¶ The minimum size of buffers (i.e. arrays) that receive stack smashing protection when -fstack-protector is used.
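For instance (generic functions, not from the manual), with -fstack-protector and the default ssp-buffer-size of 8, the first function below gets a stack canary while the second does not; lowering the parameter to 4 would protect both:

     /* Protected under the default threshold: the array is >= 8 bytes.  */
     void big (void)   { char buf[16]; __builtin_memset (buf, 0, sizeof buf); }

     /* Skipped under the default threshold; protected with
        --param ssp-buffer-size=4 (or with -fstack-protector-strong).  */
     void small (void) { char buf[4];  __builtin_memset (buf, 0, sizeof buf); }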
min-size-for-stack-sharing ¶ The minimum size of variables taking part in stack slot sharing when not optimizing.
max-jump-thread-duplication-stmts ¶ Maximum number of statements allowed in a block that needs to be duplicated when threading jumps.
max-jump-thread-paths ¶ The maximum number of paths to consider when searching for jump threading opportunities. When arriving at a block, incoming edges are only considered if the number of paths to be searched so far multiplied by the number of incoming edges does not exhaust the specified maximum number of paths to consider.
max-fields-for-field-sensitive ¶ Maximum number of fields in a structure treated in a field-sensitive manner during pointer analysis.
prefetch-latency ¶ Estimate on the average number of instructions that are executed before a prefetch finishes. The distance prefetched ahead is proportional to this constant. Increasing this number may also lead to fewer streams being prefetched (see simultaneous-prefetches).
simultaneous-prefetches ¶ Maximum number of prefetches that can run at the same time.
l1-cache-line-size ¶ The size of a cache line in the L1 data cache, in bytes.
l1-cache-size ¶ The size of the L1 data cache, in kilobytes.
l2-cache-size ¶ The size of the L2 data cache, in kilobytes.
prefetch-dynamic-strides ¶ Whether the loop array prefetch pass should issue software prefetch hints for strides that are non-constant. In some cases this may be beneficial, though the fact the stride is non-constant may make it hard to predict when there is clear benefit to issuing these hints.
Set to 1 if the prefetch hints should be issued for non-constant strides. Set to 0 if prefetch hints should be issued only for strides that are known to be constant and below prefetch-minimum-stride.
prefetch-minimum-stride ¶ Minimum constant stride, in bytes, to start using prefetch hints for. If the stride is less than this threshold, prefetch hints are not issued.
This setting is useful for processors that have hardware prefetchers, in which case there may be conflicts between the hardware prefetchers and the software prefetchers. If the hardware prefetchers have a maximum stride they can handle, it should be used here to improve the use of software prefetchers.
A value of -1 means we don’t have a threshold and therefore prefetch hints can be issued for any constant stride.
This setting is only useful for strides that are known and constant.
destructive-interference-size ¶ constructive-interference-size ¶ The values for the C++17 variables std::hardware_destructive_interference_size and std::hardware_constructive_interference_size. The destructive interference size is the minimum recommended offset between two independent concurrently-accessed objects; the constructive interference size is the maximum recommended size of contiguous memory accessed together. Typically both are the size of an L1 cache line for the target, in bytes. For a generic target covering a range of L1 cache line sizes, typically the constructive interference size is the small end of the range and the destructive size is the large end.
The destructive interference size is intended to be used for layout, and thus has ABI impact. The default value is not expected to be stable, and on some targets varies with -mtune, so use of this variable in a context where ABI stability is important, such as the public interface of a library, is strongly discouraged; if it is used in that context, users can stabilize the value using this option.
The constructive interference size is less sensitive, as it is typically only used in a ‘static_assert’ to make sure that a type fits within a cache line, as in the sketch below.
See also -Winterference-size.
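A minimal C++17 sketch of the typical uses described above (the struct names are illustrative):

     #include <new>

     // Destructive size: align so two hot counters never share a cache line.
     struct alignas(std::hardware_destructive_interference_size) PaddedCounter
     {
       unsigned long value;
     };

     // Constructive size: check that co-accessed fields fit in one line.
     struct HotPair { int key; int hits; };
     static_assert (sizeof (HotPair)
                    <= std::hardware_constructive_interference_size,
                    "HotPair should fit within a single cache line");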
loop-interchange-max-num-stmts ¶ The maximum number of stmts in a loop to be interchanged.
loop-interchange-stride-ratio ¶ The minimum ratio between the strides of two loops for interchange to be profitable.
min-insn-to-prefetch-ratio ¶ The minimum ratio between the number of instructions and the number of prefetches to enable prefetching in a loop.
prefetch-min-insn-to-mem-ratio ¶ The minimum ratio between the number of instructions and the number of memory references to enable prefetching in a loop.
use-canonical-types ¶ Whether the compiler should use the “canonical” type system. Should always be 1, which uses a more efficient internal mechanism for comparing types in C++ and Objective-C++. However, if bugs in the canonical type system are causing compilation failures, set this value to 0 to disable canonical types.
switch-conversion-max-branch-ratio ¶ Switch initialization conversion refuses to create arrays that are bigger than switch-conversion-max-branch-ratio times the number of branches in the switch.
max-partial-antic-length ¶ Maximum length of the partial antic set computed during the tree partial redundancy elimination optimization (-ftree-pre) when optimizing at -O3 and above. For some sorts of source code the enhanced partial redundancy elimination optimization can run away, consuming all of the memory available on the host machine. This parameter sets a limit on the length of the sets that are computed, which prevents the runaway behavior. Setting a value of 0 for this parameter allows an unlimited set length.
rpo-vn-max-loop-depth ¶ Maximum loop depth that is value-numbered optimistically. When the limit is hit, the innermost rpo-vn-max-loop-depth loops and the outermost loop in the loop nest are value-numbered optimistically and the remaining ones are not.
sccvn-max-alias-queries-per-access ¶ Maximum number of alias-oracle queries we perform when looking for redundancies for loads and stores. If this limit is hit the search is aborted and the load or store is not considered redundant. The number of queries is algorithmically limited to the number of stores on all paths from the load to the function entry.
ira-max-loops-num ¶ IRA uses regional register allocation by default. If a function contains more loops than the number given by this parameter, only at most the given number of the most frequently-executed loops form regions for regional register allocation.
ira-max-conflict-table-size ¶ Although IRA uses a sophisticated algorithm to compress the conflict table, the table can still require excessive amounts of memory for huge functions. If the conflict table for a function could be more than the size in MB given by this parameter, the register allocator instead uses a faster, simpler, and lower-quality algorithm that does not require building a pseudo-register conflict table.
ira-loop-reserved-regs ¶ IRA can be used to evaluate more accurate register pressure in loops for decisions to move loop invariants (see -O3). The number of available registers reserved for some other purposes is given by this parameter. The default value of the parameter is the best found from numerous experiments.
ira-consider-dup-in-all-alts ¶ Make IRA consider matching constraints (duplicated operand numbers) heavily in all available alternatives for the preferred register class. If set to zero, IRA only respects the matching constraint when it is in the only available alternative with an appropriate register class. Otherwise, IRA checks all available alternatives for the preferred register class even if it has found some choice with an appropriate register class that satisfies the found qualified matching constraint.
ira-simple-lra-insn-threshold ¶ Approximate function insn number in 1K units triggering simple local RA.
lra-inheritance-ebb-probability-cutoff ¶ LRA tries to reuse values reloaded in registers in subsequent insns. This optimization is called inheritance. An EBB is used as the region in which to do this optimization. The parameter defines the minimal fall-through edge probability (as a percentage) used to add a BB to the inheritance EBB in LRA. The default value was chosen from numerous runs of SPEC2000 on x86-64.
loop-invariant-max-bbs-in-loop ¶ Loop invariant motion can be very expensive, both in compilation time and in the amount of needed compile-time memory, with very large loops. Loops with more basic blocks than this parameter won’t have loop invariant motion optimization performed on them.
loop-max-datarefs-for-datadeps ¶ Building data dependencies is expensive for very large loops. This parameter limits the number of data references in loops that are considered for data dependence analysis. These large loops are not handled by the optimizations using loop data dependencies.
max-vartrack-size ¶ Sets a maximum number of hash table slots to use during variable tracking dataflow analysis of any function. If this limit is exceeded with variable tracking at assignments enabled, analysis for that function is retried without it, after removing all debug insns from the function. If the limit is exceeded even without debug insns, var tracking analysis is completely disabled for the function. Setting the parameter to zero makes it unlimited.
max-vartrack-expr-depth ¶ Sets a maximum number of recursion levels when attempting to map variable names or debug temporaries to value expressions. This trades compilation time for more complete debug information. If this is set too low, value expressions that are available and could be represented in debug information may end up not being used; setting this higher may enable the compiler to find more complex debug expressions, but compile time and memory use may grow.
max-debug-marker-count ¶ Sets a threshold on the number of debug markers (e.g. begin stmt markers) to avoid complexity explosion at inlining or expanding to RTL. If a function has more such gimple stmts than the set limit, such stmts are dropped from the inlined copy of a function and from its RTL expansion.
min-nondebug-insn-uid ¶ Use uids starting at this parameter for nondebug insns. The range below the parameter is reserved exclusively for debug insns created by -fvar-tracking-assignments, but debug insns may get (non-overlapping) uids above it if the reserved range is exhausted.
ipa-sra-deref-prob-threshold ¶ IPA-SRA replaces a pointer that is known not to be NULL with one or more new parameters only when the probability (as a percentage, relative to function entry) of it being dereferenced is higher than this parameter.
ipa-sra-ptr-growth-factor ¶ IPA-SRA replaces a pointer to an aggregate with one or more new parameters only when their cumulative size is less than or equal to ipa-sra-ptr-growth-factor times the size of the original pointer parameter.
ipa-sra-ptrwrap-growth-factor ¶ Additional maximum allowed growth of the total size of new parameters that IPA-SRA replaces a pointer to an aggregate with, if it points to a local variable that the caller only writes to and passes as an argument to other functions.
ipa-sra-max-replacements ¶ Maximum pieces of an aggregate that IPA-SRA tracks. As a consequence, it is also the maximum number of replacements of a formal parameter.
sra-max-scalarization-size-Ospeed ¶ sra-max-scalarization-size-Osize ¶ The two Scalar Reduction of Aggregates passes (SRA and IPA-SRA) aim to replace scalar parts of aggregates with uses of independent scalar variables. These parameters control the maximum size, in storage units, of aggregates that are considered for replacement when compiling for speed (sra-max-scalarization-size-Ospeed) or size (sra-max-scalarization-size-Osize) respectively.
sra-max-propagations ¶ The maximum number of artificial accesses that Scalar Replacement of Aggregates (SRA) tracks, per one local variable, in order to facilitate copy propagation.
tm-max-aggregate-size ¶ When making copies of thread-local variables in a transaction, this parameter specifies the size in bytes after which variables are saved with the logging functions as opposed to save/restore code sequence pairs. This option only applies when using -fgnu-tm.
graphite-max-nb-scop-params ¶ To avoid exponential effects in the Graphite loop transforms, the number of parameters in a Static Control Part (SCoP) is bounded. A value of zero can be used to lift the bound. A variable whose value is unknown at compilation time and defined outside a SCoP is a parameter of the SCoP.
hardcfr-max-blocks ¶ Disable -fharden-control-flow-redundancy for functions with a larger number of blocks than the specified value. Zero removes any limit.
hardcfr-max-inline-blocks ¶ Force -fharden-control-flow-redundancy to use out-of-line checking for functions with a larger number of basic blocks than the specified value.
loop-block-tile-size ¶ Loop blocking or strip mining transforms, enabled with -floop-block or -floop-strip-mine, strip mine each loop in the loop nest by a given number of iterations. The strip length can be changed using the loop-block-tile-size parameter.
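To illustrate (an ordinary matrix kernel, not taken from the manual), this is the kind of nested loop that -floop-block may strip-mine into tiles of loop-block-tile-size iterations per dimension:

     /* Candidate loop nest for blocking/strip mining under -floop-block.  */
     void transpose (float *dst, const float *src, int n)
     {
       for (int i = 0; i < n; i++)
         for (int j = 0; j < n; j++)
           dst[j * n + i] = src[i * n + j];
     }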
ipa-jump-function-lookups ¶ Specifies the number of statements visited during jump function offset discovery.
ipa-cp-value-list-size ¶ IPA-CP attempts to track all possible values and types passed to a function’s parameter in order to propagate them and perform devirtualization. ipa-cp-value-list-size is the maximum number of values and types it stores per one formal parameter of a function.
ipa-cp-eval-threshold ¶ IPA-CP calculates its own score of cloning profitability heuristics and performs those cloning opportunities with scores that exceed ipa-cp-eval-threshold.
ipa-cp-max-recursive-depth ¶ Maximum depth of recursive cloning for self-recursive functions.
ipa-cp-min-recursive-probability ¶ Recursive cloning is performed only when the probability of the call being executed exceeds the parameter.
ipa-cp-recursive-freq-factor ¶ The number of times interprocedural copy propagation expects recursive functions to call themselves.
ipa-cp-recursion-penalty ¶ Percentage penalty that recursive functions receive when they are evaluated for cloning.
ipa-cp-single-call-penalty ¶ Percentage penalty that functions containing a single call to another function receive when they are evaluated for cloning.
ipa-cp-sweeps ¶ The number of times the interprocedural constant propagation traverses all functions to make cloning decisions.
ipa-max-agg-items ¶ IPA-CP is also capable of propagating a number of scalar values passed in an aggregate. ipa-max-agg-items controls the maximum number of such values per one parameter.
ipa-cp-loop-hint-bonus ¶ When IPA-CP determines that a cloning candidate would make the number of iterations of a loop known, it adds a bonus of ipa-cp-loop-hint-bonus to the profitability score of the candidate.
ipa-max-loop-predicates ¶ The maximum number of different predicates IPA uses to describe when loops in a function have known properties.
ipa-max-aa-steps ¶ During its analysis of function bodies, IPA-CP employs alias analysis in order to track values pointed to by function parameters. In order not to spend too much time analyzing huge functions, it gives up and considers all memory clobbered after examining ipa-max-aa-steps statements modifying memory.
ipa-max-switch-predicate-bounds ¶ Maximal number of boundary endpoints of case ranges of a switch statement. For a switch exceeding this limit, IPA-CP does not construct a cloning cost predicate, which is used to estimate cloning benefit, for the default case of the switch statement.
ipa-max-param-expr-ops ¶ IPA-CP analyzes conditional statements that reference some function parameter to estimate the benefit of cloning upon a certain constant value. But if the number of operations in a parameter expression exceeds ipa-max-param-expr-ops, the expression is treated as complicated, and is not handled by IPA analysis.
lto-partitions ¶ Specify the desired number of partitions produced during WHOPR compilation. The number of partitions should exceed the number of CPUs used for compilation.
lto-min-partition ¶ Minimum partition size for WHOPR (in estimated instructions). This prevents the expense of splitting very small programs into too many partitions.
lto-max-partition ¶ Maximum partition size for WHOPR (in estimated instructions), providing an upper bound for the size of an individual partition. Meant to be used only with balanced partitioning.
lto-partition-locality-frequency-cutoff ¶ The denominator n of the fraction 1/n of the execution frequency of the callee to be cloned for a particular caller. The special value of 0 dictates to always clone without a cut-off.
lto-partition-locality-size-cutoff ¶ Size cut-off for a callee, including inlined calls, to be cloned for a particular caller.
lto-max-locality-partition ¶ Maximal size of a locality partition for LTO (in estimated instructions). A value of 0 results in the default value being used.
lto-max-streaming-parallelism ¶ Maximal number of parallel processes used for LTO streaming.
cxx-max-namespaces-for-diagnostic-help ¶ The maximum number of namespaces to consult for suggestions when C++ name lookup fails for an identifier.
sink-frequency-threshold ¶ The maximum relative execution frequency (as a percentage) of the target block relative to a statement’s original block to allow statement sinking of a statement. Larger numbers result in more aggressive statement sinking. A small positive adjustment is applied for statements with memory operands, as those are even more profitable to sink.
max-stores-to-sink ¶ The maximum number of conditional store pairs that can be sunk. Set to 0 if either vectorization (-ftree-vectorize) or if-conversion (-ftree-loop-if-convert) is disabled.
case-values-threshold ¶ The smallest number of different values for which it is best to use a jump table instead of a tree of conditional branches. If the value is 0, use the default for the machine.
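As a rough illustration (a generic switch, nothing target-specific), a dense switch like this may be expanded as a jump table once its number of distinct case values reaches case-values-threshold; below the threshold it becomes a tree of compares and branches:

     /* Dense switch: a likely candidate for a jump table.  */
     int category (int c)
     {
       switch (c)
         {
         case 0: return 10;
         case 1: return 11;
         case 2: return 13;
         case 3: return 17;
         case 4: return 19;
         case 5: return 23;
         default: return -1;
         }
     }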
jump-table-max-growth-ratio-for-size ¶ The maximum code size growth ratio when expanding into a jump table (as a percentage). The parameter is used when optimizing for size.
jump-table-max-growth-ratio-for-speed ¶ The maximum code size growth ratio when expanding into a jump table (as a percentage). The parameter is used when optimizing for speed.
tree-reassoc-width ¶ In the tree reassociation pass, set the maximum number of instructions executed in parallel in the reassociated tree. This parameter overrides the target-dependent heuristics used by default if it has a nonzero value.
sched-pressure-algorithm ¶ Choose between the two available implementations of -fsched-pressure. Algorithm 1 is the original implementation and is the more likely to prevent instructions from being reordered. Algorithm 2 was designed to be a compromise between the relatively conservative approach taken by algorithm 1 and the rather aggressive approach taken by the default scheduler. It relies more heavily on having a regular register file and accurate register pressure classes. See haifa-sched.cc in the GCC sources for more details.
The default choice depends on the target.
max-slsr-cand-scan ¶ Set the maximum number of existing candidates that are considered when seeking a basis for a new straight-line strength reduction candidate.
asan-globals ¶ Enable buffer overflow detection for global objects. This kind of protection is enabled by default if you are using the -fsanitize=address option. To disable global objects protection use --param asan-globals=0.
asan-stack ¶ Enable buffer overflow detection for stack objects. This kind of protection is enabled by default when using -fsanitize=address. To disable stack protection use --param asan-stack=0.
asan-instrument-reads ¶ Enable buffer overflow detection for memory reads. This kind of protection is enabled by default when using -fsanitize=address. To disable memory reads protection use --param asan-instrument-reads=0.
asan-instrument-writes ¶ Enable buffer overflow detection for memory writes. This kind of protection is enabled by default when using -fsanitize=address. To disable memory writes protection use --param asan-instrument-writes=0.
asan-memintrin ¶ Enable detection for built-in functions. This kind of protection is enabled by default when using -fsanitize=address. To disable built-in functions protection use --param asan-memintrin=0.
asan-use-after-return ¶ Enable detection of use-after-return. This kind of protection is enabled by default when using the -fsanitize=address option. To disable it use --param asan-use-after-return=0.
Note: By default the check is disabled at run time. To enable it, add detect_stack_use_after_return=1 to the environment variable ASAN_OPTIONS.
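A minimal sketch of the bug class this check targets (an intentionally broken toy function); build with -fsanitize=address and run with the ASAN_OPTIONS setting above to have it reported:

     /* Classic stack-use-after-return: the returned pointer refers to a
        local whose lifetime has ended.  */
     int *dangling (void)
     {
       int local = 42;
       return &local;     /* reported when the caller later dereferences it */
     }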
asan-instrumentation-with-call-threshold ¶ If the number of memory accesses in the function being instrumented is greater than or equal to this number, use callbacks instead of inline checks. E.g. to disable inline code use --param asan-instrumentation-with-call-threshold=0.
asan-kernel-mem-intrinsic-prefix ¶ If nonzero, prefix calls to memcpy, memset and memmove with ‘__asan_’ or ‘__hwasan_’ for -fsanitize=kernel-address or -fsanitize=kernel-hwaddress, respectively.
hwasan-instrument-stack ¶ Enable hwasan instrumentation of statically-sized stack-allocated variables. This kind of instrumentation is enabled by default when using -fsanitize=hwaddress and disabled by default when using -fsanitize=kernel-hwaddress. To disable stack instrumentation use --param hwasan-instrument-stack=0, and to enable it use --param hwasan-instrument-stack=1.
hwasan-random-frame-tag ¶ When using stack instrumentation, decide tags for stack variables using a deterministic sequence beginning at a random tag for each frame. With this parameter unset, tags are chosen using the same sequence but beginning from 1. This is enabled by default for -fsanitize=hwaddress and unavailable for -fsanitize=kernel-hwaddress and -fsanitize=memtag-stack. To disable it use --param hwasan-random-frame-tag=0.
hwasan-instrument-allocas ¶ Enable hwasan instrumentation of dynamically sized stack-allocated variables. This kind of instrumentation is enabled by default when using -fsanitize=hwaddress and disabled by default when using -fsanitize=kernel-hwaddress. To disable instrumentation of such variables use --param hwasan-instrument-allocas=0, and to enable it use --param hwasan-instrument-allocas=1.
hwasan-instrument-reads ¶ Enable hwasan checks on memory reads. Instrumentation of reads is enabled by default for both -fsanitize=hwaddress and -fsanitize=kernel-hwaddress. To disable checking memory reads use --param hwasan-instrument-reads=0.
hwasan-instrument-writes ¶ Enable hwasan checks on memory writes. Instrumentation of writes is enabled by default for both -fsanitize=hwaddress and -fsanitize=kernel-hwaddress. To disable checking memory writes use --param hwasan-instrument-writes=0.
hwasan-instrument-mem-intrinsics ¶ Enable hwasan instrumentation of builtin functions. Instrumentation of these builtin functions is enabled by default for both -fsanitize=hwaddress and -fsanitize=kernel-hwaddress. To disable instrumentation of builtin functions use --param hwasan-instrument-mem-intrinsics=0.
memtag-instrument-allocas ¶ Enable hardware-assisted memory tagging of dynamically sized stack-allocated variables. This kind of code generation is enabled by default when using -fsanitize=memtag-stack.
memtag-instrument-mem-intrinsics ¶ When sanitizing using MTE instructions, include builtin functions.
use-after-scope-direct-emission-threshold ¶ If the size of a local variable in bytes is smaller than or equal to this number, directly poison (or unpoison) shadow memory instead of using run-time callbacks.
tsan-distinguish-volatile ¶ Emit special instrumentation for accesses to volatiles.
tsan-instrument-func-entry-exit ¶ Emit instrumentation calls to __tsan_func_entry() and __tsan_func_exit().
max-fsm-thread-path-insns ¶ Maximum number of instructions to copy when duplicating blocks on a finite state automaton jump thread path.
threader-debug ¶ Enables verbose dumping of the threader solver. This parameter has two special values, ‘none’ and ‘all’.
parloops-chunk-size ¶ Chunk size of the OpenMP schedule for loops parallelized by parloops.
parloops-schedule ¶ Schedule type of the OpenMP schedule for loops parallelized by parloops (static, dynamic, guided, auto, runtime).
parloops-min-per-thread ¶ The minimum number of iterations per thread of an innermost parallelized loop for which the parallelized variant is preferred over the single threaded one. Note that for a parallelized loop nest the minimum number of iterations of the outermost loop per thread is two.
max-ssa-name-query-depth ¶ Maximum depth of recursion when querying properties of SSA names in things like fold routines. One level of recursion corresponds to following a use-def chain.
max-speculative-devirt-maydefs ¶ The maximum number of may-defs we analyze when looking for a must-def specifying the dynamic type of an object that invokes a virtual call we may be able to devirtualize speculatively.
ranger-debug ¶ Specifies the type of debug output to be issued for ranges.
unroll-jam-min-percent ¶ The minimum percentage of memory references that must be optimized away for the unroll-and-jam transformation to be considered profitable.
unroll-jam-max-unroll ¶ The maximum number of times the outer loop should be unrolled by the unroll-and-jam transformation.
max-rtl-if-conversion-unpredictable-cost ¶ Maximum permissible cost for the sequence that would be generated by the RTL if-conversion pass for a branch that is considered unpredictable.
max-variable-expansions-in-unroller ¶ If -fvariable-expansion-in-unroller is used, the maximum number of times that an individual variable is expanded during loop unrolling.
partial-inlining-entry-probability ¶ Maximum probability of the entry BB of the split region (as a percentage relative to the entry BB of the function) to make partial inlining happen.
max-tracked-strlens ¶ Maximum number of strings for which the strlen optimization pass tracks string lengths.
gcse-after-reload-partial-fraction ¶ The threshold ratio for performing partial redundancy elimination after reload.
gcse-after-reload-critical-fraction ¶ The threshold ratio of critical edges execution count that permits performing redundancy elimination after reload.
max-loop-header-insns ¶ The maximum number of insns allowed in a loop header duplicated by the copy loop headers pass.
vect-epilogues-nomask ¶ If nonzero, enable loop epilogue vectorization using a smaller vector size.
vect-partial-vector-usage ¶ Controls when the loop vectorizer considers using partial vector loads and stores as an alternative to falling back to scalar code. 0 stops the vectorizer from ever using partial vector loads and stores. 1 allows partial vector loads and stores if vectorization removes the need for the code to iterate. 2 allows partial vector loads and stores in all loops. The parameter only has an effect on targets that support partial vector loads and stores.
vect-inner-loop-cost-factor ¶ The maximum factor that the loop vectorizer applies to the cost of statements in an inner loop relative to the loop being vectorized. The factor applied is the maximum of the estimated number of iterations of the inner loop and this parameter. The default value of this parameter is 50.
vect-induction-float ¶ Enable loop vectorization of floating-point inductions.
vect-scalar-cost-multiplier ¶ Apply the given multiplier percentage to scalar loop costing during vectorization. Increasing the cost multiplier makes vector loops more profitable.
vrp-block-limit ¶ Maximum number of basic blocks before value range propagation switches to a simpler algorithm that uses less memory.
vrp-cstload-limit ¶ Maximum number of steps when inferring a value range from a load from a constant aggregate.
vrp-sparse-threshold ¶ Maximum number of basic blocks before value range propagation uses a sparse bitmap cache.
vrp-switch-limit ¶ Maximum number of outgoing edges in a switch to allow it to be processed by value range propagation.
vrp-vector-threshold ¶ Maximum number of basic blocks for value range propagation to use a basic cache vector.
avoid-fma-max-bits ¶ Maximum number of bits for which we avoid creating FMAs.
fully-pipelined-fma ¶ Whether the target fully pipelines FMA instructions. If non-zero, reassociation considers the benefit of parallelizing FMA’s multiplication part and addition part, assuming FMUL and FMA use the same units that can also do FADD.
sms-loop-average-count-threshold ¶ A threshold on the average loop count considered by the swing modulo scheduler.
sms-dfa-history ¶ The number of cycles the swing modulo scheduler considers when checking conflicts using DFA.
graphite-allow-codegen-errors ¶ Whether codegen errors should be ICEs when -fchecking.
sms-max-ii-factor ¶ A factor for tuning the upper bound that the swing modulo scheduler uses for scheduling a loop.
lra-max-considered-reload-pseudos ¶ The maximum number of reload pseudos that are considered during spilling a non-reload pseudo.
lra-max-pseudos-points-log2-considered-for-preferences ¶ The maximum log2(number of reload pseudos * number of program points) threshold when preferences for other reload pseudos are still considered. Taking these preferences into account helps to improve register allocation. However, for very large functions, a large value can result in significant compilation time and memory consumption. The default value is 30.
max-pow-sqrt-depth ¶ Maximum depth of square root chains to use when synthesizing exponentiation by a real constant.
max-dse-active-local-stores ¶ Maximum number of active local stores in RTL dead store elimination.
asan-instrument-allocas ¶ Enable asan alloca/VLA protection.
max-iterations-computation-cost ¶ Bound on the cost of an expression to compute the number of iterations in the doloop optimizer.
max-isl-operations ¶ Maximum number of isl operations, 0 means unlimited.
graphite-max-arrays-per-scop ¶ Maximum number of arrays per SCoP.
max-vartrack-reverse-op-size ¶ Maximum size of the variable tracking loc list for which reverse ops should be added.
fsm-scale-path-stmts ¶ Scale factor to apply to the number of statements in a threading path crossing a loop back edge when comparing to --param=max-jump-thread-duplication-stmts.
uninit-control-dep-attempts ¶ Maximum number of nested calls to search for control dependencies during uninitialized variable analysis.
uninit-max-chain-len ¶ Maximum number of predicates and-ed for each predicate or-ed in the normalized predicate chain.
uninit-max-num-chains ¶ Maximum number of predicates or-ed in the normalized predicate chain.
uninit-max-prune-work ¶ Maximum amount of work done to prune paths where the variable is always initialized.
sched-autopref-queue-depth ¶ Hardware autoprefetcher scheduler model control flag. Number of lookahead cycles the model looks into; a value of 0 only enables the instruction sorting heuristic.
loop-versioning-max-inner-insns ¶ The maximum number of instructions that an inner loop can have before the loop versioning pass considers it too big to copy.
loop-versioning-max-outer-insns ¶ The maximum number of instructions that an outer loop can have before the loop versioning pass considers it too big to copy, discounting any instructions in inner loops that directly benefit from versioning.
ssa-name-def-chain-limit ¶ The maximum number of SSA_NAME assignments to follow in determining a property of a variable such as its value. This limits the number of iterations or recursive calls GCC performs when optimizing certain statements or when determining their validity prior to issuing diagnostics.
store-merging-max-size ¶ Maximum size of a single store merging region in bytes.
store-forwarding-max-distance ¶ Maximum instruction distance over which a small store forwarded to a larger load may stall. A value of 0 disables the cost checks for the avoid-store-forwarding pass.
hash-table-verification-limit ¶ The number of elements for which hash table verification is done for each searched element.
max-find-base-term-values ¶ Maximum number of VALUEs handled during a single find_base_term call.
analyzer-max-enodes-per-program-point ¶ The maximum number of exploded nodes per program point within the analyzer, before terminating analysis of that point.
analyzer-max-constraints ¶ The maximum number of constraints per state.
analyzer-min-snodes-for-call-summary ¶ The minimum number of supernodes within a function for the analyzer to consider summarizing its effects at call sites.
analyzer-max-enodes-for-full-dump ¶ The maximum depth of exploded nodes that should appear in a dot dump before switching to a less verbose format.
analyzer-max-recursion-depth ¶ The maximum number of times a callsite can appear in a call stack within the analyzer, before terminating analysis of a call that would recurse deeper.
analyzer-max-svalue-depth ¶ The maximum depth of a symbolic value, before approximating the value as unknown.
analyzer-max-infeasible-edges ¶ The maximum number of infeasible edges to reject before declaring a diagnostic as infeasible.
gimple-fe-computed-hot-bb-threshold ¶ The number of executions of a basic block that is considered hot. The parameter is used only in the GIMPLE FE.
analyzer-bb-explosion-factor ¶ The maximum number of “after supernode” exploded nodes within the analyzer per supernode, before terminating analysis.
analyzer-text-art-string-ellipsis-threshold ¶ The number of bytes at which to ellipsize string literals in analyzer text art diagrams.
analyzer-text-art-ideal-canvas-width ¶ The ideal width in characters of text art diagrams generated by the analyzer.
analyzer-text-art-string-ellipsis-head-len ¶ The number of literal bytes to show at the head of a string literal in text art when ellipsizing it.
analyzer-text-art-string-ellipsis-tail-len ¶ The number of literal bytes to show at the tail of a string literal in text art when ellipsizing it.
ranger-logical-depth ¶ Maximum depth of logical expression evaluation ranger looks through when evaluating outgoing edge ranges.
ranger-recompute-depth ¶ Maximum depth of instruction chains to consider for recomputation in the outgoing range calculator.
relation-block-limit ¶ Maximum number of relations the dominator tree oracle registers in a basic block during value range relational processing.
transitive-relations-work-bound ¶ Work bound when discovering transitive relations from existing relations in value range relational processing.
min-pagesize ¶ Minimum page size for warning and early break vectorization purposes.
openacc-kernels ¶ Specify the mode of OpenACC kernels constructs handling. With --param=openacc-kernels=decompose, OpenACC kernels constructs are decomposed into parts, a sequence of compute constructs, each then handled individually. This is work in progress. With --param=openacc-kernels=parloops, OpenACC kernels constructs are handled by the ‘parloops’ pass, en bloc. This is the current default.
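For reference (a generic OpenACC construct, not tied to either handling mode), this is the kind of region the parameter decides how to handle when compiling with -fopenacc:

     /* An OpenACC kernels region; openacc-kernels selects whether it is
        decomposed into compute constructs or handled en bloc by parloops.  */
     void scale (float *x, float a, int n)
     {
       #pragma acc kernels copy(x[0:n])
       for (int i = 0; i < n; i++)
         x[i] *= a;
     }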
openacc-privatization ¶ Control whether the -fopt-info-omp-note and applicable -fdump-tree-*-details options emit OpenACC privatization diagnostics. With --param=openacc-privatization=quiet, don’t diagnose. This is the current default. With --param=openacc-privatization=noisy, do diagnose.
cycle-accurate-model ¶ Specifies whether GCC should assume that the scheduling description is mostly a cycle-accurate model of the target processor the code is intended to run on, in the absence of cache misses. Nonzero means that the selected scheduling model is accurate and likely describes an in-order processor, and that scheduling should aggressively spill to try and fill any pipeline bubbles. This is the current default. Zero means the scheduling description might not be available/accurate or perhaps not applicable at all, such as for modern out-of-order processors.