Struct Node

Nested Relationships

Nested Types

  • Struct Node::undefined_input

Inheritance Relationships

Base Type

  • public std::enable_shared_from_this<Node>

Derived Types

  • Struct torch::autograd::CppNode<T>
  • Struct torch::autograd::TraceableFunction

Struct Documentation

struct Node : public std::enable_shared_from_this<Node>

Subclassed by torch::autograd::CppNode<T>, torch::autograd::TraceableFunction

Public Functions

inline explicit Node(uint64_t sequence_nr, edge_list&& next_edges = edge_list())

Construct a new Node with the given next_edges.

inline explicit Node(edge_list&& next_edges = edge_list())
Node(const Node& other) = delete

Nodes are neither copyable nor moveable.

Node(Node&& other) = delete
Node& operator=(const Node& other) = delete
Node& operator=(Node&& other) = delete
virtual ~Node() = default
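
Since a Node is neither copyable nor moveable and inherits from std::enable_shared_from_this, instances live on the heap behind std::shared_ptr. A minimal sketch of the ownership rules, assuming a hypothetical concrete subclass MyBackward (Node itself is abstract because apply() is pure virtual):

    // MyBackward is a hypothetical subclass that forwards to Node(edge_list&&).
    auto node = std::make_shared<MyBackward>(edge_list{});
    std::shared_ptr<Node> alias = node->getptr(); // safe shared_from_this
    // MyBackward copy = *node;              // ill-formed: copy is deleted
    // MyBackward moved = std::move(*node);  // ill-formed: move is deleted
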
inline std::shared_ptr<Node> getptr()
inline variable_list operator()(variable_list&& inputs)

Evaluates the function on the given inputs and returns the result of the function call.
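
As an illustration, a hedged sketch of invoking a Node directly through a tensor’s grad_fn (the call runs the registered pre-hooks, then apply(), then the post-hooks):

    #include <torch/torch.h>
    using torch::autograd::variable_list;

    auto x = torch::randn({2, 2}, torch::requires_grad());
    auto y = (x * x).sum();
    auto node = y.grad_fn(); // std::shared_ptr<torch::autograd::Node>
    variable_list grads = (*node)({torch::ones_like(y)}); // gradients for next edges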

inline uint32_t add_input_metadata(const at::TensorOptions& options, c10::SymIntArrayRef shape, bool is_tensor_subclass, bool is_nested, std::optional<at::ScalarType> grad_dtype) noexcept

Adds the type and shape metadata for a new input.

Returns the index of the new input.

inline uint32_t add_input_metadata(const at::Tensor& t) noexcept
inline uint32_t add_input_metadata(undefined_input u) noexcept

Adds a placeholder for an input that will not be used.
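
A hedged sketch of how input metadata is registered when a node is wired into the graph; in practice this bookkeeping is done by the engine’s history-setting helpers rather than user code, and MyBackward and forward_output are hypothetical:

    auto node = std::make_shared<MyBackward>();
    uint32_t idx0 = node->add_input_metadata(forward_output);          // from an at::Tensor
    uint32_t idx1 = node->add_input_metadata(Node::undefined_input{}); // unused slot
    // idx0 == 0, idx1 == 1: indices are assigned in registration order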

inline uint32_t num_inputs() const noexcept
inline const InputMetadata& input_metadata(size_t index) const
inline InputMetadata& mutable_input_metadata(size_t index)
inline std::optional<c10::Stream> stream()

Note: Function Streams. A function’s stream (for a given device type) is the stream of the first element of its input buffer on a device of that type.

If all elements are on the same device they MUST share a stream. If elements are on different devices (across multiple GPUs, for example) they may have different streams.

inline at::Device device()
inline void clear_input_metadata()
inline void update_topological_nr(const Edge& edge)
inline void set_next_edge(size_t index, Edge edge)
inline void add_next_edge(Edge edge)
inline void set_next_edges(edge_list&& next_edges)
inline const Edge& next_edge(size_t index) const noexcept
inline const edge_list& next_edges() const noexcept
inline edge_list& next_edges() noexcept
inline uint32_t num_outputs() const noexcept
inline uint64_t sequence_nr() const noexcept

NOTE [ Sequence Number ]

The sequence_nr has two main usages in autograd:

1) It helps determine the node’s execution priority in the engine. All else being equal, nodes with higher priority numbers are executed first. Thus, nodes corresponding to ops executed later are the first to be executed in the backward pass. One caveat is that we prioritize AccumulateGrad nodes by explicitly setting their sequence_nr to UINT64_MAX.

2) The sequence_nr of this Node is paired with the thread_id it was created in as a unique identifier used by the profiler to annotate recorded events. This helps users (and possibly programs) interpreting the profiler’s output correlate backward nodes with their forward ops. Both sequence_nr and thread_id are needed to identify a node because sequence_nr is thread-local, i.e., it starts counting up from zero in each new thread.
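
A short sketch of observing sequence numbers on freshly created nodes (assumes <torch/torch.h> and <iostream>; the grad_fn class names are illustrative):

    auto a = torch::ones({1}, torch::requires_grad());
    auto b = a * 2; // this grad_fn is created first
    auto c = b + 1; // this grad_fn is created later, so it has a higher sequence_nr
    std::cout << b.grad_fn()->name() << " seq=" << b.grad_fn()->sequence_nr() << '\n';
    std::cout << c.grad_fn()->name() << " seq=" << c.grad_fn()->sequence_nr() << '\n';
    // All else being equal, c's grad_fn runs before b's in the backward pass.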

inline void set_sequence_nr(uint64_t sequence_nr)
inline uint64_t topological_nr() const noexcept
void assign_parent()
inline uint64_t thread_id() const noexcept

Id of the thread that created this Node.

virtual std::string name() const

Returns the name of the dynamic type of the function, for debugging.

inline bool should_compute_output(size_t output_edge_index) const

The difference between the functions should_compute_output and task_should_compute_output:

  • should_compute_output should only be used during graph construction and takes into account only requires_grad information

  • task_should_compute_output should only be called during the backward pass (unless called directly through grad_fn) and also takes into account the current graph task. Specifically, the autograd engine trims unnecessary edges when inputs are specified, and during backward untrimmed nodes left on the graph can/should check task_should_compute_output to see if any outgoing edges have been trimmed by the engine. If that is the case, gradient computation with respect to those edges can be omitted (see the sketch below).

Returns true if the particular output edge is active, and that particular output of this function should be computed.
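
The typical pattern inside a derived node’s apply, sketched with a hypothetical MyBackward (this mirrors the structure of PyTorch’s generated backward functions):

    variable_list MyBackward::apply(variable_list&& grads) {
      variable_list grad_inputs(num_outputs()); // one slot per next edge
      if (task_should_compute_output(0)) {
        // Only materialize this gradient if the engine did not trim edge 0.
        grad_inputs[0] = grads[0] * 2; // hypothetical local gradient
      }
      return grad_inputs;
    }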

inline bool should_compute_output(std::initializer_list<IndexRange> idxs) const

Returns true if any of the output edges in any of the ranges are active.

inline bool task_should_compute_output(size_t output_edge_index) const

Same as the above should_compute_output function, but will also check whether this edge is needed within the current graph task.

inline bool task_should_compute_output(std::initializer_list<IndexRange> idxs) const

Returns true if any of the output edges in any of the ranges are active and should be computed in the current graph task.

inline PyObject* pyobj() const noexcept

Returns the PyObject stored for this Node (for Python interaction).

inline void set_pyobj(PyObject* pyobj) noexcept

Sets the PyObject stored for this Node (for Python interaction).

AnomalyMetadata* metadata() noexcept

Returns the anomaly metadata stored for this Node.

If none exists, creates a new empty one.

inline uintptr_t add_post_hook(std::unique_ptr<FunctionPostHook>&& post_hook)
inline const std::vector<std::unique_ptr<FunctionPostHook>>& post_hooks() const noexcept
inline bool del_post_hook(const uintptr_t& key)
inline std::vector<std::unique_ptr<FunctionPostHook>>& post_hooks() noexcept
inline void add_pre_hook(std::unique_ptr<FunctionPreHook>&& pre_hook)
inline void add_tensor_pre_hook(std::unique_ptr<FunctionPreHook>&& pre_hook)
inline void add_retains_grad_hook(std::unique_ptr<FunctionPreHook>&& pre_hook, size_t output_idx)
inline std::unique_ptr<FunctionPreHook> pop_retains_grad_hook(size_t output_idx)
inline const std::vector<std::unique_ptr<FunctionPreHook>>& pre_hooks() const noexcept
inline std::vector<std::unique_ptr<FunctionPreHook>>& pre_hooks() noexcept
inline virtual std::vector<std::unique_ptr<FunctionPreHook>>& tensor_pre_hooks() noexcept
inline virtual std::unique_ptr<PostAccumulateGradHook>& tensor_post_acc_grad_hooks() const noexcept
inline std::unordered_map<size_t, std::unique_ptr<FunctionPreHook>>& retains_grad_hooks() noexcept
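
A hedged sketch of registering and removing a post-hook. It assumes FunctionPostHook’s call operator receives the node’s gradient inputs and gradient outputs and returns the (possibly modified) gradient inputs; LoggingPostHook and node are hypothetical:

    struct LoggingPostHook : torch::autograd::FunctionPostHook {
      variable_list operator()(
          const variable_list& grad_inputs,
          const variable_list& grad_outputs) override {
        std::cout << "node produced " << grad_inputs.size() << " grad inputs\n";
        return grad_inputs; // pass gradients through unchanged
      }
    };

    uintptr_t key = node->add_post_hook(std::make_unique<LoggingPostHook>());
    // ... later, detach the hook using the key returned at registration:
    bool removed = node->del_post_hook(key);
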
inline virtual void release_variables()

Releases saved variables if the operation won’t be reused.
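
A sketch of a typical override in a subclass that owns saved state (self_ is a hypothetical SavedVariable member; resetting saved data is how generated nodes drop their tensors once the graph will not be reused):

    void MyBackward::release_variables() {
      // Free the saved tensor, e.g. after backward() without retain_graph=true.
      self_.reset_data();
    }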

inline virtual void will_release_variables()

Called before an apply if release_variables() is going to be called.

Allows larger ops like InterpreterAutogradFunction to incrementally release variables as they run.

inline virtual bool is_traceable()

Returns true if this function is traceable.

An op is traceable if all operations happening within apply() are performed on autograd Variables (i.e., apply mostly instantiates and applies other functions).

inline virtual bool passes_state_transparently()

A Node is said to pass state transparently to backward if the state consists only of (Saved)Variables and only non-variable objects that parameterize the operation in some way that defines the graph structure, AND the backward function is traceable.

In particular, parametrization MUST NOT depend on the data of any Variable.

TODO: it might be possible to handle cases where backward is non-traceable but state passing could be considered transparent. This will probably depend on saved_variable_list being mutable.

NOTE: this value matters only if is_traceable() returns false.

inline virtual void compiled_args(CompiledNodeArgs& args) const
inline virtual variable_list apply_with_saved(const variable_list& inputs, SwapSavedVariables& saved)
inline virtual bool is_aot_backward() const

Protected Functions

virtual variable_list apply(variable_list&& inputs) = 0

Performs the Node’s actual operation.
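
A minimal sketch of a complete concrete subclass, assuming it is the backward of a hypothetical y = 2 * x forward op with a single next edge:

    struct DoubleBackward : public torch::autograd::Node {
      std::string name() const override { return "DoubleBackward"; }

     protected:
      variable_list apply(variable_list&& grads) override {
        // d(2x)/dx = 2: scale the incoming gradient.
        return {grads[0] * 2};
      }
    };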

variable_list traced_apply(variable_list inputs)

Calls apply(), but instruments it with tracing machinery.

Protected Attributes

uint64_t sequence_nr_
uint64_t topological_nr_ = 0
mutable bool has_parent_ = false
uint64_t thread_id_ = 0
std::mutex mutex_
edge_list next_edges_
PyObject* pyobj_ = nullptr
std::unique_ptr<AnomalyMetadata> anomaly_metadata_ = nullptr
std::vector<std::unique_ptr<FunctionPreHook>> pre_hooks_
std::vector<std::unique_ptr<FunctionPreHook>> tensor_pre_hooks_
std::unordered_map<size_t, std::unique_ptr<FunctionPreHook>> retains_grad_hooks_
std::vector<std::unique_ptr<FunctionPostHook>> post_hooks_
at::SmallVector<InputMetadata, 2> input_metadata_
struct undefined_input