Machine Learning Glossary: TensorFlow
Page Summary
This glossary page provides definitions for TensorFlow-related terms.
Many terms link to the broader machine learning glossary for further information.
Definitions cover fundamental concepts, TensorFlow APIs, and Google Cloud TPU details.
Users can understand key aspects of TensorFlow, including graphs, tensors, and execution environments.
The glossary helps navigate terminology for model training, deployment, and hardware acceleration using TPUs.
This page contains TensorFlow glossary terms. For all glossary terms, see the full Machine Learning Glossary.
C
Cloud TPU
A specialized hardware accelerator designed to speed up machine learning workloads on Google Cloud.
D
Dataset API (tf.data)
A high-level TensorFlow API for reading data and transforming it into a form that a machine learning algorithm requires. A tf.data.Dataset object represents a sequence of elements, in which each element contains one or more Tensors. A tf.data.Iterator object provides access to the elements of a Dataset.
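The Dataset pattern — a sequence of elements plus chainable transformations — can be sketched with a toy, pure-Python analogue. This is illustrative only; ToyDataset and its methods are hypothetical stand-ins, not the real tf.data API:

```python
# A toy, pure-Python analogue of the tf.data Dataset pattern
# (illustrative only; not the real TensorFlow API).
class ToyDataset:
    def __init__(self, elements):
        self._elements = list(elements)

    def map(self, fn):
        # Return a new dataset with fn applied to each element.
        return ToyDataset(fn(x) for x in self._elements)

    def batch(self, size):
        # Group consecutive elements into lists of length `size`.
        els = self._elements
        return ToyDataset(els[i:i + size] for i in range(0, len(els), size))

    def __iter__(self):
        # Iterating plays the role of a tf.data.Iterator.
        return iter(self._elements)

ds = ToyDataset(range(6)).map(lambda x: x * 2).batch(3)
print(list(ds))  # [[0, 2, 4], [6, 8, 10]]
```

As in the real API, each transformation returns a new dataset, so pipelines are built by chaining.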
device
An overloaded term with the following two possible definitions:
- A category of hardware that can run a TensorFlow session, including CPUs, GPUs, and TPUs.
- When training an ML model on accelerator chips (GPUs or TPUs), the part of the system that actually manipulates tensors and embeddings. The device runs on accelerator chips. In contrast, the host typically runs on a CPU.
E
eager execution
A TensorFlow programming environment in which operations run immediately. In contrast, operations called in graph execution don't run until they are explicitly evaluated. Eager execution is an imperative interface, much like the code in most programming languages. Eager execution programs are generally far easier to debug than graph execution programs.
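The difference can be sketched in plain Python (no TensorFlow involved): eager style computes a value immediately, while graph style first builds a description of the computation and evaluates it later.

```python
# Illustrative sketch (plain Python, not TensorFlow) of eager vs. deferred
# ("graph-style") execution.

# Eager style: each operation runs immediately and returns a value.
eager_result = (2 + 3) * 4        # computed right now

# Graph style: build a description of the computation first...
graph = lambda: (2 + 3) * 4       # nothing computed yet
# ...then explicitly evaluate it later.
deferred_result = graph()         # computed only now

assert eager_result == deferred_result == 20
```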
Estimator
A deprecated TensorFlow API. Use tf.keras instead of Estimators.
F
feature engineering
A process that involves the following steps:
- Determining which features might be useful in training a model.
- Converting raw data from the dataset into efficient versions of those features.
For example, you might determine that temperature might be a useful feature. Then, you might experiment with bucketing to optimize what the model can learn from different temperature ranges.
Feature engineering is sometimes called feature extraction or featurization.
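The temperature-bucketing idea above can be sketched in a few lines of plain Python. The bucket boundaries here are hypothetical, chosen only for illustration:

```python
# A minimal bucketing sketch (plain Python; the boundary values are assumed,
# not from any real dataset): convert a raw temperature into a categorical
# bucket index a model can learn from.
BOUNDARIES = [0.0, 15.0, 30.0]  # hypothetical bucket edges, in degrees C

def bucketize(temp_c):
    """Return the index of the bucket that temp_c falls into."""
    for i, edge in enumerate(BOUNDARIES):
        if temp_c < edge:
            return i
    return len(BOUNDARIES)  # above the last edge

print([bucketize(t) for t in (-5.0, 10.0, 22.0, 35.0)])  # [0, 1, 2, 3]
```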
In TensorFlow, feature engineering often means converting raw log file entries to tf.Example protocol buffers. See also tf.Transform.
See Numerical data: How a model ingests data using feature vectors in Machine Learning Crash Course for more information.
feature spec
Describes the information required to extract feature data from the tf.Example protocol buffer. Because the tf.Example protocol buffer is just a container for data, you must specify the following:
- The data to extract (that is, the keys for the features)
- The data type (for example, float or int)
- The length (fixed or variable)
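The three pieces of information above can be sketched as a toy spec in plain Python. The dict layout, keys, and the extract helper are hypothetical illustrations, not the real tf.io feature-spec classes:

```python
# A toy feature spec (plain Python; not the real tf.io spec classes):
# for each feature key, record its dtype and whether its length is fixed.
FEATURE_SPEC = {
    "temperature": {"dtype": float, "length": 1},     # fixed-length
    "tags":        {"dtype": str,   "length": None},  # variable-length
}

def extract(example):
    """Pull only the spec'd keys out of a raw example, checking the spec."""
    out = {}
    for key, spec in FEATURE_SPEC.items():
        values = example[key]
        assert all(isinstance(v, spec["dtype"]) for v in values)
        if spec["length"] is not None:
            assert len(values) == spec["length"]
        out[key] = values
    return out

# Keys not in the spec (like "extra") are simply ignored.
parsed = extract({"temperature": [21.5], "tags": ["hot", "dry"], "extra": [1]})
print(parsed)
```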
G
graph
In TensorFlow, a computation specification. Nodes in the graph represent operations. Edges are directed and represent passing the result of an operation (a Tensor) as an operand to another operation. Use TensorBoard to visualize a graph.
graph execution
A TensorFlow programming environment in which the program first constructs a graph and then executes all or part of that graph. Graph execution is the default execution mode in TensorFlow 1.x.
Contrast with eager execution.
H
host
When training an ML model on accelerator chips (GPUs or TPUs), the part of the system that controls both of the following:
- The overall flow of the code.
- The extraction and transformation of the input pipeline.
The host typically runs on a CPU, not on an accelerator chip; the device manipulates tensors on the accelerator chips.
L
Layers API (tf.layers)
A TensorFlow API for constructing a deep neural network as a composition of layers. The Layers API lets you build different types of layers, such as:
- tf.layers.Dense for a fully-connected layer.
- tf.layers.Conv2D for a convolutional layer.
The Layers API follows the Keras layers API conventions. That is, aside from a different prefix, all functions in the Layers API have the same names and signatures as their counterparts in the Keras layers API.
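The "composition of layers" idea can be sketched in plain Python: each layer is a callable from inputs to outputs, and a model chains them. The dense and sequential helpers below are hypothetical toys, not the real tf.layers or Keras API:

```python
# A toy illustration (plain Python, not the real tf.layers API) of building a
# network as a composition of layers, each a callable from inputs to outputs.
def dense(weights, bias):
    # Returns a fully-connected layer: y[j] = sum_i x[i]*weights[i][j] + bias[j]
    def layer(x):
        return [sum(xi * w for xi, w in zip(x, col)) + b
                for col, b in zip(zip(*weights), bias)]
    return layer

def sequential(*layers):
    # Compose layers so the output of one feeds the next.
    def model(x):
        for layer in layers:
            x = layer(x)
        return x
    return model

model = sequential(dense([[1.0], [1.0]], [0.0]),  # 2 inputs -> 1 output
                   dense([[2.0]], [1.0]))         # 1 input  -> 1 output
print(model([3.0, 4.0]))  # [(3.0 + 4.0) * 2.0 + 1.0] = [15.0]
```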
M
mesh
In ML parallel programming, a term associated with assigning the data and model to TPU chips, and defining how these values will be sharded or replicated.
Mesh is an overloaded term that can mean either of the following:
- A physical layout of TPU chips.
- An abstract logical construct for mapping the data and model to the TPU chips.
In either case, a mesh is specified as a shape.
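The "mesh as a shape" idea can be sketched in plain Python: a mesh shape determines how many chips there are, and a batch can be sharded across them. MESH_SHAPE and shard_batch are hypothetical; real meshes are defined with libraries such as JAX or TensorFlow DTensor:

```python
# A toy sketch of a logical mesh (plain Python; the names and the 2x2 shape
# are assumptions for illustration, not a real mesh API).
MESH_SHAPE = (2, 2)  # a hypothetical 2x2 mesh of 4 chips

def shard_batch(batch, num_shards):
    """Split a batch evenly across chips along the data axis."""
    size = len(batch) // num_shards
    return [batch[i * size:(i + 1) * size] for i in range(num_shards)]

chips = MESH_SHAPE[0] * MESH_SHAPE[1]
print(shard_batch(list(range(8)), chips))  # [[0, 1], [2, 3], [4, 5], [6, 7]]
```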
metric
A statistic that you care about.
An objective is a metric that a machine learning system tries to optimize.
N
node (TensorFlow graph)
An operation in a TensorFlow graph.
O
operation (op)
In TensorFlow, any procedure that creates, manipulates, or destroys a Tensor. For example, a matrix multiply is an operation that takes two Tensors as input and generates one Tensor as output.
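The matrix-multiply example can be sketched in plain Python, with nested lists standing in for Tensors: the op takes two "tensors" as input and produces one as output.

```python
# Sketch of the matrix-multiply op (plain Python lists in place of Tensors).
def matmul(a, b):
    # a is m x n, b is n x p; the result is m x p.
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

print(matmul([[1, 2]], [[3], [4]]))  # 1x2 times 2x1 -> [[11]]
```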
P
Parameter Server (PS)
A job that keeps track of a model's parameters in a distributed setting.
Q
queue
A TensorFlow Operation that implements a queue data structure. Typically used in I/O.
R
rank (Tensor)
The number of dimensions in a Tensor. For example, a scalar has rank 0, a vector has rank 1, and a matrix has rank 2.
Not to be confused with rank (ordinality).
root directory
The directory you specify for hosting subdirectories of the TensorFlow checkpoint and events files of multiple models.
S
SavedModel
The recommended format for saving and recovering TensorFlow models. SavedModel is a language-neutral, recoverable serialization format, which enables higher-level systems and tools to produce, consume, and transform TensorFlow models.
See the Saving and Restoring section of the TensorFlow Programmer's Guide for complete details.
Saver
A TensorFlow object responsible for saving model checkpoints.
shard
A logical division of the training set or the model. Typically, some process creates shards by dividing the examples or parameters into (usually) equal-sized chunks. Each shard is then assigned to a different machine.
Sharding a model is called model parallelism; sharding data is called data parallelism.
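The data-parallelism case can be sketched in plain Python: examples are divided into equal-sized chunks, each assigned to a machine. The shard helper and machine names are hypothetical illustrations:

```python
# Sketch of data parallelism (plain Python; the machine names are assumed
# labels, not a real cluster API): divide examples into equal-sized shards.
def shard(examples, num_machines):
    size = len(examples) // num_machines
    return {f"machine_{m}": examples[m * size:(m + 1) * size]
            for m in range(num_machines)}

print(shard(list(range(6)), 3))
# {'machine_0': [0, 1], 'machine_1': [2, 3], 'machine_2': [4, 5]}
```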
summary
In TensorFlow, a value or set of values calculated at a particular step, usually used for tracking model metrics during training.
T
Tensor
The primary data structure in TensorFlow programs. Tensors are N-dimensional(where N could be very large) data structures, most commonly scalars, vectors,or matrixes. The elements of a Tensor can hold integer, floating-point, orstring values.
TensorBoard
The dashboard that displays the summaries saved during the execution of one ormore TensorFlow programs.
TensorFlow
A large-scale, distributed, machine learning platform. The term also refers tothe base API layer in the TensorFlow stack, which supports general computationon dataflow graphs.
Although TensorFlow is primarily used for machine learning, you may also use TensorFlow for non-ML tasks that require numerical computation using dataflow graphs.
TensorFlow Playground
A program that visualizes how different hyperparameters influence model (primarily neural network) training. Go to http://playground.tensorflow.org to experiment with TensorFlow Playground.
TensorFlow Serving
A platform to deploy trained models in production.
Tensor Processing Unit (TPU)
An application-specific integrated circuit (ASIC) that optimizes the performance of machine learning workloads. These ASICs are deployed as multiple TPU chips on a TPU device.
Tensor rank
See rank (Tensor).
Tensor shape
The number of elements a Tensor contains in various dimensions. For example, a [5, 10] Tensor has a shape of 5 in one dimension and 10 in another.
Tensor size
The total number of scalars a Tensor contains. For example, a [5, 10] Tensor has a size of 50.
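The [5, 10] example above can be checked directly, with a nested Python list standing in for the Tensor:

```python
# Shape and size of a [5, 10] "Tensor" sketched with nested lists.
t = [[0] * 10 for _ in range(5)]   # 5 rows of 10 elements each
shape = (len(t), len(t[0]))        # dimensions -> (5, 10)
size = shape[0] * shape[1]         # total scalars -> 50
print(shape, size)  # (5, 10) 50
```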
tf.Example
A standard protocol buffer for describing input data for machine learning model training or inference.
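Conceptually, a tf.Example maps feature names to typed lists of values. The dict below is a rough sketch of that structure only; the real tf.Example is a protocol buffer with typed feature lists, and the field names here are invented for illustration:

```python
# A rough, dict-based sketch of what a tf.Example carries (illustrative only;
# the real thing is a protocol buffer, and these feature names are made up).
example = {
    "features": {
        "temperature": [21.5],    # a float list
        "city": [b"zurich"],      # a bytes list
        "sensor_ids": [3, 7, 9],  # an int64 list
    }
}
print(sorted(example["features"]))  # ['city', 'sensor_ids', 'temperature']
```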
tf.keras
An implementation of Keras integrated into TensorFlow.
TPU
Abbreviation for Tensor Processing Unit.
TPU chip
A programmable linear algebra accelerator with on-chip high bandwidth memory that is optimized for machine learning workloads. Multiple TPU chips are deployed on a TPU device.
TPU device
A printed circuit board (PCB) with multiple TPU chips, high bandwidth network interfaces, and system cooling hardware.
TPU node
A TPU resource on Google Cloud with a specific TPU type. The TPU node connects to your VPC Network from a peer VPC network. TPU nodes are a resource defined in the Cloud TPU API.
TPU Pod
A specific configuration of TPU devices in a Google data center. All of the devices in a TPU Pod are connected to one another over a dedicated high-speed network. A TPU Pod is the largest configuration of TPU devices available for a specific TPU version.
TPU resource
A TPU entity on Google Cloud that you create, manage, or consume. For example, TPU nodes and TPU types are TPU resources.
TPU slice
A TPU slice is a fractional portion of the TPU devices in a TPU Pod. All of the devices in a TPU slice are connected to one another over a dedicated high-speed network.
TPU type
A configuration of one or more TPU devices with a specific TPU hardware version. You select a TPU type when you create a TPU node on Google Cloud. For example, a v2-8 TPU type is a single TPU v2 device with 8 cores. A v3-2048 TPU type has 256 networked TPU v3 devices and a total of 2048 cores. TPU types are a resource defined in the Cloud TPU API.
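The arithmetic in the naming scheme above checks out directly: the suffix is the total core count, which is devices times cores per device (8 cores per device in both examples from the entry).

```python
# Sanity-checking the TPU-type arithmetic from the entry above
# (8 cores per device is what both of its examples imply).
CORES_PER_DEVICE = 8
assert 1 * CORES_PER_DEVICE == 8       # v2-8: one device, 8 cores total
assert 256 * CORES_PER_DEVICE == 2048  # v3-2048: 256 devices, 2048 cores
print("ok")
```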
TPU worker
A process that runs on a host machine and executes machine learning programs on TPU devices.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-12-16 UTC.