| PyTorch | |
|---|---|
| Developer(s) | Meta AI |
| Initial release | September 2016[1] |
| Repository | github.com/pytorch/pytorch |
| Platform | IA-32, x86-64, ARM64 |
| Available in | English |
| Type | Library for machine learning and deep learning |
| License | BSD-3-Clause[3] |
| Website | pytorch.org |
PyTorch is a machine learning library based on the Torch library,[4][5][6] used for applications such as computer vision and natural language processing,[7] originally developed by Meta AI and now part of the Linux Foundation umbrella.[8][9][10][11] It is one of the most popular deep learning frameworks, alongside others such as TensorFlow,[12] offering free and open-source software released under the modified BSD license. Although the Python interface is more polished and the primary focus of development, PyTorch also has a C++ interface.[13]
A number of pieces of deep learning software are built on top of PyTorch, including Tesla Autopilot,[14] Uber's Pyro,[15] Hugging Face's Transformers,[16][17] and Catalyst.[18][19]
PyTorch provides two high-level features:[20]

- Tensor computing (like NumPy) with strong acceleration via graphics processing units (GPUs)
- Deep neural networks built on a tape-based automatic differentiation system (a sketch of this mechanism follows the list)
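The second feature, tape-based automatic differentiation, records operations as they execute and replays them backwards to compute gradients. A minimal sketch using the public autograd API (the input values here are illustrative):

```python
import torch

# requires_grad=True asks PyTorch to record operations on x onto the "tape"
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()  # y = x[0]**2 + x[1]**2

y.backward()        # replay the tape backwards to compute dy/dx
print(x.grad)       # tensor([4., 6.]), i.e. 2 * x
```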
Meta (formerly known as Facebook) operates both PyTorch and Convolutional Architecture for Fast Feature Embedding (Caffe2), but models defined by the two frameworks were mutually incompatible. The Open Neural Network Exchange (ONNX) project was created by Meta and Microsoft in September 2017 for converting models between frameworks. Caffe2 was merged into PyTorch at the end of March 2018.[21] In September 2022, Meta announced that PyTorch would be governed by the independent PyTorch Foundation, a newly created subsidiary of the Linux Foundation.[22]
PyTorch 2.0 was released on 15 March 2023, introducing TorchDynamo, a Python-level compiler that makes code run up to 2x faster, along with significant improvements in training and inference performance across major cloud platforms.[23][24]
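The 2.0 speedups are exposed through the `torch.compile` entry point, which wraps an existing model without changing its semantics. A minimal sketch, where the model and input shapes are illustrative placeholders:

```python
import torch
from torch import nn

model = nn.Linear(8, 2)                # any nn.Module or plain function can be compiled
compiled_model = torch.compile(model)  # TorchDynamo captures and optimizes the Python code

x = torch.randn(4, 8)
print(compiled_model(x).shape)         # torch.Size([4, 2]); same results, potentially faster
```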
PyTorch defines a class called Tensor (`torch.Tensor`) to store and operate on homogeneous multidimensional rectangular arrays of numbers. PyTorch Tensors are similar to NumPy arrays, but can also be operated on a CUDA-capable NVIDIA GPU. PyTorch has also been developing support for other GPU platforms, for example, AMD's ROCm[25] and Apple's Metal Framework.[26]
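A short sketch of selecting among these backends at run time; the availability checks are part of the public API, while the fallback order is an illustrative choice:

```python
import torch

# Prefer an NVIDIA GPU (ROCm builds also report themselves through the
# "cuda" device string), then an Apple-silicon GPU via Metal, then the CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

a = torch.randn(2, 3, device=device)  # allocate directly on the chosen device
b = torch.ones(2, 3).to(device)       # or move an existing tensor there
print((a + b).device)
```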
PyTorch supports various sub-types of Tensors.[27]
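For example, the element type (`dtype`) distinguishes several of these sub-types; the values below are illustrative:

```python
import torch

f = torch.tensor([1.0, 2.0])                # floating point defaults to torch.float32
i = torch.tensor([1, 2])                    # integers default to torch.int64
b = torch.tensor([True, False])             # torch.bool
d = torch.zeros(2, 2, dtype=torch.float64)  # a dtype can also be requested explicitly

print(f.dtype, i.dtype, b.dtype, d.dtype)
# torch.float32 torch.int64 torch.bool torch.float64
```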
Note that the term "tensor" here does not carry the same meaning as tensor in mathematics or physics. The meaning of the word in machine learning is only superficially related to its original meaning as a certain kind of object in linear algebra. Tensors in PyTorch are simply multi-dimensional arrays.
PyTorch defines a module called nn (`torch.nn`) to describe neural networks and to support training. This module offers a comprehensive collection of building blocks for neural networks, including various layers and activation functions, enabling the construction of complex models. Networks are built by subclassing `torch.nn.Module` and defining the sequence of operations in the `forward()` method.
The following program shows the low-level functionality of the library with a simple example.
```python
import torch

dtype = torch.float
device = torch.device("cpu")  # Execute all calculations on the CPU
# device = torch.device("cuda:0")  # Executes all calculations on the GPU

# Create a tensor and fill it with random numbers
a = torch.randn(2, 3, device=device, dtype=dtype)
print(a)
# Output: tensor([[-1.1884,  0.8498, -1.7129],
#                 [-0.8816,  0.1944,  0.5847]])

b = torch.randn(2, 3, device=device, dtype=dtype)
print(b)
# Output: tensor([[ 0.7178, -0.8453, -1.3403],
#                 [ 1.3262,  1.1512, -1.7070]])

print(a * b)  # Element-wise multiplication
# Output: tensor([[-0.8530, -0.7183,  2.2958],
#                 [-1.1692,  0.2238, -0.9981]])

print(a.sum())  # Sum of all elements
# Output: tensor(-2.1540)

print(a[1, 2])  # Element in the third column of the second row (zero-based)
# Output: tensor(0.5847)

print(a.max())  # Maximum value in the tensor
# Output: tensor(0.8498)
```
The following code block defines a neural network with linear layers using the `nn` module.
```python
from torch import nn  # Import the nn sub-module from PyTorch

class NeuralNetwork(nn.Module):  # Neural networks are defined as classes
    def __init__(self):  # Layers and variables are defined in the __init__ method
        super().__init__()  # Must be in every network
        self.flatten = nn.Flatten()  # Construct a flattening layer
        self.linear_relu_stack = nn.Sequential(  # Construct a stack of layers
            nn.Linear(28 * 28, 512),  # Linear layers have an input and output shape
            nn.ReLU(),  # ReLU is one of many activation functions provided by nn
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):  # This function defines the forward pass
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits
```
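The class above can then be instantiated and trained like any other `nn.Module`. The following sketch runs a forward pass and one optimization step on randomly generated stand-in data; the batch size, learning rate, and loss function are illustrative choices, not part of the original example:

```python
import torch
from torch import nn

model = NeuralNetwork()               # the class defined above
x = torch.randn(64, 28, 28)           # a hypothetical batch of 64 MNIST-sized images
labels = torch.randint(0, 10, (64,))  # hypothetical class labels

logits = model(x)                     # the forward pass; invokes forward() internally
print(logits.shape)                   # torch.Size([64, 10])

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

loss = loss_fn(logits, labels)        # compare predictions against the labels
loss.backward()                       # compute gradients via autograd
optimizer.step()                      # update the weights
optimizer.zero_grad()                 # clear gradients before the next step
```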
FAIR, the research division that developed PyTorch, optimized the framework for achieving state-of-the-art results in research, largely without regard to resource constraints. In practice, however, deployment targets such as smartphones and consumer computers offer far more limited computational capability.