Complex Numbers#
Complex numbers are numbers that can be expressed in the form a + bj, where a and b are real numbers, and j is called the imaginary unit, which satisfies the equation j^2 = -1. Complex numbers frequently occur in mathematics and engineering, especially in topics like signal processing. Traditionally many users and libraries (e.g., TorchAudio) have handled complex numbers by representing the data in float tensors with shape (..., 2) where the last dimension contains the real and imaginary values.
Tensors of complex dtypes provide a more natural user experience while working with complex numbers. Operations on complex tensors (e.g., torch.mv(), torch.matmul()) are likely to be faster and more memory efficient than operations on float tensors mimicking them. Operations involving complex numbers in PyTorch are optimized to use vectorized assembly instructions and specialized kernels (e.g., LAPACK, cuBLAS).
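To see why native complex operations are more convenient, consider mimicking a complex matrix multiply with real tensors. The following sketch (illustrative only, not part of the API) needs four real matmuls where the native version needs one kernel call:

>>> a = torch.randn(10, 10, dtype=torch.cfloat)
>>> b = torch.randn(10, 10, dtype=torch.cfloat)
>>> native = a @ b
>>> # mimicking the same product with real tensors takes four real matmuls
>>> mimic = torch.complex(a.real @ b.real - a.imag @ b.imag,
...                       a.real @ b.imag + a.imag @ b.real)
>>> torch.allclose(native, mimic, atol=1e-5)
True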
Note
Spectral operations in the torch.fft module support native complex tensors.
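For instance (a minimal sketch; the output values follow from the definition of the DFT):

>>> t = torch.tensor([1+1j, 0+0j, -1-1j, 0+0j])
>>> torch.fft.fft(t)
tensor([0.+0.j, 2.+2.j, 0.+0.j, 2.+2.j])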
Warning
Complex tensors are a beta feature and subject to change.
Creating Complex Tensors#
We support two complex dtypes: torch.cfloat and torch.cdouble.
>>> x = torch.randn(2, 2, dtype=torch.cfloat)
>>> x
tensor([[-0.4621-0.0303j, -0.2438-0.5874j],
        [ 0.7706+0.1421j,  1.2110+0.1918j]])
Note
The default dtype for complex tensors is determined by the default floating point dtype. If the default floating point dtype is torch.float64 then complex numbers are inferred to have a dtype of torch.complex128, otherwise they are assumed to have a dtype of torch.complex64.
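A quick way to check this inference (a minimal sketch):

>>> torch.tensor(1 + 1j).dtype
torch.complex64
>>> torch.set_default_dtype(torch.float64)
>>> torch.tensor(1 + 1j).dtype
torch.complex128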
All factory functions apart from torch.linspace(), torch.logspace(), and torch.arange() are supported for complex tensors.
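As an illustrative sketch (assuming torch.full() accepts a complex fill value, from which it infers a complex dtype):

>>> torch.zeros(2, dtype=torch.cdouble)
tensor([0.+0.j, 0.+0.j], dtype=torch.complex128)
>>> torch.full((2,), 1 + 2j)
tensor([1.+2.j, 1.+2.j])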
Transition from the old representation#
Users who currently worked around the lack of complex tensors with real tensors of shape (..., 2) can easily switch to using complex tensors in their code using torch.view_as_complex() and torch.view_as_real(). Note that these functions don't perform any copy and return a view of the input tensor.
>>> x = torch.randn(3, 2)
>>> x
tensor([[ 0.6125, -0.1681],
        [-0.3773,  1.3487],
        [-0.0861, -0.7981]])
>>> y = torch.view_as_complex(x)
>>> y
tensor([ 0.6125-0.1681j, -0.3773+1.3487j, -0.0861-0.7981j])
>>> torch.view_as_real(y)
tensor([[ 0.6125, -0.1681],
        [-0.3773,  1.3487],
        [-0.0861, -0.7981]])
Accessing real and imag#
The real and imaginary values of a complex tensor can be accessed using the real and imag attributes.
Note
Accessing real and imag attributes doesn't allocate any memory, and in-place updates on the real and imag tensors will update the original complex tensor. Also, the returned real and imag tensors are not contiguous.
>>> y.real
tensor([ 0.6125, -0.3773, -0.0861])
>>> y.imag
tensor([-0.1681,  1.3487, -0.7981])
>>> y.real.mul_(2)
tensor([ 1.2250, -0.7546, -0.1722])
>>> y
tensor([ 1.2250-0.1681j, -0.7546+1.3487j, -0.1722-0.7981j])
>>> y.real.stride()
(2,)
Angle and abs#
The angle and absolute values of a complex tensor can be computed using torch.angle() and torch.abs().
>>> x1 = torch.tensor([3j, 4+4j])
>>> x1.abs()
tensor([3.0000, 5.6569])
>>> x1.angle()
tensor([1.5708, 0.7854])
Linear Algebra#
Many linear algebra operations, like torch.matmul(), torch.linalg.svd(), torch.linalg.solve() etc., support complex numbers. If you'd like to request an operation we don't currently support, please search if an issue has already been filed and if not, file one.
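For example, solving a complex linear system works the same way as the real case (a minimal sketch):

>>> A = torch.randn(2, 2, dtype=torch.cfloat)
>>> b = torch.randn(2, dtype=torch.cfloat)
>>> x = torch.linalg.solve(A, b)
>>> torch.allclose(A @ x, b, atol=1e-5)  # assuming A is well-conditioned
True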
Serialization#
Complex tensors can be serialized, allowing data to be saved as complex values.
>>> torch.save(y, 'complex_tensor.pt')
>>> torch.load('complex_tensor.pt')
tensor([ 0.6125-0.1681j, -0.3773+1.3487j, -0.0861-0.7981j])
Autograd#
PyTorch supports autograd for complex tensors. The gradient computed is the Conjugate Wirtinger derivative, the negative of which is precisely the direction of steepest descent used in the Gradient Descent algorithm. Thus, all the existing optimizers work out of the box with complex parameters. For more details, check out the note Autograd for Complex Numbers.
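As a quick illustration, for the real-valued loss L(z) = |z|^2 the gradient PyTorch reports is 2z (a minimal sketch):

>>> z = torch.tensor([1.0 + 1.0j], requires_grad=True)
>>> loss = z.abs().pow(2).sum()   # real-valued loss |z|^2
>>> loss.backward()
>>> z.grad
tensor([2.+2.j])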
Optimizers#
Semantically, we define stepping through a PyTorch optimizer with complex parameters as being equivalent to stepping through the same optimizer on the torch.view_as_real() equivalent of the complex params. More concretely:
>>> params = [torch.rand(2, 3, dtype=torch.complex64) for _ in range(5)]
>>> real_params = [torch.view_as_real(p) for p in params]
>>> complex_optim = torch.optim.AdamW(params)
>>> real_optim = torch.optim.AdamW(real_params)
real_optim and complex_optim will compute the same updates on the parameters, though there may be slight numerical discrepancies between the two optimizers, similar to numerical discrepancies between foreach vs forloop optimizers and capturable vs default optimizers. For more details, see numerical accuracy.
Specifically, while you can think of our optimizer's handling of complex tensors as the same as optimizing over their p.real and p.imag pieces separately, the implementation details are not precisely that. Note that the torch.view_as_real() equivalent will convert a complex tensor to a real tensor with shape (..., 2), whereas splitting a complex tensor into two tensors is 2 tensors of size (...). This distinction has no impact on pointwise optimizers (like AdamW) but will cause slight discrepancy in optimizers that do global reductions (like LBFGS). We currently do not have optimizers that do per-Tensor reductions and thus do not yet define this behavior. Open an issue if you have a use case that requires precisely defining this behavior.
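The shape difference is easy to see directly (a minimal sketch):

>>> p = torch.rand(2, 3, dtype=torch.complex64)
>>> torch.view_as_real(p).shape
torch.Size([2, 3, 2])
>>> p.real.shape, p.imag.shape
(torch.Size([2, 3]), torch.Size([2, 3]))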
We do not fully support the following subsystems:
Quantization
JIT
Sparse Tensors
Distributed
If any of these would help your use case, please search if an issue has already been filed and if not, file one.