Windows FAQ#

Created On: Apr 23, 2018 | Last Updated On: May 20, 2025

Building from source#

Include optional components#

There are two supported optional components for Windows PyTorch: MKL and MAGMA. Here are the steps to build with them.

```bat
REM Make sure you have 7z and curl installed.

REM Download MKL files
curl https://s3.amazonaws.com/ossci-windows/mkl_2020.2.254.7z -k -O
7z x -aoa mkl_2020.2.254.7z -omkl

REM Download MAGMA files
REM versions available:
REM 2.5.4 (CUDA 10.1 10.2 11.0 11.1) x (Debug Release)
REM 2.5.3 (CUDA 10.1 10.2 11.0) x (Debug Release)
REM 2.5.2 (CUDA 9.2 10.0 10.1 10.2) x (Debug Release)
REM 2.5.1 (CUDA 9.2 10.0 10.1 10.2) x (Debug Release)
set "CUDA_PREFIX=cuda102"
set "CONFIG=release"
set "HOST=https://s3.amazonaws.com/ossci-windows"
curl -k "%HOST%/magma_2.5.4_%CUDA_PREFIX%_%CONFIG%.7z" -o magma.7z
7z x -aoa magma.7z -omagma

REM Set essential environment variables
set "CMAKE_INCLUDE_PATH=%cd%\mkl\include"
set "LIB=%cd%\mkl\lib;%LIB%"
set "MAGMA_HOME=%cd%\magma"
```

Speeding up the CUDA build for Windows#

Visual Studio doesn’t currently support parallel custom tasks. As an alternative, we can use Ninja to parallelize the CUDA build tasks. It can be enabled with only a few lines of code.

```bat
REM Let's install ninja first.
pip install ninja

REM Set it as the cmake generator
set CMAKE_GENERATOR=Ninja
```

One key install script#

You can take a look at this set of scripts. It will lead the way for you.

Extension#

CFFI Extension#

The support for CFFI Extension is very experimental. You must specify additional libraries in the Extension object to make it build on Windows.

```python
ffi = create_extension(
    '_ext.my_lib',
    headers=headers,
    sources=sources,
    define_macros=defines,
    relative_to=__file__,
    with_cuda=with_cuda,
    extra_compile_args=["-std=c99"],
    libraries=['ATen', '_C'],  # Append cuda libraries when necessary, like cudart
)
```

Cpp Extension#

This type of extension has better support compared with the previous one. However, it still needs some manual configuration. First, you should open the x86_x64 Cross Tools Command Prompt for VS 2017. Then, you can start your compiling process.
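As a minimal sketch of what such a build might look like (the module name `my_cpp_ext` and source file `my_ext.cpp` are placeholders, not part of this FAQ), a `setup.py` using `torch.utils.cpp_extension` could be:

```python
# setup.py -- a minimal sketch; "my_cpp_ext" and "my_ext.cpp" are
# hypothetical placeholder names.
from setuptools import setup
from torch.utils.cpp_extension import CppExtension, BuildExtension

setup(
    name='my_cpp_ext',
    ext_modules=[
        # CppExtension fills in the PyTorch include paths and libraries.
        CppExtension('my_cpp_ext', ['my_ext.cpp']),
    ],
    # BuildExtension picks the right compiler flags for each platform,
    # including MSVC on Windows.
    cmdclass={'build_ext': BuildExtension},
)
```

Run `python setup.py install` from the x86_x64 Cross Tools Command Prompt mentioned above so that the MSVC toolchain is on the PATH.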

Installation#

Package not found in win-32 channel.#

```
Solving environment: failed

PackagesNotFoundError: The following packages are not available from current channels:

- pytorch

Current channels:
- https://repo.continuum.io/pkgs/main/win-32
- https://repo.continuum.io/pkgs/main/noarch
- https://repo.continuum.io/pkgs/free/win-32
- https://repo.continuum.io/pkgs/free/noarch
- https://repo.continuum.io/pkgs/r/win-32
- https://repo.continuum.io/pkgs/r/noarch
- https://repo.continuum.io/pkgs/pro/win-32
- https://repo.continuum.io/pkgs/pro/noarch
- https://repo.continuum.io/pkgs/msys2/win-32
- https://repo.continuum.io/pkgs/msys2/noarch
```

PyTorch doesn’t work on a 32-bit system. Please use 64-bit versions of Windows and Python.

Import error#

```
from torch._C import *

ImportError: DLL load failed: The specified module could not be found.
```

The problem is caused by missing essential files. For the wheel packages, since we didn’t pack some libraries and the VS 2017 redistributable files in, please make sure you install them manually. The VS 2017 redistributable installer can be downloaded. You should also pay attention to your installation of NumPy: make sure it uses MKL instead of OpenBLAS. You may type in the following command.

```bat
pip install numpy mkl intel-openmp mkl_fft
```

Usage (multiprocessing)#

Multiprocessing error without if-clause protection#

```
RuntimeError:
    An attempt has been made to start a new process before the
    current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.
```

The implementation of multiprocessing is different on Windows, which uses spawn instead of fork. So we have to wrap the code with an if-clause to protect it from executing multiple times. Refactor your code into the following structure.

```python
import torch

def main():
    for i, data in enumerate(dataloader):
        # do something here
        pass

if __name__ == '__main__':
    main()
```
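The same guard is needed for any use of Python’s multiprocessing on Windows, not just the DataLoader. A self-contained sketch using only the standard library (the `square` worker function is purely illustrative) can reproduce the spawn behavior on any OS:

```python
import multiprocessing as mp

def square(x):
    # Worker functions must be defined at the top level of the module
    # so that spawned child processes can import and find them.
    return x * x

def main():
    # Force the "spawn" start method to mimic Windows behavior everywhere.
    ctx = mp.get_context("spawn")
    with ctx.Pool(processes=2) as pool:
        print(pool.map(square, range(5)))  # prints [0, 1, 4, 9, 16]

if __name__ == '__main__':
    # Without this guard, each spawned child re-executes the module's
    # top-level code and tries to start workers of its own, raising the
    # RuntimeError shown above.
    main()
```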

Multiprocessing error “Broken pipe”#

```
ForkingPickler(file, protocol).dump(obj)

BrokenPipeError: [Errno 32] Broken pipe
```

This issue happens when the child process ends before the parent process finishes sending data. There may be something wrong with your code. You can debug your code by reducing the num_workers of DataLoader to zero and seeing if the issue persists.

Multiprocessing error “driver shut down”#

```
Couldn't open shared file mapping: <torch_14808_1591070686>, error code: <1455> at torch\lib\TH\THAllocator.c:154

[windows] driver shut down
```

Please update your graphics driver. If this persists, your graphics card may be too old or the calculation may be too heavy for your card. Please update the TDR settings according to this post.
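For reference, the TDR timeout lives in the registry under the GraphicsDrivers key; the 10-second delay below is an illustrative value, and the change requires administrator rights and a reboot to take effect:

```bat
REM Run from an elevated command prompt. Raises the GPU timeout
REM (TdrDelay) from the 2-second default to 10 seconds.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers" /v TdrDelay /t REG_DWORD /d 10 /f
```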

CUDA IPC operations#

```
THCudaCheck FAIL file=torch\csrc\generic\StorageSharing.cpp line=252 error=63 : OS call failed or operation not supported on this OS
```

They are not supported on Windows. Something like doing multiprocessing on CUDA tensors cannot succeed; there are two alternatives for this.

1. Don’t use multiprocessing. Set the num_workers of DataLoader to zero.

2. Share CPU tensors instead. Make sure your custom DataSet returns CPU tensors.