Translator: belonHan

torch.utils.cpp_extension.CppExtension(name, sources, *args, **kwargs)

Creates a setuptools.Extension for C++.

A convenience method that creates a setuptools.Extension with the minimal (but often sufficient) arguments to build a C++ extension.

All arguments are forwarded to the setuptools.Extension constructor.

Example

>>> from setuptools import setup
>>> from torch.utils.cpp_extension import BuildExtension, CppExtension
>>> setup(
        name='extension',
        ext_modules=[
            CppExtension(
                name='extension',
                sources=['extension.cpp'],
                extra_compile_args=['-g']),
        ],
        cmdclass={
            'build_ext': BuildExtension
        })
torch.utils.cpp_extension.CUDAExtension(name, sources, *args, **kwargs)

Creates a setuptools.Extension for CUDA/C++.

A convenience method that creates a setuptools.Extension with the minimal (but often sufficient) arguments to build a CUDA/C++ extension. This includes the CUDA include path, library path, and runtime library.

All arguments are forwarded to the setuptools.Extension constructor.

Example

>>> from setuptools import setup
>>> from torch.utils.cpp_extension import BuildExtension, CUDAExtension
>>> setup(
        name='cuda_extension',
        ext_modules=[
            CUDAExtension(
                name='cuda_extension',
                sources=['extension.cpp', 'extension_kernel.cu'],
                extra_compile_args={'cxx': ['-g'],
                                    'nvcc': ['-O2']})
        ],
        cmdclass={
            'build_ext': BuildExtension
        })
torch.utils.cpp_extension.BuildExtension(*args, **kwargs)

A custom setuptools build extension.

This setuptools.build_ext subclass takes care of passing the minimum required compiler flags (e.g. -std=c++11) as well as mixed C++/CUDA compilation (and support for CUDA files in general).

When using BuildExtension, a dictionary (rather than a plain list) may be supplied for extra_compile_args, mapping a language (cxx or cuda) to a list of additional compiler flags. This makes it possible to supply different flags to the C++ and CUDA compilers during mixed compilation.

torch.utils.cpp_extension.load(name, sources, extra_cflags=None, extra_cuda_cflags=None, extra_ldflags=None, extra_include_paths=None, build_directory=None, verbose=False, with_cuda=None, is_python_module=True)

Loads a PyTorch C++ extension just-in-time (JIT).

To load an extension, a Ninja build file is emitted, which is used to compile the given sources into a dynamic library. This library is subsequently loaded into the current Python process as a module and returned from the function, ready for use.

By default, the directory in which the build file and the resulting library are placed is <tmp>/torch_extensions/<name>, where <tmp> is the temporary folder on the current platform and <name> is the name of the extension. This location can be overridden in two ways. First, if the TORCH_EXTENSIONS_DIR environment variable is set, it replaces <tmp>/torch_extensions, and all extensions are compiled into subfolders of this directory. Second, if the build_directory argument to this function is supplied, it overrides the entire path, i.e. the library will be compiled directly into that folder.
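For example, a minimal sketch of the two override mechanisms (the extension name, source file, and paths below are hypothetical):

import os
from torch.utils.cpp_extension import load

# Option 1: redirect all JIT-built extensions via the environment variable;
# each extension is then compiled into a subfolder of this directory.
os.environ['TORCH_EXTENSIONS_DIR'] = '/tmp/my_torch_extensions'

# Option 2: pin a single extension's build output to an explicit folder.
module = load(
    name='my_extension',
    sources=['my_extension.cpp'],
    build_directory='/tmp/my_extension_build')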

To compile the sources, the default system compiler (c++) is used, which can be overridden by setting the CXX environment variable. To pass additional arguments to the compilation process, extra_cflags or extra_ldflags can be provided. For example, to compile your extension with optimizations, pass extra_cflags=['-O3']. You can also use extra_cflags to pass further include directories.

CUDA support with mixed compilation is provided. Simply pass CUDA source files (.cu or .cuh) along with the other sources. Such files will be detected and compiled with nvcc rather than the C++ compiler. This includes passing the CUDA lib64 directory as a library directory and linking cudart. You can pass additional flags to nvcc via extra_cuda_cflags, just like extra_cflags for the C++ compiler. Various heuristics are used to find the CUDA install directory, and they usually work fine. If not, setting the CUDA_HOME environment variable is the safest option.
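As an illustration, a hedged sketch of a mixed C++/CUDA build with per-compiler flags (the file names and CUDA path are placeholders):

import os
from torch.utils.cpp_extension import load

# Point the build at a specific CUDA installation if the heuristics fail.
os.environ.setdefault('CUDA_HOME', '/usr/local/cuda')

# .cu files are detected and routed to nvcc; .cpp files go to the C++ compiler.
module = load(
    name='mixed_extension',
    sources=['host_code.cpp', 'kernels.cu'],
    extra_cflags=['-O3'],        # forwarded to the C++ compiler
    extra_cuda_cflags=['-O2'],   # forwarded to nvcc
    verbose=True)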

Parameters:

  • name – The name of the extension to build. This MUST be the same as the name of the pybind11 module!
  • sources – A list of relative or absolute paths to C++ source files.
  • extra_cflags – An optional list of compiler flags to forward to the build.
  • extra_cuda_cflags – An optional list of compiler flags to forward to nvcc when building CUDA sources.
  • extra_ldflags – An optional list of linker flags to forward to the build.
  • extra_include_paths – An optional list of include directories to forward to the build.
  • build_directory – An optional path to use as the build workspace.
  • verbose – If True, turns on verbose logging of the load steps.
  • with_cuda – Determines whether CUDA headers and libraries are included in the build. Defaults to None, in which case this is determined automatically by the presence of .cu or .cuh files in sources. Set to True to force CUDA headers and libraries to be included.
  • is_python_module – Defaults to True, in which case the library is imported as a Python module. If False, it is loaded into the process as a plain dynamic library instead.
Returns: If is_python_module is True, returns the loaded PyTorch extension as a Python module. If is_python_module is False, returns nothing (as a side effect, the shared library is loaded into the process).

Example

>>> from torch.utils.cpp_extension import load
>>> module = load(
        name='extension',
        sources=['extension.cpp', 'extension_kernel.cu'],
        extra_cflags=['-O2'],
        verbose=True)
torch.utils.cpp_extension.load_inline(name, cpp_sources, cuda_sources=None, functions=None, extra_cflags=None, extra_cuda_cflags=None, extra_ldflags=None, extra_include_paths=None, build_directory=None, verbose=False, with_cuda=None, is_python_module=True)

Compiles and loads a PyTorch C++ extension just-in-time (JIT) from string sources.

This function behaves exactly like load(), but takes its sources as strings rather than filenames. These strings are saved to files in the build directory, after which load_inline() behaves identically to load().

See the tests for good examples of using this function.

The sources may omit two required parts of a typical non-inline C++ extension: the necessary header includes and the (pybind11) binding code. More precisely, the strings passed to cpp_sources are first concatenated into a single .cpp file, which is then prepended with #include <torch/extension.h>.

Furthermore, if the functions argument is supplied, bindings are generated automatically for each function specified. functions can be either a list of function names or a dictionary mapping function names to docstrings. If a list is given, the name of each function is used as its docstring.

The sources in cuda_sources are concatenated into a separate .cu file, which is prepended with the torch/types.h, cuda.h and cuda_runtime.h headers. The .cpp and .cu files are compiled separately and ultimately linked into a single library. Note that no bindings are generated for the functions in cuda_sources themselves. To bind a CUDA kernel, you must create a C++ function that calls it, and either declare or define that C++ function in cpp_sources (and include its name in functions).
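For example, a sketch of that pattern (all names are illustrative, and the kernel assumes a contiguous float CUDA tensor): the CUDA source defines a kernel plus a C++ launcher, and cpp_sources declares the launcher so a binding can be generated for it.

from torch.utils.cpp_extension import load_inline

cuda_source = '''
__global__ void scale_kernel(const float* x, float* out, float s, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) out[i] = x[i] * s;
}

at::Tensor scale(at::Tensor x, float s) {
  auto out = at::empty_like(x);
  int n = x.numel();
  int threads = 256;
  int blocks = (n + threads - 1) / threads;
  scale_kernel<<<blocks, threads>>>(
      x.data<float>(), out.data<float>(), s, n);
  return out;
}
'''

# Declaring scale() here is what lets functions=... bind it;
# its definition lives in the .cu file above.
cpp_source = 'at::Tensor scale(at::Tensor x, float s);'

module = load_inline(
    name='inline_cuda_extension',
    cpp_sources=[cpp_source],
    cuda_sources=[cuda_source],
    functions={'scale': 'Multiply a CUDA float tensor by a scalar.'})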

See load() for a description of the arguments omitted below.

Parameters:

  • cpp_sources – A string, or list of strings, containing C++ source code.
  • cuda_sources – A string, or list of strings, containing CUDA source code.
  • functions – A list of function names for which to generate bindings. If a dictionary is given, its keys are function names and its values are docstrings.
  • with_cuda – Determines whether CUDA headers and libraries are included in the build. Defaults to None, in which case this is determined by whether cuda_sources is provided. Set to True to force CUDA headers and libraries to be included.

Example

>>> from torch.utils.cpp_extension import load_inline
>>> source = '''
at::Tensor sin_add(at::Tensor x, at::Tensor y) {
  return x.sin() + y.sin();
}
'''
>>> module = load_inline(name='inline_extension',
                         cpp_sources=[source],
                         functions=['sin_add'])
torch.utils.cpp_extension.include_paths(cuda=False)

Gets the include paths required to build a C++ or CUDA extension.

  • Parameters: cuda – If True, includes CUDA-specific include paths.
  • Returns: A list of include path strings.
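As a usage sketch, the returned paths can be reused when constructing a setuptools.Extension by hand (the extension name and source file are hypothetical):

from setuptools import Extension
from torch.utils.cpp_extension import include_paths

# Reuse PyTorch's header locations in a manually constructed Extension.
ext = Extension(
    name='manual_extension',
    sources=['manual_extension.cpp'],
    include_dirs=include_paths(cuda=False),
    language='c++')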
torch.utils.cpp_extension.check_compiler_abi_compatibility(compiler)

Verify that a given compiler is compatible with the PyTorch ABI.

  • Parameters: compiler (str) – The name of the compiler executable to check (e.g. g++). Must be executable in a shell process.
  • Returns: False if the compiler is (likely) ABI-incompatible with PyTorch, otherwise True.
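A small usage sketch:

from torch.utils.cpp_extension import check_compiler_abi_compatibility

# Check a compiler before building; False signals a (likely) ABI mismatch.
if not check_compiler_abi_compatibility('g++'):
    print('g++ may be ABI-incompatible with this PyTorch build')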
torch.utils.cpp_extension.verify_ninja_availability()

Returns True if the Ninja build system is available on the system.
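Assuming the return-value behavior documented above, a minimal availability check might look like:

from torch.utils.cpp_extension import verify_ninja_availability

# Ninja is required for JIT extension builds via load() / load_inline().
if verify_ninja_availability():
    print('ninja is available; JIT extension builds can proceed')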