In PyTorch, if you jump to the definition of a Python function, nine times out of ten you will land in torch/_C/_VariableFunctions.pyi. But if you search PyTorch's GitHub repo for this file, you will only find the similarly named torch/_C/_VariableFunctions.pyi.in; torch/_C/_VariableFunctions.pyi itself is nowhere to be found.
Open torch/_C/_VariableFunctions.pyi and look at its first line:
# @generated from torch/_C/_VariableFunctions.pyi.in
The first line says it all: the file is generated dynamically at build time from torch/_C/_VariableFunctions.pyi.in.
This post explores how PyTorch generates its pyi files. The process roughly consists of two steps:
generating pyi.in from py
generating pyi from pyi.in
But before that, let's first look at what role pyi files play in Python.
First, where does the name of the pyi file type come from? According to What does "i" represent in Python .pyi extension?:
The i in .pyi stands for 'interface'. The .pyi extension was first mentioned in this GitHub issue thread where JukkaL says: I'd probably prefer an extension with just a single dot. It also needs to be something that is not in use (it should not be used by cython, etc.). .pys seems to be used in Windows (or was). Maybe .pyi, where i stands for an interface definition?
So the i in pyi stands for interface.
A pyi file implements a "stub" file (definition from Martin Fowler): Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test.
So what a pyi file represents is a stub; see 樁 (計算機) [Stub (computing)]:
A stub (or method stub) is a piece of code used to stand in for some other programming functionality. A stub may simulate the behavior of existing code (such as a procedure on a remote machine) or be a temporary substitute for yet-to-be-developed code. Stubbing is therefore very useful in porting, distributed computing, and general software development and testing.
As pyi文件是干嘛的?(一文读懂Python的存根文件和类型检查) puts it, a pyi file merely provides type hints for IDEs and type checkers; it is not required for the code to run.
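To make the stub idea concrete, here is a hypothetical pair of files (greeter.py and the greeter.pyi stub a tool could derive from it); the interpreter only ever runs the .py, while the IDE and type checker read the .pyi:

# greeter.py (hypothetical example): the actual implementation
def greet(name, excited=False):
    suffix = "!" if excited else "."
    return f"Hello, {name}{suffix}"

# greeter.pyi (hypothetical stub): signatures only, bodies elided with "..."
# Note the "= ..." convention for default values, which we will meet again below.
def greet(name: str, excited: bool = ...) -> str: ...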
The same holds in PyTorch: torch/_C/_VariableFunctions.pyi is used only for type hints. The actual wiring between Python functions and their C++ implementations is specified by torch/csrc/autograd/generated/python_torch_functions_i.cpp, which is likewise generated at build time; see PyTorch中的python_torch_functions_i.cpp檔案生成機制.
The PyTorch source tree contains the following .pyi.in files:
torch/_C/__init__.pyi.in
torch/_C/_nn.pyi.in
torch/_C/return_types.pyi.in
torch/_C/_VariableFunctions.pyi.in
torch/nn/functional.pyi.in
torch/utils/data/datapipes/datapipe.pyi.in
According to the comment in torch/nn/functional.pyi.in:
# These stubs were generated by running stubgen (`stubgen --parse-only functional.py`), followed by manual cleaning.
functional.pyi.in was produced by running mypy's stubgen tool on functional.py and then cleaning up the result by hand.
Let's try running stubgen on torch/nn/functional.py ourselves. First copy functional.py somewhere convenient, then run:
stubgen functional.py
If you see import-related errors like the following, simply comment out the offending lines by hand:
Critical error during semantic analysis: functional.py:23: error: No parent module -- cannot perform relative import
functional.py:24: error: No parent module -- cannot perform relative import
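Alternatively, note that the comment in functional.pyi.in quoted above used the --parse-only flag, which makes stubgen work purely from the AST without running semantic analysis, sidestepping this class of import error:

stubgen --parse-only functional.py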
For now, focus on just this fragment:
def fractional_max_pool2d_with_indices(
    input: Tensor, kernel_size: BroadcastingList2[int],
    output_size: Optional[BroadcastingList2[int]] = None,
    output_ratio: Optional[BroadcastingList2[float]] = None,
    return_indices: bool = False,
    _random_samples: Optional[Tensor] = None,
) -> Tuple[Tensor, Tensor]:
    # ...

fractional_max_pool2d = boolean_dispatch(
    arg_name="return_indices",
    arg_index=4,
    default=False,
    if_true=fractional_max_pool2d_with_indices,
    if_false=_fractional_max_pool2d,
    module_name=__name__,
    func_name="fractional_max_pool2d",
)
The corresponding content in the generated functional.pyi:
# ...
def fractional_max_pool2d_with_indices(input: Tensor, kernel_size: BroadcastingList2[int], output_size: Optional[BroadcastingList2[int]] = ..., output_ratio: Optional[BroadcastingList2[float]] = ..., return_indices: bool = ..., _random_samples: Optional[Tensor] = ...) -> Tuple[Tensor, Tensor]: ...
fractional_max_pool2d: Incomplete
# ...
The signature of fractional_max_pool2d_with_indices is nearly identical to the original, whereas fractional_max_pool2d is annotated as Incomplete because stubgen cannot infer its type.
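Why can't it be inferred? Because boolean_dispatch builds the returned function at runtime. Below is a minimal sketch of the idea, not PyTorch's actual code (the real implementation lives in torch/_jit_internal.py); since the wrapper is constructed dynamically, a purely syntactic pass like stubgen has no signature to read off:

# Minimal sketch of the boolean_dispatch idea (illustrative only).
def boolean_dispatch_sketch(arg_name, arg_index, default, if_true, if_false):
    def fn(*args, **kwargs):
        # Pick the dispatch flag from kwargs, the positionals, or the default.
        if arg_name in kwargs:
            flag = kwargs[arg_name]
        elif arg_index < len(args):
            flag = args[arg_index]
        else:
            flag = default
        return if_true(*args, **kwargs) if flag else if_false(*args, **kwargs)
    return fn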
In principle, a .pyi.in file should be derived from a .py file, yet the .pyi.in files under torch/_C have no corresponding .py files. Presumably each of them was assembled by merging the stubs of several .py files into a single .pyi.in file.
Normally a .pyi file is produced directly by stubgen. In PyTorch, however, the stubgen output was first hand-edited into a .pyi.in file, and the .pyi file is then generated from that .pyi.in file by a Python script.
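The substitution step itself is unremarkable: a .pyi.in file is a template containing ${...} placeholders that the generator fills in. A minimal sketch of the mechanism, using string.Template as a stand-in for torchgen's CodeTemplate and a made-up template body:

from string import Template  # stand-in for torchgen's CodeTemplate

# A toy template in the spirit of a .pyi.in file (hypothetical contents).
template = Template(
    "# @generated from example.pyi.in\n"
    "${function_hints}\n"
)
print(template.substitute(function_hints="def rand(*size: int) -> Tensor: ..."))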
A custom target named torch_python_stubs is added, depending on the pyi files below. (For add_custom_target and the add_custom_command we will see shortly, refer to cmake的add_custom_command及add_custom_target.)
add_custom_target(torch_python_stubs DEPENDS
    "${TORCH_SRC_DIR}/_C/__init__.pyi"
    "${TORCH_SRC_DIR}/_C/_VariableFunctions.pyi"
    "${TORCH_SRC_DIR}/nn/functional.pyi"
    "${TORCH_SRC_DIR}/utils/data/datapipes/datapipe.pyi"
)
Looking at the OUTPUT argument of the add_custom_command below, we can see that this custom command is precisely what generates the first three pyi files that torch_python_stubs depends on. As for how the remaining datapipe.pyi is generated, see the datapipe.pyi section.
file(GLOB_RECURSE torchgen_python "${PROJECT_SOURCE_DIR}/torchgen/*.py")
file(GLOB_RECURSE autograd_python "${TOOLS_PATH}/autograd/*.py")
file(GLOB_RECURSE pyi_python "${TOOLS_PATH}/pyi/*.py")
add_custom_command(
    OUTPUT
    "${TORCH_SRC_DIR}/_C/__init__.pyi"
    "${TORCH_SRC_DIR}/_C/_VariableFunctions.pyi"
    "${TORCH_SRC_DIR}/nn/functional.pyi"
    COMMAND
    "${PYTHON_EXECUTABLE}" -mtools.pyi.gen_pyi
      --native-functions-path "aten/src/ATen/native/native_functions.yaml"
      --tags-path "aten/src/ATen/native/tags.yaml"
      --deprecated-functions-path "tools/autograd/deprecated.yaml"
    DEPENDS
    "${TORCH_SRC_DIR}/_C/__init__.pyi.in"
    "${TORCH_SRC_DIR}/_C/_VariableFunctions.pyi.in"
    "${TORCH_SRC_DIR}/nn/functional.pyi.in"
    "${TORCH_ROOT}/aten/src/ATen/native/native_functions.yaml"
    "${TORCH_ROOT}/aten/src/ATen/native/tags.yaml"
    "${TORCH_ROOT}/tools/autograd/deprecated.yaml"
    ${pyi_python}
    ${autograd_python}
    ${torchgen_python}
    WORKING_DIRECTORY
    "${TORCH_ROOT}"
)
The entry point here is the COMMAND in add_custom_command, which invokes tools/pyi/gen_pyi.py via "${PYTHON_EXECUTABLE}" -mtools.pyi.gen_pyi. Its inputs are the _C/__init__.pyi.in, _C/_VariableFunctions.pyi.in and nn/functional.pyi.in files listed under DEPENDS, and once the program finishes it produces the three pyi files listed under OUTPUT.
torch/_C/_nn.pyi and torch/_C/return_types.pyi are also generated by tools/pyi/gen_pyi.py, so why are they not listed in the DEPENDS and OUTPUT of add_custom_target and add_custom_command?
A shared library named torch_python is added; building it produces build/lib/libtorch_python.so.
add_library(torch_python SHARED ${TORCH_PYTHON_SRCS})
Next, torch_python is declared to depend on the torch_python_stubs custom target.
add_dependencies(torch_python torch_python_stubs)
On non-macOS systems, a library named nnapi_backend is also built, and torch_python is among its dependencies.
# Skip building this library under MacOS, since it is currently failing to build on Mac
# Github issue #61930
if(NOT ${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
    # Add Android Nnapi delegate library
    add_library(nnapi_backend SHARED
            ${TORCH_SRC_DIR}/csrc/jit/backends/nnapi/nnapi_backend_lib.cpp
            ${TORCH_SRC_DIR}/csrc/jit/backends/nnapi/nnapi_backend_preprocess.cpp)
    # Pybind11 requires explicit linking of the torch_python library
    target_link_libraries(nnapi_backend PRIVATE torch torch_python pybind::pybind11)
endif()
To summarize, there is a chain of dependencies nnapi_backend -> torch_python -> torch_python_stubs -> torch/_C/__init__.pyi, torch/_C/_VariableFunctions.pyi, torch/nn/functional.pyi, so it is building the nnapi_backend library that triggers tools/pyi/gen_pyi.py to generate the .pyi files.
As we saw above, "${PYTHON_EXECUTABLE}" -mtools.pyi.gen_pyi invokes tools/pyi/gen_pyi.py, whose job is to generate .pyi files from .pyi.in files.
def main() -> None:
    parser = argparse.ArgumentParser(description="Generate type stubs for PyTorch")
    parser.add_argument(
        "--native-functions-path",
        metavar="NATIVE",
        default="aten/src/ATen/native/native_functions.yaml",
        help="path to native_functions.yaml",
    )
    parser.add_argument(
        "--tags-path",
        metavar="TAGS",
        default="aten/src/ATen/native/tags.yaml",
        help="path to tags.yaml",
    )
    parser.add_argument(
        "--deprecated-functions-path",
        metavar="DEPRECATED",
        default="tools/autograd/deprecated.yaml",
        help="path to deprecated.yaml",
    )
    parser.add_argument("--out", metavar="OUT", default=".", help="path to output directory")
    args = parser.parse_args()
    fm = FileManager(install_dir=args.out, template_dir=".", dry_run=False)
    gen_pyi(args.native_functions_path, args.tags_path, args.deprecated_functions_path, fm)


if __name__ == "__main__":
    main()
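The same step can therefore be reproduced by hand. A hypothetical manual invocation from a Python shell, assuming the current working directory is the PyTorch repo root (dry_run=True so nothing is actually written):

from torchgen.utils import FileManager
from tools.pyi.gen_pyi import gen_pyi

fm = FileManager(install_dir=".", template_dir=".", dry_run=True)
gen_pyi(
    "aten/src/ATen/native/native_functions.yaml",
    "aten/src/ATen/native/tags.yaml",
    "tools/autograd/deprecated.yaml",
    fm,
)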
From the comments in gen_pyi.py:
- We start off with a hand-written __init__.pyi.in file. This
  file contains type definitions for everything we cannot automatically
  generate, including pure Python definitions directly in __init__.py
  (the latter case should be pretty rare).
- We go through automatically bound functions based on the
  type information recorded in native_functions.yaml and
  generate type hints for them (generate_type_hints)
native_functions.yaml records the type information of the automatically bound functions (presumably the bindings between Python and C++ functions). Based on that type information, gen_pyi.py generates type hints via the generate_type_hints function (which will show up later, in the unsorted_function_hints section).
tools/pyi/gen_pyi.py
The gen_pyi function generates _C/__init__.pyi, _C/_VariableFunctions.pyi, torch/_VF.pyi and torch/return_types.pyi from _C/__init__.pyi.in, _C/_VariableFunctions.pyi.in and torch/_C/return_types.pyi.in.
def gen_pyi(
    native_yaml_path: str,
    tags_yaml_path: str,
    deprecated_yaml_path: str,
    fm: FileManager,
) -> None:
    """gen_pyi()

    This function generates a pyi file for torch.
    """
    # ...
The first three parameters default to:
native_yaml_path: aten/src/ATen/native/native_functions.yaml
tags_yaml_path: aten/src/ATen/native/tags.yaml
deprecated_yaml_path: tools/autograd/deprecated.yaml
The two arguments passed to fm's constructor are:
install_dir: args.out, i.e. '.'
template_dir: '.'
Parsing native_functions.yaml and tags.yaml yields the native_functions variable:
native_functions = parse_native_yaml(native_yaml_path, tags_yaml_path).native_functions
native_functions = list(filter(should_generate_py_binding, native_functions))
native_functions is a list of NativeFunction objects representing the functions in the aten namespace. Its zeroth element is:
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='_cast_Byte', inplace=False, dunder_method=False, functional_overload=False), overload_name=''), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='non_blocking', type=BaseType(name=<BaseTy.bool: 9>), default='False', annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=9), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=False, has_composite_implicit_autograd_kernel=True, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags=set())
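These dumps can be reproduced in a REPL. A hypothetical session, again assuming the PyTorch repo root as the working directory:

from torchgen.gen import parse_native_yaml

nf = parse_native_yaml(
    "aten/src/ATen/native/native_functions.yaml",
    "aten/src/ATen/native/tags.yaml",
).native_functions
print(nf[0])  # the _cast_Byte entry shown above (before filtering)
print([str(f.func.name) for f in nf if f.func.name.name.base == "rand"])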
The elements representing the rand function are shown below. aten::rand has six overload names: names, generator_with_names, the empty string, generator, out, and generator_out. They can be cross-referenced against native_functions.yaml:
- func: rand.names(SymInt[] size, *, Dimname[]? names, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
  device_check: NoCheck
  device_guard: False
  dispatch:
    CompositeExplicitAutograd: rand
  autogen: rand.names_out
  tags: nondeterministic_seeded
The autogen field of this yaml entry contains rand.names_out. Comparing with the corresponding element of native_functions, we can see that the NativeFunction's autogen member likewise contains an OperatorName whose overload_name is names_out.
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='names'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, annotation=None),), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=False, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4254), autogen=[OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='names_out')], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False,has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'})
- func: rand.generator_with_names(SymInt[] size, *, Generator? generator, Dimname[]? names, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
  device_check: NoCheck
  device_guard: False
  tags: nondeterministic_seeded
  dispatch:
    CompositeExplicitAutograd: rand
  autogen: rand.generator_with_names_out
The autogen field of this yaml entry contains rand.generator_with_names_out; correspondingly, the NativeFunction's autogen member also contains an OperatorName whose overload_name is generator_with_names_out.
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator_with_names'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None), Argument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, annotation=None)), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=False, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4262), autogen=[OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator_with_names_out')], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'})
- func: rand(SymInt[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
  tags: nondeterministic_seeded
  dispatch:
    CompositeExplicitAutograd: rand
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name=''), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4270), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'})
- func: rand.generator(SymInt[] size, *, Generator? generator, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
  tags: nondeterministic_seeded
  dispatch:
    CompositeExplicitAutograd: rand
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None),), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4275), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'})
- func: rand.out(SymInt[] size, *, Tensor(a!) out) -> Tensor(a!)
  tags: nondeterministic_seeded
  dispatch:
    CompositeExplicitAutograd: rand_out
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4280), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'})
- func: rand.generator_out(SymInt[] size, *, Generator? generator, Tensor(a!) out) -> Tensor(a!)
  tags: nondeterministic_seeded
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator_out'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4285), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=False, has_composite_implicit_autograd_kernel=True, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'})
Because rand.names and rand.generator_with_names each autogen an additional out variant (rand.names_out and rand.generator_with_names_out), the six rand-related entries in native_functions.yaml ultimately give rise to eight functions in the C++ aten namespace.
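Continuing the hypothetical REPL session above, the 6 + 2 count can be checked directly:

rand_fns = [f for f in nf if f.func.name.name.base == "rand"]
autogenned = [op for f in rand_fns for op in f.autogen]
print(len(rand_fns), len(autogenned))  # 6 yaml entries + 2 autogen'd = 8 functions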
Next, the add overload that takes self and other and returns the result:
- func: add.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor
  device_check: NoCheck   # TensorIterator
  structured_delegate: add.out
  variants: function, method
  dispatch:
    SparseCPU, SparseCUDA: add_sparse
    SparseCsrCPU, SparseCsrCUDA: add_sparse_csr
    MkldnnCPU: mkldnn_add
    ZeroTensor: add_zerotensor
    NestedTensorCPU, NestedTensorCUDA: NestedTensor_add_Tensor
  tags: [canonical, pointwise]
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>, <Variant.method: 2>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=497), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise', 'canonical'})
And the in-place version that mutates the self argument directly:
- func: add_.Tensor(Tensor(a!) self, Tensor other, *, Scalar alpha=1) -> Tensor(a!)
  device_check: NoCheck   # TensorIterator
  variants: method
  structured_delegate: add.out
  dispatch:
    SparseCPU, SparseCUDA: add_sparse_
    SparseCsrCPU, SparseCsrCUDA: add_sparse_csr_
    MkldnnCPU: mkldnn_add_
    NestedTensorCPU, NestedTensorCUDA: NestedTensor_add__Tensor
  tags: pointwise
According to the pytorch native README.md:
Tensor(a!) - members of a may be written to thus mutating the underlying data.
The notation Tensor(a!) self means that self is both an input and an output argument.
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=True, dunder_method=False, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=()))), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.method: 2>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=509), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise'})
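As a quick illustration of the in-place semantics that Tensor(a!) declares, add_ writes into self and returns it:

import torch

x = torch.ones(2)
y = x.add_(1)   # in-place: self is mutated and also returned
assert y is x   # the return value aliases self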
Finally, the add overload with an out output argument:
- func: add.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) out) -> Tensor(a!)
  device_check: NoCheck   # TensorIterator
  structured: True
  structured_inherits: TensorIteratorBase
  ufunc_inner_loop:
    Generic: add (AllAndComplex, BFloat16, Half, ComplexHalf)
    ScalarOnly: add (Bool)
  dispatch:
    SparseCPU: add_out_sparse_cpu
    SparseCUDA: add_out_sparse_cuda
    SparseCsrCPU: add_out_sparse_csr_cpu
    SparseCsrCUDA: add_out_sparse_csr_cuda
    MkldnnCPU: mkldnn_add_out
    MPS: add_out_mps
  tags: pointwise
NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=520), autogen=[], ufunc_inner_loop={<UfuncKey.Generic: 7>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7910>, ufunc_key=<UfuncKey.Generic: 7>), <UfuncKey.ScalarOnly: 6>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7b80>, ufunc_key=<UfuncKey.ScalarOnly: 6>)}, structured=True, structured_delegate=None, structured_inherits='TensorIteratorBase', precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise'})
function_signatures = load_signatures(native_functions, deprecated_yaml_path, method=False, pyi=True)
function_signatures is a list of PythonSignatureNativeFunctionPair; its zeroth element is:
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='_cast_Byte', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='non_blocking', type=BaseType(name=<BaseTy.bool: 9>), default='False', default_init=None)), input_kwargs=(), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='_cast_Byte', inplace=False, dunder_method=False, functional_overload=False), overload_name=''), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='non_blocking', type=BaseType(name=<BaseTy.bool: 9>), default='False', annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=9), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=False, has_composite_implicit_autograd_kernel=True, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags=set()))
The elements representing the rand function are shown below. There are six of them, matching one-to-one the native_functions entries we saw above:
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, default_init=None),), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False),
overload_name='names'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, annotation=None),), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=False, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4254),
autogen=[OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='names_out')],
ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))
Note that names will additionally generate a names_out function.
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, default_init=None), PythonArgument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, default_init=None)), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False),
overload_name='generator_with_names'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None), Argument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, annotation=None)), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=False, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4262),
autogen=[OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator_with_names_out')],
ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))
Note that generator_with_names will additionally generate a generator_with_names_out function.
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False),
overload_name=''), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4270),
autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, default_init=None),), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False),
overload_name='generator'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None),), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4275),
autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False),
overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4280),
autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, default_init=None),), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False),
overload_name='generator_out'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4285),
autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=False, has_composite_implicit_autograd_kernel=True, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))
The elements representing the add function are shown below. There are three, again matching the native_functions entries one-to-one.
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='add', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None)), input_kwargs=(PythonArgument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', default_init=None),), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>, <Variant.method: 2>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=497), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise', 'canonical'}))
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='add_', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None)), input_kwargs=(PythonArgument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', default_init=None),), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=True, dunder_method=False, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=()))), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.method: 2>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=509), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise'}))
PythonSignatureNativeFunctionPair(signature=PythonSignature(name='add', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None)), input_kwargs=(PythonArgument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', default_init=None),), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(), method=False), function=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=520), autogen=[], ufunc_inner_loop={<UfuncKey.Generic: 7>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7910>, ufunc_key=<UfuncKey.Generic: 7>), <UfuncKey.ScalarOnly: 6>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7b80>, ufunc_key=<UfuncKey.ScalarOnly: 6>)}, structured=True, structured_delegate=None, structured_inherits='TensorIteratorBase', precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise'}))
sig_groups is a list of PythonSignatureGroup. A PythonSignatureGroup pairs a PythonSignature with a NativeFunction; compared with PythonSignatureNativeFunctionPair, it carries one extra member, outplace.
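Based on the fields visible in these dumps, the two container types can be sketched roughly as follows (a simplified sketch for orientation only; the real definitions live in torchgen/api/python.py and carry more detail):

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class PythonSignatureNativeFunctionPair:
    signature: "PythonSignature"   # the Python-facing signature
    function: "NativeFunction"     # the entry parsed from native_functions.yaml

@dataclass(frozen=True)
class PythonSignatureGroup:
    signature: "PythonSignature"
    base: "NativeFunction"                 # the functional/inplace variant
    outplace: Optional["NativeFunction"]   # the matching out= variant, if any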
sig_groups = get_py_torch_functions(function_signatures)
The zeroth element of sig_groups is:
PythonSignatureGroup(signature=PythonSignature(name='__and__', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None)), input_kwargs=(), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(), method=False), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='and', inplace=False, dunder_method=True, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.method: 2>, <Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=7635), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=False, has_composite_implicit_autograd_kernel=True, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags=set()), outplace=None)
The four elements representing the rand function are shown below. The original eight overloads have been organized pairwise, according to whether an out counterpart exists, into four groups.
PythonSignatureGroup(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, default_init=None), PythonArgument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, default_init=None)), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator_with_names'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None), Argument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, annotation=None)), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=False, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4262), autogen=[OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False), overload_name='generator_with_names_out')], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, 
has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}), outplace=None)
PythonSignatureGroup(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, default_init=None),), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False),
overload_name='generator'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None),), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4275), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}), outplace=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False),
overload_name='generator_out'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='generator', type=OptionalType(elem=BaseType(name=<BaseTy.Generator: 1>)), default=None, annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4285), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=False, has_composite_implicit_autograd_kernel=True, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))
PythonSignatureGroup(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(PythonArgument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, default_init=None),), output_args=None, returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False),
overload_name='names'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='names', type=OptionalType(elem=ListType(elem=BaseType(name=<BaseTy.Dimname: 5>), size=None)), default=None, annotation=None),), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=False, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4254), autogen=[OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False),
overload_name='names_out')], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}), outplace=None)
PythonSignatureGroup(signature=PythonSignature(name='rand', input_args=(PythonArgument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, default_init=None),), input_kwargs=(), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(PythonArgument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', default_init=None), PythonArgument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', default_init=None), PythonArgument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', default_init='torch::tensors::get_default_device()'), PythonArgument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None), PythonArgument(name='requires_grad', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='False', default_init=None)), method=False), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False),
overload_name=''), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=TensorOptionsArguments(dtype=Argument(name='dtype', type=OptionalType(elem=BaseType(name=<BaseTy.ScalarType: 2>)), default='None', annotation=None), layout=Argument(name='layout', type=OptionalType(elem=BaseType(name=<BaseTy.Layout: 10>)), default='None', annotation=None), device=Argument(name='device', type=OptionalType(elem=BaseType(name=<BaseTy.Device: 11>)), default='None', annotation=None), pin_memory=Argument(name='pin_memory', type=OptionalType(elem=BaseType(name=<BaseTy.bool: 9>)), default='None', annotation=None)), post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4270), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}), outplace=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='rand', inplace=False, dunder_method=False, functional_overload=False),
overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=None, post_self_positional=(Argument(name='size', type=ListType(elem=BaseType(name=<BaseTy.SymInt: 17>), size=None), default=None, annotation=None),), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.ExactSame: 1>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=4280), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=None, structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=True, has_composite_explicit_autograd_non_functional_kernel=False, tags={'nondeterministic_seeded'}))
Above are the four PythonSignatureGroup elements. Looking at the first one, the overload_name of its base member's func is generator_with_names, while the overload_name listed under autogen is generator_with_names_out. For the second element the two are generator and generator_out; for the third, names and names_out; for the fourth, the empty string and out.
So here we recover the eight rand-related functions.
PythonSignatureGroup(signature=PythonSignature(name='add', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None)), input_kwargs=(PythonArgument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', default_init=None),), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(), method=False), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>, <Variant.method: 2>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=497), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise', 'canonical'}), outplace=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, 
python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=520), autogen=[], ufunc_inner_loop={<UfuncKey.Generic: 7>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7910>, ufunc_key=<UfuncKey.Generic: 7>), <UfuncKey.ScalarOnly: 6>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7b80>, ufunc_key=<UfuncKey.ScalarOnly: 6>)}, structured=True, structured_delegate=None, structured_inherits='TensorIteratorBase', precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise'}))
PythonSignatureGroup(signature=PythonSignatureDeprecated(name='add', input_args=(PythonArgument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None), PythonArgument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default=None, default_init=None), PythonArgument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None)), input_kwargs=(), output_args=PythonOutArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default='None', default_init=None, outputs=(PythonArgument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, default_init=None),)), returns=PythonReturns(returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), tensor_options_args=(), method=False, deprecated_schema=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name=''), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default=None, annotation=None), Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), pre_tensor_options_kwarg_only=(), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), deprecated_args_exprs=('out', 'self', 'other', 'alpha')), base=NativeFunction(namespace='aten', func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='Tensor'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=()), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=None),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>, <Variant.method: 2>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=497), autogen=[], ufunc_inner_loop={}, structured=False, structured_delegate=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), structured_inherits=None, precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise', 'canonical'}), outplace=NativeFunction(namespace='aten', 
func=FunctionSchema(name=OperatorName(name=BaseOperatorName(base='add', inplace=False, dunder_method=False, functional_overload=False), overload_name='out'), arguments=Arguments(pre_self_positional=(), self_arg=SelfArgument(argument=Argument(name='self', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None)), post_self_positional=(Argument(name='other', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=None),), pre_tensor_options_kwarg_only=(Argument(name='alpha', type=BaseType(name=<BaseTy.Scalar: 12>), default='1', annotation=None),), tensor_options=None, post_tensor_options_kwarg_only=(), out=(Argument(name='out', type=BaseType(name=<BaseTy.Tensor: 3>), default=None, annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), returns=(Return(name=None, type=BaseType(name=<BaseTy.Tensor: 3>), annotation=Annotation(alias_set=('a',), is_write=True, alias_set_after=())),)), use_const_ref_for_mutable_tensors=False, device_guard=True, device_check=<DeviceCheckType.NoCheck: 0>, python_module=None, category_override=None, variants={<Variant.function: 1>}, manual_kernel_registration=False, manual_cpp_binding=False, loc=Location(file='aten/src/ATen/native/native_functions.yaml', line=520), autogen=[], ufunc_inner_loop={<UfuncKey.Generic: 7>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7910>, ufunc_key=<UfuncKey.Generic: 7>), <UfuncKey.ScalarOnly: 6>: UfuncInnerLoop(name='add', supported_dtypes=<torchgen.utils.OrderedSet object at 0x7f600cff7b80>, ufunc_key=<UfuncKey.ScalarOnly: 6>)}, structured=True, structured_delegate=None, structured_inherits='TensorIteratorBase', precomputed=None, cpp_no_default_args=set(), is_abstract=True, has_composite_implicit_autograd_kernel=False, has_composite_implicit_autograd_nested_tensor_kernel=False, has_composite_explicit_autograd_kernel=False, has_composite_explicit_autograd_non_functional_kernel=False, tags={'pointwise'}))
for group in sorted(sig_groups, key=lambda g: g.signature.name):
    name = group.signature.name
    unsorted_function_hints[name] += generate_type_hints(group)

    named_tuple = returns_named_tuple_pyi(group.signature)
    if named_tuple is not None and not group.signature.deprecated:
        # deprecated namedtuples are currently not included for torch functions
        tuple_name, tuple_def = named_tuple
        if tuple_name in namedtuples:
            assert namedtuples[tuple_name] == tuple_def
        else:
            namedtuples[tuple_name] = tuple_def
unsorted_function_hints is a defaultdict whose keys are function names and whose values are lists of strings.
The entry representing the rand function is:
'rand': ['def rand(size: Sequence[Union[_int, SymInt]], *, generator: Optional[Generator], names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(*size: _int, generator: Optional[Generator], names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(size: Sequence[Union[_int, SymInt]], *, generator: Optional[Generator], out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(*size: _int, generator: Optional[Generator], out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(size: Sequence[Union[_int, SymInt]], *, names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(*size: _int, names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(size: Sequence[Union[_int, SymInt]], *, out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...', 'def rand(*size: _int, out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...']
These are the eight overloads of the rand function. They fall into four groups: with both generator and names parameters, with only generator, with only names, and with neither. Each group further splits into a variant whose size parameter is a Sequence and a variant taking variadic ints. At this point they already correspond one-to-one with torch/_C/_VariableFunctions.pyi.
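As a quick sanity check of how these overloads behave at call sites (illustrative calls only):

import torch

g = torch.Generator().manual_seed(0)
out = torch.empty(2, 3)

a = torch.rand((2, 3))              # matches: def rand(size: Sequence[...], ...)
b = torch.rand(2, 3)                # matches: def rand(*size: _int, ...)
c = torch.rand((2, 3), out=out)     # matches an out= overload
d = torch.rand(2, 3, generator=g)   # matches a generator overload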
Under the key add, the value list has three elements. The first corresponds to the merged signature of add.Tensor and add.out (out is folded in as an optional keyword argument); the other two come from the deprecated signatures (with alpha as a positional argument), without and with out respectively:
'def add(input: Union[Tensor, Number], other: Union[Tensor, Number], *, alpha: Optional[Number]=1, out: Optional[Tensor]=None) -> Tensor: ...'
'def add(self: Tensor, alpha: Number, other: Tensor) -> Tensor: ...'
'def add(self: Tensor, alpha: Number, other: Tensor, *, out: Tensor) -> Tensor: ...'
function_hints = []
for name, hints in sorted(unsorted_function_hints.items()):
    if len(hints) > 1:
        hints = ["@overload\n" + h for h in hints]
    function_hints += hints
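To see what this loop does, here is a toy run with made-up hints (hypothetical input, not the real gen_pyi data):

from collections import defaultdict

unsorted_function_hints = defaultdict(list)
unsorted_function_hints["rand"] = ["def rand(size) -> Tensor: ...",
                                   "def rand(*size) -> Tensor: ..."]
unsorted_function_hints["numel"] = ["def numel(input) -> _int: ..."]

function_hints = []
for name, hints in sorted(unsorted_function_hints.items()):
    if len(hints) > 1:
        # only functions with multiple signatures get the @overload decorator
        hints = ["@overload\n" + h for h in hints]
    function_hints += hints

print(function_hints)
# ['def numel(input) -> _int: ...',
#  '@overload\ndef rand(size) -> Tensor: ...',
#  '@overload\ndef rand(*size) -> Tensor: ...']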
function_hints is a list of strings:
['@overload\ndef __and_...ensor: ...', '@overload\ndef __and_...ensor: ...', '@overload\ndef __...sor: ...', '@overload\ndef __...sor: ...', '@overload\ndef __or__...ensor: ...', '@overload\ndef __or__...ensor: ...', '@overload\ndef __...sor: ...', '@overload\ndef __...sor: ...', '@overload\ndef __xor_...ensor: ...', '@overload\ndef __xor_...ensor: ...', 'def _adaptive_...sor: ...', 'def _adaptive_...sor: ...', 'def _add_batch_...sor: ...', '@overload\ndef _...sor: ...', ...]
Its zeroth element is:
'@overload\ndef __and__(input: Tensor, other: Tensor) -> Tensor: ...'
The eight elements for the rand function are listed below. They are essentially the same as those in unsorted_function_hints; the only difference is the '@overload\n' prefix.
'@overload\ndef rand(size: Sequence[Union[_int, SymInt]], *, generator: Optional[Generator], names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(*size: _int, generator: Optional[Generator], names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(size: Sequence[Union[_int, SymInt]], *, generator: Optional[Generator], out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(*size: _int, generator: Optional[Generator], out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(size: Sequence[Union[_int, SymInt]], *, names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(*size: _int, names: Optional[Sequence[Union[str, ellipsis, None]]], dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(size: Sequence[Union[_int, SymInt]], *, out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
'@overload\ndef rand(*size: _int, out: Optional[Tensor]=None, dtype: Optional[_dtype]=None, layout: Optional[_layout]=None, device: Optional[Union[_device, str, None]]=None, pin_memory: Optional[_bool]=False, requires_grad: Optional[_bool]=False) -> Tensor: ...'
The three elements for the add function are:
'@overload\ndef add(input: Union[Tensor, Number], other: Union[Tensor, Number], *, alpha: Optional[Number]=1, out: Optional[Tensor]=None) -> Tensor: ...'
'@overload\ndef add(self: Tensor, alpha: Number, other: Tensor) -> Tensor: ...'
'@overload\ndef add(self: Tensor, alpha: Number, other: Tensor, *, out: Tensor) -> Tensor: ...'
# Generate __all__ directive
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

# Include only the functions that contain hints, to prevent undefined
# symbols to be included in the `__all__` directive.
hinted_function_names = [
    name for name, hint in unsorted_function_hints.items() if hint
]
hinted_function_names is a list of strings; it is simply the list of function names that have hints:
['sparse_csr_tensor', '_sparse_csr_tensor_unsafe', 'sparse_csc_tensor', '_sparse_csc_tensor_unsafe', 'sparse_bsr_tensor', '_sparse_bsr_tensor_unsafe', 'sparse_bsc_tensor', '_sparse_bsc_tensor_unsafe', 'set_flush_denormal', 'get_default_dtype', 'asarray', 'from_numpy', 'frombuffer', 'numel', ...]
These include 'rand', 'rand_like', 'randint_like', 'randn', 'randn_like', 'randperm', as well as 'add'.
all_symbols = sorted(list(namedtuples.keys()) + hinted_function_names)
all_symbols looks like this:
['__and__', '__lshift__', '__or__', '__rshift__', '__xor__', '_adaptive_avg_pool2d', '_adaptive_avg_pool3d', '_add_batch_dim', '_add_relu', '_add_relu_', '_addmm_activation', '_aminmax', '_amp_foreach_non_finite_check_and_unscale_', '_amp_update_scale_', ...]
Again, this includes 'rand', 'rand_like', 'randint_like', 'randn', 'randn_like', 'randperm', as well as 'add'.
Next, all_symbols is converted to a string with pformat, split on '\n' into multiple lines, and assembled into a list of strings, all_directive:
all_directive = pformat(all_symbols, width=100, compact=True).split("\n")
all_directive[0] = "__all__ = {}".format(all_directive[0])
Its zeroth element is:
"__all__ = ['__and__', '__lshift__', '__or__', '__rshift__', '__xor__', '_adaptive_avg_pool2d',"
The element containing add is:
" 'adaptive_max_pool1d', 'add', 'addbmm', 'addcdiv', 'addcmul', 'addmm', 'addmv', 'addmv_', 'addr',"
The element containing rand is:
" 'rad2deg_', 'rand', 'rand_like', 'randint', 'randint_like', 'randn', 'randn_like', 'randperm',"
The last element is:
" 'vsplit', 'vstack', 'where', 'xlogy', 'xlogy_', 'zero_', 'zeros', 'zeros_like']"
At this point we have function_hints and all_directive; together with several other variables they make up env:
env = {
    "namedtuple_defs": namedtuple_defs,
    "function_hints": function_hints,
    "tensor_method_hints": tensor_method_hints,
    "legacy_class_hints": legacy_class_hints,
    "legacy_storage_base_hints": legacy_storage_base_hints,
    "dtype_class_hints": dtype_class_hints,
    "dispatch_key_hints": dispatch_key_hints,
    "all_directive": all_directive,
}
After evaluation, env looks like:
{"namedtuple_defs":["_fake_quantize_ Tensor)])","_fused_moving_ Tensor)])","_linalg_det = Tensor)])","_linalg_eigh = Tensor)])","_linalg_slogdet = Na... Tensor)])","_linalg_solve_ex = N... Tensor)])","_linalg_svd = Tensor)])","_lu_with_info = Tensor)])","_unpack_dual = Tensor)])","..."],"function_hints":["@overloadndef __and_...ensor: ...","@overloadndef __and_...ensor: ...","@overloadndef __sor: ...","@overloadndef __sor: ...","@overloadndef __or__...ensor: ...","@overloadndef __or__...ensor: ...","@overloadndef __sor: ...","@overloadndef __sor: ...","@overloadndef __xor_...ensor: ...","..."],"tensor_method_hints":["def __abs__(self) ->...ensor: ...","def __add__(self, ot...ensor: ...","@overloadndef __and_...ensor: ...","@overloadndef __and_...ensor: ...","@overloadndef __and_...ensor: ...","def __bool__(self) -....bool: ...","def __complex__(plex: ...","def __div__(self, ot...ensor: ...","def __eq__(self, [override]","..."],"legacy_class_hints":["class sor): ...","class FloatTensor(Tensor): ...","class LongTensor(Tensor): ...","class IntTensor(Tensor): ...","class ShortTensor(Tensor): ...","class HalfTensor(Tensor): ...","class CharTensor(Tensor): ...","class ByteTensor(Tensor): ...","class BoolTensor(Tensor): ..."],"legacy_storage_base_hints":["class StorageBase(object): ..."],"dtype_class_hints":["float32: dtype = ...","float: dtype = ...","float64: dtype = ...","double: dtype = ...","float16: dtype = ...","bfloat16: dtype = ...","half: dtype = ...","uint8: dtype = ...","int8: dtype = ...","..."],"dispatch_key_hints":["Undefined: DispatchKey = ...","FPGA: DispatchKey = ...","ORT: DispatchKey = ...","Vulkan: DispatchKey = ...","Metal: DispatchKey = ...","MKLDNN: DispatchKey = ...","OpenGL: DispatchKey = ...","OpenCL: DispatchKey = ...","IDEEP: DispatchKey = ...","..."],"all_directive":["__all__ = ['__and__...,"," ...,"," '_aminmax', ...,"," ...,"," '_cast_Float', ...,"," ...,"," ...,"," ...,"," '_,","..."]
}
env is then passed to FileManager's member function write_with_template:
# ...
fm.write_with_template(
    "torch/_C/__init__.pyi",
    "torch/_C/__init__.pyi.in",
    lambda: {
        "generated_comment": "@" + "generated from torch/_C/__init__.pyi.in",
        **env,
    },
)
fm.write_with_template(
    "torch/_C/_VariableFunctions.pyi",
    "torch/_C/_VariableFunctions.pyi.in",
    lambda: {
        "generated_comment": "@"
        + "generated from torch/_C/_VariableFunctions.pyi.in",
        **env,
    },
)
fm.write_with_template(
    "torch/_VF.pyi",
    "torch/_C/_VariableFunctions.pyi.in",
    lambda: {
        "generated_comment": "@"
        + "generated from torch/_C/_VariableFunctions.pyi.in",
        **env,
    },
)
fm.write_with_template(
    "torch/return_types.pyi",
    "torch/_C/return_types.pyi.in",
    lambda: {
        "generated_comment": "@" + "generated from torch/_C/return_types.pyi",
        **env,
    },
)
gen_nn_functional(fm)
As we can see, this code calls FileManager's write_with_template as well as gen_nn_functional; both are examined below, after a few notes on the Python idioms used here.
Per the Merging Dictionaries section of Unpacking Operators in Python, a literal such as {"a": 1, **my_dict} first unpacks my_dict and then combines those entries with "a": 1 to form a new dictionary.
The form lambda: {...} denotes a lambda that takes no arguments and returns a dictionary.
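A minimal illustration of both idioms together (toy values, not the real env):

env = {"function_hints": ["def rand(size) -> Tensor: ..."]}

# a zero-argument lambda whose body unpacks env into a new dict
make_env = lambda: {
    "generated_comment": "@" + "generated from some/template.pyi.in",
    **env,  # unpack all of env's key/value pairs into the new dict
}

print(sorted(make_env().keys()))
# ['function_hints', 'generated_comment']

Incidentally, spelling the marker as "@" + "generated" rather than a literal "@generated" presumably keeps gen_pyi.py itself from being flagged by tools that scan files for the @generated marker.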
Note the trailing comma after the last argument in these write_with_template calls. According to "Should I add a trailing comma after the last argument in a function call? [closed]", when a call's arguments are written across multiple lines, the recommended style is to add a trailing comma after the last one.
Recall from the beginning that six pyi.in files each yield a corresponding pyi file. The code above renders four pyi files here (torch/_VF.pyi reuses the _VariableFunctions template); two more (functional.pyi and _nn.pyi) are generated inside gen_nn_functional by calling FileManager.write_with_template.
The FileManager.write_with_template function generates a pyi file from a template, applying the substitutions specified by the replacement function. It is covered in its own article; see FileManager.write_with_template in "PyTorch檔案生成機制中的FileManager.write_with_template".
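Although the details are in the linked article, the core idea can be sketched in a few lines (my simplification, assuming ${key} placeholders and that list values are joined line by line; the real torchgen implementation is more elaborate):

import re

def render(template: str, env: dict) -> str:
    # replace each ${key} placeholder; list values become one line per item
    def substitute(match):
        value = env[match.group(1)]
        return "\n".join(value) if isinstance(value, list) else str(value)
    return re.sub(r"\$\{(\w+)\}", substitute, template)

template = "# ${generated_comment}\n\n${function_hints}\n"
env = {
    "generated_comment": "@generated from torch/_C/_VariableFunctions.pyi.in",
    "function_hints": ["@overload", "def __and__(input: Tensor, other: Tensor) -> Tensor: ..."],
}
print(render(template, env))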
The gen_nn_functional function, also in tools/pyi/gen_pyi.py, generates torch/nn/functional.pyi and torch/_C/_nn.pyi from torch/nn/functional.pyi.in and torch/_C/_nn.pyi.in respectively.
def gen_nn_functional(fm: FileManager) -> None:
    # Functions imported into `functional` from `torch`, perhaps being filtered
    # through an `_add_docstr` call
    imports = [
        "conv1d",
        "conv2d",
        "conv3d",
        "conv_transpose1d",
        "conv_transpose2d",
        "conv_transpose3d",
        "conv_tbc",
        "avg_pool1d",
        "relu_",
        "selu_",
        "celu_",
        "rrelu_",
        "pixel_shuffle",
        "pixel_unshuffle",
        "channel_shuffle",
        "native_channel_shuffle",
        "pdist",
        "cosine_similarity",
    ]
    # Functions generated by `torch._jit_internal.boolean_dispatch`
    dispatches = [
        "fractional_max_pool2d",
        "fractional_max_pool3d",
        "max_pool1d",
        "max_pool2d",
        "max_pool3d",
        "adaptive_max_pool1d",
        "adaptive_max_pool2d",
        "adaptive_max_pool3d",
    ]
    # Functions directly imported from `torch._C`
    from_c = [
        "avg_pool2d",
        "avg_pool3d",
        "hardtanh_",
        "elu_",
        "leaky_relu_",
        "logsigmoid",
        "softplus",
        "softshrink",
        "one_hot",
    ]
    import_code = ["from .. import {0} as {0}".format(_) for _ in imports]
    # TODO make these types more precise
    dispatch_code = ["{}: Callable".format(_) for _ in (dispatches + from_c)]
    fm.write_with_template(
        "torch/nn/functional.pyi",
        "torch/nn/functional.pyi.in",
        lambda: {
            "imported_hints": import_code,
            "dispatched_hints": dispatch_code,
        },
    )

    # functional.pyi already contains the definitions for those functions
    # so, we don't export them to it
    from_c.extend(["hardtanh", "leaky_relu", "hardsigmoid"])
    dispatch_code = ["{}: Callable".format(_) for _ in (dispatches + from_c)]
    fm.write_with_template(
        "torch/_C/_nn.pyi",
        "torch/_C/_nn.pyi.in",
        lambda: {
            "imported_hints": import_code,
            "dispatched_hints": dispatch_code,
        },
    )
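Plugging representative entries into the two format strings above shows what the generated hint lines look like:

>>> "from .. import {0} as {0}".format("conv1d")
'from .. import conv1d as conv1d'
>>> "{}: Callable".format("fractional_max_pool2d")
'fractional_max_pool2d: Callable'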
As shown, this function likewise ends up calling FileManager's write_with_template to generate the .pyi files.
Now let's return to torch/CMakeLists.txt:
file(GLOB_RECURSE datapipe_files "${TORCH_SRC_DIR}/utils/data/datapipes/*.py")
add_custom_command(
    OUTPUT
    "${TORCH_SRC_DIR}/utils/data/datapipes/datapipe.pyi"
    COMMAND
    "${PYTHON_EXECUTABLE}" ${TORCH_SRC_DIR}/utils/data/datapipes/gen_pyi.py
    DEPENDS
    "${TORCH_SRC_DIR}/utils/data/datapipes/datapipe.pyi.in"
    ${datapipe_files}
    WORKING_DIRECTORY
    "${TORCH_ROOT}"
)
datapipe.pyi is generated in a similar fashion: utils/data/datapipes/gen_pyi.py produces it from datapipe.pyi.in. Since all the datapipes *.py files appear under DEPENDS, CMake reruns the script whenever any of them changes.
The comment in torch/utils/data/datapipes/datapipe.pyi.in says:
# This base template ("datapipe.pyi.in") is generated from mypy stubgen with minimal editing for code injection
# The output file will be "datapipe.pyi". This is executed as part of torch/CMakeLists.txt
# Note that, for mypy, .pyi file takes precedent over .py file, such that we must define the interface for other
# classes/objects here, even though we are not injecting extra code into them at the moment.
Take torch/_C/_VariableFunctions.pyi.in as an example:
generated_comment
# ${generated_comment}
is replaced with:
# @generated from torch/_C/_VariableFunctions.pyi.in
function_hints
${function_hints}
is replaced with:
@overload
def __and__(input: Tensor, other: Tensor) -> Tensor: ...
# ...
def zeros_like(input: Tensor, *, memory_format: Optional[memory_format] = None, dtype: Optional[_dtype] = None, layout: Optional[_layout] = None, device: Optional[Union[_device, str, None]] = None, pin_memory: Optional[_bool] = False, requires_grad: Optional[_bool] = False) -> Tensor: ...
all_directive
${all_directive}
is replaced with:
__all__ = ['__and__', '__lshift__', '__or__', '__rshift__', '__xor__', '_adaptive_avg_pool2d',
# ...
'view_copy', 'vsplit', 'vstack', 'where', 'xlogy', 'xlogy_', 'zero_', 'zeros', 'zeros_like']
The remaining parts are identical to torch/_C/_VariableFunctions.pyi.in.
In torch/__init__.py there is the following passage:
# Appease the type checker: it can't deal with direct setting of globals().
# Note that we will see "too many" functions when reexporting this way; there
# is not a good way to fix this problem. Perhaps, try to redesign VariableFunctions
# so that this import is good enough
if TYPE_CHECKING:
    # Some type signatures pulled in from _VariableFunctions here clash with
    # signatures already imported. For now these clashes are ignored; see
    # PR #43339 for details.
    from torch._C._VariableFunctions import *  # type: ignore[misc] # noqa: F403
In other words, when type checking is enabled, everything from torch._C._VariableFunctions is imported; and torch._C._VariableFunctions is exactly the torch/_C/_VariableFunctions.pyi file we just examined.
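For readers unfamiliar with the idiom: TYPE_CHECKING is False at runtime and treated as True by type checkers, so such imports cost nothing when the program actually runs. A minimal illustration:

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # seen by mypy/pyright only; never executed at runtime
    from collections.abc import Sequence

def head(xs: "Sequence[int]") -> int:
    return xs[0]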
According to "pyi文件是干嘛的?(一文读懂Python的存根文件和类型检查)", when a py file and a pyi file share the same name and directory, the type checker picks up the stub automatically, with no explicit import required. Presumably the explicit import is needed here because the py file (torch/__init__.py) and the pyi file have different names?