[easy] Invalid call to aoti_torch_tensor_copy_ #123039
Labels
module: aotinductor (aot inductor)
triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Comments
desertfire added the module: aotinductor and triaged labels on Mar 30, 2024
desertfire changed the title from [easy] test_multi_device_cuda to [easy] Invalid call to aoti_torch_tensor_copy_ on Apr 3, 2024
trieuat added three commits to trieuat/pytorch that referenced this issue on Apr 15, 2024
pytorchmergebot added a commit that referenced this issue on Apr 22, 2024:
This reverts commit 6e24cc0. Reverted #124037 on behalf of https://github.com/jeanschmidt due to a regression in pull / linux-focal-cuda12.1-py3.10-gcc9 / test (default, 3, 5, linux.4xlarge.nvidia.gpu).
pytorchmergebot pushed a commit that referenced this issue on Apr 26, 2024:
Fixes #123039. In ABI mode, ExternKernelSchedulerNode generates code that calls `aoti_torch_tensor_copy_`, which requires an `AtenTensorHandle`, but the allocation path produces an `ArrayRefTensor` whose memory lives on the stack. To fix this, the PR prevents ExternKernelSchedulerNode from using stack memory allocation in ABI mode and creates an `AtenTensorHandle` instead. Pull Request resolved: #124037. Approved by: https://github.com/desertfire
pytorchmergebot added a commit that referenced this issue on Apr 26, 2024:
This reverts commit f9379eb. Reverted #124037 on behalf of https://github.com/jeanschmidt due to regressions in benchmark; see D56623194 for details.
carmocca, andoorve, and petrex pushed cherry-picks of pytorch#124037 and its reverts to their forks between Apr 29 and May 3, 2024, and pytorch-bot pushed a commit referencing this issue on May 3, 2024; their commit messages duplicate the fix and revert messages quoted earlier in the timeline.
Repro:
Comment out pytorch/test/inductor/test_cuda_cpp_wrapper.py, line 111 (at commit e203aa9).
Error: