fix: downgrade PyTorch CUDA version from cu129 to cu126 #407
ryan-steed-usa wants to merge 1 commit into remsky:master
Conversation
- Updated the GPU dependency from `torch==2.8.0+cu129` to `torch==2.8.0+cu126` in `pyproject.toml`
- Changed the PyTorch CUDA index URL from `https://download.pytorch.org/whl/cu129` to `https://download.pytorch.org/whl/cu126`
- This ensures compatibility with the CUDA 12.6 runtime while keeping the same PyTorch version (2.8.0)
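A sketch of what the resulting fragment of `pyproject.toml` might look like. The section names below (`[project.optional-dependencies]`, `[[tool.uv.index]]`) are assumptions about the project's layout and tooling, not a copy of the actual file:

```toml
# Assumed layout: GPU extra pinned to the cu126 wheel build.
[project.optional-dependencies]
gpu = [
    "torch==2.8.0+cu126",
]

# Assumed uv-style index configuration pointing at the cu126 wheel index.
[[tool.uv.index]]
name = "pytorch-cuda"
url = "https://download.pytorch.org/whl/cu126"
explicit = true
```

The `+cu126` local version suffix and the matching index URL must agree, since the PyTorch wheel indexes are separated by CUDA build.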
Closes #406
Hey @ryan-steed-usa, is this still a draft?
Hi @remsky, I was hoping for confirmation from a Maxwell or Pascal CUDA user, but everything seems to work containerized with my Ada Lovelace GPUs. Otherwise I think it's ready to go.
Pascal user here. I can confirm my container crashes on
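For context, PyTorch wheels are compiled for a fixed set of GPU architectures, and the discussion here suggests the cu129 builds no longer cover Maxwell/Pascal (sm_5x/sm_6x). A minimal sketch of the compatibility check; the arch lists below are illustrative assumptions, not the real wheel contents (query the real values with `torch.cuda.get_arch_list()` and `torch.cuda.get_device_capability()`):

```python
def is_supported(capability, arch_list):
    """Return True if a GPU's (major, minor) compute capability appears
    in a wheel's compiled architecture list (e.g. ['sm_61', 'sm_89'])."""
    sm = f"sm_{capability[0]}{capability[1]}"
    return sm in arch_list

# Assumed arch lists for illustration only -- check torch.cuda.get_arch_list()
# on each wheel build for the real values.
CU126_ARCHS = ["sm_50", "sm_60", "sm_61", "sm_70", "sm_75",
               "sm_80", "sm_86", "sm_89", "sm_90"]
CU129_ARCHS = ["sm_70", "sm_75", "sm_80", "sm_86", "sm_89",
               "sm_90", "sm_100", "sm_120"]

# Pascal (e.g. GTX 10xx) is compute capability 6.1; Ada Lovelace is 8.9.
print(is_supported((6, 1), CU126_ARCHS))  # True under these assumed lists
print(is_supported((6, 1), CU129_ARCHS))  # False under these assumed lists
print(is_supported((8, 9), CU129_ARCHS))  # True under these assumed lists
```

This would explain why Ada Lovelace containers work on either build while Pascal containers crash on cu129.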
Thanks for the feedback.
I agree, unless @remsky prefers to maintain a unified image, in which case this workaround should accommodate everyone (for a while, anyway). If we want to maintain a separate tag, we might also consider downgrading the entire base image.
That's a great idea. I have an optimization to the build stages on the nvidia image that I was planning to push; I can take a look at tagging by torch versions and roll this in.
This change might restore support for Maxwell and Pascal architectures.