Conversation

@marcoloco23

Checklist

  • One link per Pull Request
  • PR title format: Add project-name
  • Entry format: * [project-name](url) - Description ending with period.
  • Description is concise (no mention of "Python")

Why This Project Is Awesome

Which criterion does it meet? (pick one)

  • Industry Standard - The go-to tool for a specific use case
  • Rising Star - 5,000+ stars in <2 years, significant adoption
  • Hidden Gem - Exceptional quality, solves niche problems elegantly

Explain:

dimtensor is the only units library that provides native integration with PyTorch (autograd, GPU) and JAX (JIT, vmap, grad). It catches dimensional errors at operation time, preventing costly bugs in physics simulations and scientific ML. Built-in uncertainty propagation and support for 6+ I/O formats (HDF5, NetCDF, Parquet, etc.) make it production-ready for scientific workflows.
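To make "catches dimensional errors at operation time" and "built-in uncertainty propagation" concrete, here is a minimal, self-contained sketch of both ideas. The `Quantity` class, `DimensionError`, and every name below are illustrative assumptions for this discussion, not dimtensor's actual API:

```python
# A minimal sketch of operation-time dimensional checking plus first-order
# (uncorrelated) uncertainty propagation. `Quantity`, `DimensionError`, and
# all other names here are illustrative assumptions, NOT dimtensor's API.
import math

class DimensionError(TypeError):
    pass

class Quantity:
    def __init__(self, value, unit, sigma=0.0):
        self.value, self.unit, self.sigma = value, unit, sigma

    def __add__(self, other):
        # The dimensional check happens at operation time, so a unit
        # mismatch fails immediately instead of corrupting results downstream.
        if self.unit != other.unit:
            raise DimensionError(f"cannot add {self.unit} to {other.unit}")
        # Uncorrelated absolute uncertainties add in quadrature.
        sigma = math.hypot(self.sigma, other.sigma)
        return Quantity(self.value + other.value, self.unit, sigma)

    def __mul__(self, other):
        value = self.value * other.value
        # For a product, relative uncertainties add in quadrature.
        rel = math.hypot(self.sigma / self.value, other.sigma / other.value)
        return Quantity(value, f"{self.unit}*{other.unit}", abs(value) * rel)

    def __repr__(self):
        return f"{self.value} ± {self.sigma:.3g} {self.unit}"

a = Quantity(3.0, "m", sigma=0.1)
b = Quantity(4.0, "m", sigma=0.2)
print(a + b)                      # 7.0 ± 0.224 m
try:
    a + Quantity(2.0, "s")
except DimensionError as exc:
    print(exc)                    # cannot add m to s
```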

How It Differs

Unlike Pint or Astropy units, dimtensor:

  • Has native PyTorch autograd support (gradients flow through unit-aware operations)
  • Works with JAX transformations (jit, vmap, grad) via pytree registration (see the sketch after this list)
  • Supports GPU acceleration (CUDA, MPS) while preserving units
  • Includes built-in uncertainty propagation through all operations

It fills a gap for ML researchers and physicists who need dimensional safety in their PyTorch/JAX workflows.
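A rough sketch of the pytree mechanism referenced above, assuming a hypothetical `UnitArray` wrapper. The `jax.tree_util` registration calls are real JAX; the wrapper itself is an assumption about how such a library could be structured, not dimtensor's actual API:

```python
# Sketch: registering a unit-tagged container as a JAX pytree so that
# jit/vmap/grad trace through it. `UnitArray` is hypothetical.
import jax
import jax.numpy as jnp
from jax.tree_util import register_pytree_node

class UnitArray:
    def __init__(self, data, unit):
        self.data = data   # numeric payload, traced by JAX
        self.unit = unit   # static metadata, carried as aux_data

    def __repr__(self):
        return f"UnitArray({self.data}, unit={self.unit!r})"

# Flatten exposes the traced children; the unit string rides along untraced.
def _flatten(ua):
    return (ua.data,), ua.unit

def _unflatten(unit, children):
    return UnitArray(children[0], unit)

register_pytree_node(UnitArray, _flatten, _unflatten)

@jax.jit
def double(ua):
    return UnitArray(ua.data * 2.0, ua.unit)

x = UnitArray(jnp.array([1.0, 2.0]), "m/s")
print(double(x))  # compiled under jit, unit preserved

# grad over a pytree input returns gradients in the same UnitArray structure.
def energy_like(ua):
    return jnp.sum(ua.data ** 2)

print(jax.grad(energy_like)(x))  # UnitArray([2. 4.], unit='m/s')
```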

@vinta
Owner

vinta commented Jan 9, 2026

Thanks for submitting dimtensor! While the concept of unit-aware tensors for PyTorch/JAX sounds promising, we have a few concerns:

  • The repository was created yesterday and has 0 stars, making it difficult to assess real-world adoption or quality
  • "Hidden Gem" submissions require demonstrated exceptional quality, which typically needs time to establish

We encourage you to resubmit once the project has matured and gained some community validation. Best of luck with the project!

@vinta vinta closed this Jan 9, 2026