Add unified docker #9
Conversation
Pull Request Overview
This PR introduces a unified Docker architecture for the Quantum Computing 101 project, consolidating multiple separate Dockerfiles and build scripts into a single, maintainable solution while updating the requirements structure for better organization.
- Unified multi-stage Dockerfile supporting CPU, NVIDIA GPU, and AMD GPU variants (see the sketch after this list)
- Consolidated build system with single build script replacing multiple variant-specific scripts
- Reorganized requirements files with clearer documentation and improved dependency management
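As a rough illustration of what such a variant-selecting multi-stage layout can look like, here is a minimal sketch; the stage names, base-image tags, and paths are assumptions for illustration, not the actual contents of docker/Dockerfile:

```dockerfile
# Sketch only: stage names, base images, and paths are illustrative.
ARG VARIANT=cpu

# CPU baseline with the shared Python requirements.
FROM python:3.11-slim AS base-cpu
COPY docker/requirements/base.txt /tmp/requirements/base.txt
RUN pip install --no-cache-dir -r /tmp/requirements/base.txt

# NVIDIA variant on a CUDA runtime image (tag is an assumption).
FROM nvidia/cuda:12.9.0-runtime-ubuntu22.04 AS base-nvidia
# ... CUDA-specific Python and GPU-enabled simulator installs ...

# AMD variant on a ROCm image (tag is an assumption).
FROM rocm/dev-ubuntu-22.04 AS base-amd
# ... ROCm-specific PyTorch installs ...

# Select the final stage at build time, e.g.:
#   docker build --build-arg VARIANT=nvidia -f docker/Dockerfile .
FROM base-${VARIANT} AS final
COPY docker/entrypoint.sh /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
```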
Reviewed Changes
Copilot reviewed 18 out of 18 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| setup.py | Added filtering for -r requirements include directives |
| examples/requirements*.txt | Enhanced documentation headers explaining purpose and usage context |
| docker/Dockerfile | New unified multi-stage Dockerfile replacing 4 separate variant files |
| docker/build-unified.sh | New consolidated build script with improved hardware detection |
| docker/entrypoint.sh | New unified entrypoint script for all container variants |
| docker/requirements/*.txt | Updated requirement files with better documentation and structure |
| docker/README.md | Comprehensive documentation rewrite covering new unified architecture |
| Legacy files | Removed old Dockerfiles and build scripts (consolidated into unified approach) |
```diff
 def read_requirements(filename):
     with open(os.path.join("examples", filename), "r", encoding="utf-8") as fh:
-        return [line.strip() for line in fh if line.strip() and not line.startswith("#")]
+        return [
+            line.strip()
+            for line in fh
+            if line.strip()
+            and not line.startswith("#")
+            and not line.startswith("-r")
+        ]
```
Copilot AI (Oct 1, 2025):
The filtering logic excludes -r (include) directives but doesn't handle requirements files that might legitimately use includes. Consider adding a comment explaining why -r directives are filtered out, or make this configurable if the filtering behavior should be conditional.
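A small sketch of how that could look in setup.py; the skip_includes flag and the docstring rationale are illustrative additions, not the project's current code:

```python
import os


def read_requirements(filename, skip_includes=True):
    """Read a requirements file from examples/, dropping blanks and comments.

    ``-r`` include directives are filtered out because raw include lines are
    not valid requirement strings for setuptools' install_requires. The
    ``skip_includes`` flag shown here is hypothetical; it makes the behaviour
    explicit and easy to turn off if includes ever need to pass through.
    """
    with open(os.path.join("examples", filename), "r", encoding="utf-8") as fh:
        return [
            line.strip()
            for line in fh
            if line.strip()
            and not line.startswith("#")
            and not (skip_includes and line.strip().startswith("-r"))
        ]
```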
```bash
VARIANT="${1:-cpu}"
NO_CACHE="${2:-false}"
PYTORCH_VERSION="2.8.0"
CUDA_VERSION="12.9"
CUDNN_VERSION="9"
```
Copilot AI (Oct 1, 2025):
The script relies on positional parameter ordering which can be fragile. Consider using proper argument parsing (getopts) to handle options like --no-cache more robustly, especially since the usage examples show --no-cache as a flag rather than a positional argument.
Suggested change:

```diff
-VARIANT="${1:-cpu}"
-NO_CACHE="${2:-false}"
+VARIANT="cpu"
+NO_CACHE="false"
 PYTORCH_VERSION="2.8.0"
 CUDA_VERSION="12.9"
 CUDNN_VERSION="9"
+
+# -----------------------------------------------------------------------------
+# Argument parsing
+# -----------------------------------------------------------------------------
+while [[ $# -gt 0 ]]; do
+    case "$1" in
+        --no-cache)
+            NO_CACHE="true"
+            shift
+            ;;
+        cpu|nvidia|amd)
+            VARIANT="$1"
+            shift
+            ;;
+        -*)
+            print_warning "Unknown option: $1"
+            shift
+            ;;
+        *)
+            print_warning "Unknown argument: $1"
+            shift
+            ;;
+    esac
+done
```
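With name-based parsing like the suggestion above, invocation order stops mattering; for example (flag names as suggested, script path as introduced in this PR):

```bash
./docker/build-unified.sh nvidia --no-cache   # GPU build, cache disabled
./docker/build-unified.sh --no-cache nvidia   # same result, different order
./docker/build-unified.sh                     # defaults: VARIANT=cpu, NO_CACHE=false
```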
```dockerfile
RUN sed '/^qiskit-aer/d' /tmp/requirements/base.txt > /tmp/requirements/base-no-aer.txt && \
    pip install -r /tmp/requirements/base-no-aer.txt
```
Copilot AI (Oct 1, 2025):
The sed command to exclude qiskit-aer from base requirements is duplicated between the NVIDIA and AMD stages. Consider extracting this logic to a shared stage or create a separate base-no-aer.txt file to reduce code duplication.
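One possible shape for that, shown here only as a sketch; the stage names and base-image tags are assumptions, and pip availability in the GPU stage is assumed:

```dockerfile
# Strip qiskit-aer from the base requirements once, in a small prep stage.
FROM python:3.11-slim AS requirements-prep
COPY docker/requirements/base.txt /tmp/requirements/base.txt
RUN sed '/^qiskit-aer/d' /tmp/requirements/base.txt > /tmp/requirements/base-no-aer.txt

# GPU stages copy the pre-filtered file instead of repeating the sed call.
FROM nvidia/cuda:12.9.0-runtime-ubuntu22.04 AS nvidia
COPY --from=requirements-prep /tmp/requirements/base-no-aer.txt /tmp/requirements/base-no-aer.txt
# pip being present in this stage is an assumption of the sketch
RUN pip install -r /tmp/requirements/base-no-aer.txt
```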
```bash
# Detect variant
if command -v nvidia-smi &> /dev/null; then
    echo "Platform: NVIDIA CUDA GPU"
    echo "CUDA Version: $(nvcc --version | grep "release" | awk '{print $6}' | cut -c2- || echo 'N/A')"
```
Copilot AI (Oct 1, 2025):
The CUDA version extraction command is complex and fragile. Consider using nvidia-smi --query-gpu=driver_version --format=csv,noheader,nounits or a more robust parsing method to extract version information.
Suggested change:

```diff
-echo "CUDA Version: $(nvcc --version | grep "release" | awk '{print $6}' | cut -c2- || echo 'N/A')"
+echo "CUDA Version: $(nvidia-smi --query-gpu=driver_version --format=csv,noheader,nounits 2>/dev/null | head -n1 || nvcc --version | grep "release" | awk '{print $6}' | cut -c2- || echo 'N/A')"
```
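Note that nvidia-smi's driver_version field reports the NVIDIA driver version rather than the CUDA toolkit release; if nvcc is present in the image, a single sed expression is another fairly robust option. A minimal sketch (the CUDA_VER variable name is illustrative):

```bash
# Pull the release number out of "Cuda compilation tools, release X.Y, ..."
CUDA_VER="$(nvcc --version 2>/dev/null | sed -n 's/.*release \([0-9][0-9.]*\).*/\1/p')"
echo "CUDA Version: ${CUDA_VER:-N/A}"
```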
```bash
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader 2>/dev/null || echo "GPU info not available"

# Check PyTorch CUDA
python -c "import torch; print(f'PyTorch CUDA Available: {torch.cuda.is_available()}')" 2>/dev/null || true
```
Copilot AI (Oct 1, 2025):
Both CUDA and ROCm variants use torch.cuda.is_available() to check GPU availability. For ROCm, this should use ROCm-specific detection methods or at minimum clarify in the output that cuda.is_available() also returns true for ROCm devices.
```bash
rocm-smi --showproductname 2>/dev/null || echo "ROCm GPU detected"

# Check PyTorch ROCm
python -c "import torch; print(f'PyTorch ROCm Available: {torch.cuda.is_available()}')" 2>/dev/null || true
```
Copilot AI (Oct 1, 2025):
Both CUDA and ROCm variants use torch.cuda.is_available() to check GPU availability. For ROCm, this should use ROCm-specific detection methods or at minimum clarify in the output that cuda.is_available() also returns true for ROCm devices.
Suggested change:

```diff
-python -c "import torch; print(f'PyTorch ROCm Available: {torch.cuda.is_available()}')" 2>/dev/null || true
+python -c "import torch; print(f'PyTorch CUDA interface available (used for both CUDA and ROCm): {torch.cuda.is_available()}'); print(f'PyTorch device name: {torch.cuda.get_device_name(0) if torch.cuda.is_available() else \"N/A\"}')" 2>/dev/null || true
```
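Another option for the entrypoint is to report the backend explicitly by inspecting torch.version.hip, which is a version string in ROCm builds of PyTorch and None otherwise; a minimal sketch in the same python -c style (output wording is illustrative):

```bash
python -c "
import torch
# torch.version.hip is set in ROCm builds; torch.version.cuda in CUDA builds.
backend = 'ROCm' if torch.version.hip else ('CUDA' if torch.version.cuda else 'CPU-only')
print(f'PyTorch GPU backend: {backend}')
print(f'GPU available via torch.cuda interface: {torch.cuda.is_available()}')
" 2>/dev/null || true
```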
Description
Brief description of the changes in this pull request.
Type of Change
Related Issues
Fixes #(issue number)
Relates to #(issue number)
Module/Area Affected
Testing
Describe how you tested your changes:
Completed Testing
Testing Environment
Test Commands
```bash
# Commands used to test the changes
python module_x/example.py --help
python module_x/example.py --parameter value
```
Screenshots (if applicable)
Include screenshots for visual changes, new visualizations, or UI improvements.
Documentation Changes
Performance Impact
Breaking Changes
If this introduces breaking changes, please describe:
Checklist
Code Quality
Testing
Documentation
Educational Value
Educational Impact
How do these changes improve the learning experience?
Review Notes
Anything specific you'd like reviewers to focus on?
Future Work
Any follow-up work that should be done in future PRs?
Thank you for contributing to Quantum Computing 101!