chore: improve release for single arch #13
Conversation
Walkthrough

The changes introduce multi-architecture Docker-based build support for .deb packages targeting AMD64 and ARM64. Documentation updates describe architecture detection logic and conditional setup flows: native architecture builds skip QEMU and registry overhead, whilst cross-platform builds utilise QEMU and local registries. The release.py module expands with new utility functions for system architecture detection, QEMU/registry/Buildx environment setup, and secrets parsing. The deb-prepare flow is refactored to use centralised Buildx image building that conditionally outputs to local loading or registry based on architecture requirements, replacing previous manual iteration patterns.

Pre-merge checks

Important: Pre-merge checks failed. Please resolve all errors before merging. Addressing warnings is optional.

❌ Failed checks (1 error, 1 warning)
✅ Passed checks (2 passed)
Comment
Actionable comments posted: 3
🧹 Nitpick comments (4)
docs/releases.md (1)
59-59: Rephrase bullet 2 for smoother wording and grammar

The sentence here reads a bit clunky and repeats "We want to…" and "We use docker". Consider tightening it and fixing the "use docker" agreement, for example:
```diff
-2. We want to build `.deb`s for **AMD64** and/or **ARM64** architectures and possibly multiple versions of ROS, with automatic optimization for native builds. We use docker to build the `.deb` files and mount them back to the host machine so they can be linked into a single release.
+2. We build `.deb` packages for **AMD64** and/or **ARM64** architectures and possibly multiple ROS versions, with automatic optimisation for native builds. Docker builds the `.deb` files and mounts them back to the host machine so they can be linked into a single release.
```

platform_cli/groups/release.py (3)
66-95: Host architecture detection and QEMU decision logic are sensible, but the unknown‑arch fallback is risky

The mapping from `platform.machine()` to the `Architecture` enum and using `should_build_with_qemu()` to gate QEMU/registry setup is a neat way to centralise the native vs cross‑arch decision.

The one concern is the fallback:

```python
else:
    # Default to amd64 for unknown architectures
    return Architecture.amd64
```

On an unsupported host (e.g. `armv7l` or another non‑x86_64/aarch64 variant) this will silently treat the system as amd64, which could mis‑classify a build as "native" and skip QEMU when it is actually required.

I'd recommend failing fast or at least making the behaviour explicit, for example:

```diff
-    else:
-        # Default to amd64 for unknown architectures
-        return Architecture.amd64
+    else:
+        raise RuntimeError(f"Unsupported host architecture: {machine}")
```

or using a sentinel that always forces `should_build_with_qemu()` to return `True`.
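Concretely, a fail-fast version of the detection helper could look like the sketch below. The `Architecture` enum is stubbed here since the real one lives in `platform_cli`, and the optional `machine` parameter is an addition for testability, not part of the original signature:

```python
import platform
from enum import Enum
from typing import Optional


class Architecture(Enum):
    # Stand-in for the real platform_cli Architecture enum
    amd64 = "amd64"
    arm64 = "arm64"


def get_current_system_architecture(machine: Optional[str] = None) -> Architecture:
    """Map platform.machine() to a supported Architecture, failing fast otherwise.

    The `machine` parameter exists only to make the mapping unit-testable.
    """
    machine = (machine or platform.machine()).lower()
    mapping = {
        "x86_64": Architecture.amd64,
        "amd64": Architecture.amd64,
        "aarch64": Architecture.arm64,
        "arm64": Architecture.arm64,
    }
    if machine not in mapping:
        # Explicit failure instead of silently defaulting to amd64
        raise RuntimeError(f"Unsupported host architecture: {machine}")
    return mapping[machine]
```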
413-460: QEMU/registry/Buildx setup swallows all exceptions and may hide genuine misconfiguration

The new `_setup_qemu`, `_setup_local_registry`, and `_setup_buildx_environment` helpers are a nice separation of concerns, but all three catch `Exception` broadly and then assume the failure is the "already running / already exists" case:

```python
except Exception as e:
    echo(f"QEMU already running: {e}", "yellow")
    pass
```

and:

```python
except Exception as e:
    echo(f"Local registry already running: {e}", "yellow")
```

```python
except Exception:
    echo("docker buildx environment already exists", "yellow")
```

This can mask real configuration problems (e.g. Docker not running, Buildx not installed, unsupported flags) and make debugging harder, which is what Ruff is flagging.

Consider narrowing these to the specific docker/python‑on‑whales exceptions you expect for "already exists" scenarios, or re‑raising unexpected errors after logging, e.g.:

```python
except SomeDockerError as e:
    if "already in use" in str(e):
        echo("…already running…", "yellow")
    else:
        raise
```
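One way to sketch that pattern as a reusable helper (names are hypothetical — substitute the exception type your docker client actually raises, e.g. python-on-whales' `DockerException`, for the generic `RuntimeError` used here):

```python
def run_tolerating_already_exists(action, describe, echo=print):
    """Run a setup step, tolerating only 'already exists / in use' failures.

    Anything else (Docker daemon down, Buildx missing, bad flags) propagates,
    so genuine misconfiguration is not silently swallowed.
    """
    benign_markers = ("already in use", "already exists", "already running")
    try:
        action()
    except RuntimeError as e:  # substitute your docker client's exception type
        if any(marker in str(e).lower() for marker in benign_markers):
            echo(f"{describe} already set up: {e}")
        else:
            raise
```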
532-563: Use bare `raise` to preserve original exception context in `_build_all_architecture_debs`

The loop around `_build_deb_in_docker` is sound, but this pattern:

```python
except Exception as e:
    echo(
        f"Failed to build .deb for {architecture}",
        "red",
        level=LogLevels.ERROR,
    )
    raise e
```

re‑raises the exception with an extra traceback frame pointing at the `raise e` line, cluttering the original context. Using bare `raise` is more idiomatic and keeps the original stack clean:

```diff
-        except Exception as e:
+        except Exception:
             echo(
                 f"Failed to build .deb for {architecture}",
                 "red",
                 level=LogLevels.ERROR,
             )
-            raise e
+            raise
```

This also addresses Ruff's TRY201 hint.
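The difference is easy to demonstrate: `raise e` records the re‑raise line as an additional traceback frame, while a bare `raise` propagates the original traceback untouched. A minimal, self‑contained demo:

```python
import traceback


def _fail():
    raise ValueError("boom")


def reraise_bare():
    try:
        _fail()
    except ValueError:
        raise  # original traceback is propagated unchanged


def reraise_named():
    try:
        _fail()
    except ValueError as e:
        raise e  # records this line as an extra traceback frame


def frame_count(fn) -> int:
    """Number of traceback frames observed by the caller."""
    try:
        fn()
    except ValueError as e:
        return len(traceback.extract_tb(e.__traceback__))
    return 0
```

Comparing the two shows exactly one extra frame for the `raise e` variant — the `raise e` line itself.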
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- docs/releases.md (2 hunks)
- platform_cli/groups/release.py (8 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
⚙️ CodeRabbit configuration file
`**/*.py`:
- Use Python 3.8+ features (f-strings, walrus operator, positional-only parameters)
- Use dependency injection patterns - inject dependencies through constructors rather than creating them internally
- Inject configuration and external services (databases, APIs, ROS nodes) rather than hardcoding them
- Design classes with single responsibility principle - each class should have one clear purpose
- Include comprehensive type annotations using typing module (List, Dict, Optional, Union, etc.)
- Prefer built-in functions like `next()`, `any()`, `all()`, `enumerate()` over manual loops
- Follow ROS2 best practices: use lifecycle nodes, quality of service (QoS) profiles, and parameter declarations
- Design functions and classes to be easily unit testable - avoid tight coupling to ROS nodes
- Use dependency injection patterns to make ROS dependencies mockable in tests
- Implement proper error handling with specific exception types, avoid bare `except:`
- Use dataclasses or Pydantic models for structured data instead of dictionaries
- Leverage ROS2 latest features: composition, lifecycle management, parameter callbacks
- Add unit tests for complex business logic using pytest, mock ROS dependencies
- Use context managers (`with` statements) for resource management
- Prefer list/dict comprehensions and generator expressions over loops where readable
- Follow PEP 8 naming conventions and use meaningful variable names
- Use logging instead of print statements, configure ROS2 logging levels appropriately
- Implement proper cleanup in node destructors and signal handlers
- Use async/await for non-blocking operations where appropriate
- Structure code with clear separation between ROS communication and business logic
- Use piknik's generate_parameter_library for declarative parameters which make unit testing logic that uses parameters easier
- Ensure changes to ROS params, services, publishers and subscribers are documented in the design.md file
Files:
platform_cli/groups/release.py
🪛 LanguageTool
docs/releases.md
[style] ~59-~59: You have already used this phrasing in nearby sentences. Consider replacing it to add variety to your writing.
Context: ...com/qiwi/multi-semantic-release). 2. We want to build .debs for AMD64 and/or **AR...
(REP_WANT_TO_VB)
[style] ~59-~59: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...tomatic optimization for native builds. We use docker to build the .deb files an...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[grammar] ~59-~59: There is an agreement error between ‘use’ and ‘docker’. Insert ‘a(n)’ or change the noun to plural.
Context: ...atic optimization for native builds. We use docker to build the .deb files and mount the...
(PRP_VB_NN)
🪛 Ruff (0.14.7)
platform_cli/groups/release.py
423-423: Do not catch blind exception: Exception
(BLE001)
439-439: Do not catch blind exception: Exception
(BLE001)
452-452: Do not catch blind exception: Exception
(BLE001)
487-487: Create your own exception
(TRY002)
487-487: Avoid specifying long messages outside the exception class
(TRY003)
561-561: Use raise without specifying exception name
Remove exception name
(TRY201)
🔍 Remote MCP Context7
Summary for PR #13 Review
Based on my search of Docker Buildx documentation and multi-platform build patterns, here are the key technical facts relevant to this PR's approach:
Docker Buildx Build Strategies
The PR implements two distinct architecture build strategies that align with Docker's official recommendations:

1. **Native Architecture Builds**: When targeting the native system architecture, the implementation uses `--output=type=docker` (local loading) to skip QEMU emulation and local registry setup, which aligns with the Docker Buildx optimization pattern of avoiding unnecessary overhead when building for the matching architecture.
2. **Cross-Platform Builds**: When building for multiple architectures or a non-native single architecture, the implementation uses registry-based output (`--push`), which is the standard pattern for multi-platform Buildx builds. This requires QEMU setup and a local registry to coordinate multi-architecture builds.
Key Technical Considerations
1. **Output Types**: Docker Buildx supports different output types: `type=docker` (local daemon loading) for single-architecture native builds, and `type=image,push=true` for registry-based multi-architecture builds. The PR correctly distinguishes between these scenarios.
2. **Buildx Initialization**: The PR's `_setup_buildx_environment()` method aligns with Docker's requirement to create and configure a Buildx builder instance before executing builds, particularly when using QEMU or registry outputs.
3. **QEMU and Registry**: The conditional setup of QEMU (`_setup_qemu()`) and local registry (`_setup_local_registry()`) follows Docker's multi-platform build pattern documented in their CI examples, where these components are only needed for cross-platform scenarios.
4. **Secrets Handling**: The `_parse_secrets_for_buildx()` method correctly addresses the requirement to parse secrets into Buildx-compatible format, which is a documented feature of Buildx for handling sensitive build-time variables.
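The output-type decision described above can be sketched as a small kwargs selector (a hypothetical helper mirroring the PR's conditional, not the actual implementation — the dict shape matches what python-on-whales' `docker.buildx.build` accepts for `load` and `output` in the reviewed code):

```python
from typing import Any, Dict, List, Optional


def buildx_output_kwargs(
    docker_platforms: Optional[List[str]] = None,
    use_registry: bool = False,
) -> Dict[str, Any]:
    """Choose Buildx output options: registry push for multi-arch builds,
    local daemon load for the native single-arch case."""
    kwargs: Dict[str, Any] = {}
    if docker_platforms:
        kwargs["platforms"] = docker_platforms
    if use_registry:
        # "type=registry" is shorthand for "type=image,push=true"
        kwargs["output"] = {"type": "registry"}
    else:
        # load=True (--load) imports the single-arch image into the local daemon
        kwargs["load"] = True
    return kwargs
```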
Code Review Implications
The refactoring properly separates concerns between native and cross-platform builds, reducing complexity for the common single-architecture case while maintaining full cross-platform capabilities. The Manifest type usage allows the code to track image metadata differences between native (None) and registry-based (digest) builds.
🔇 Additional comments (3)
docs/releases.md (1)
61-79: Architecture strategies section clearly explains the native vs cross‑platform behaviour

The "Architecture Build Strategies" section gives a clear, accurate explanation of when QEMU and the local registry are used versus the native build path, and matches the intent of the new `deb-prepare` flow. No changes needed from my side here.

platform_cli/groups/release.py (2)
461-472: Secrets parsing helper is simple and defensive enough

The `_parse_secrets_for_buildx` helper cleanly handles the "no secrets" (`"{}"`) case, validates JSON with a try/except, and formats entries as `id=…,src=…` for Buildx. The warning on invalid JSON is also helpful for CI logs.

No changes needed here from my side.
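For reference, a minimal re-implementation of the behaviour described (hypothetical — the real helper is not shown in full; this assumes secrets arrive as a JSON object mapping secret ids to file paths):

```python
import json
from typing import List


def parse_secrets_for_buildx(secrets_json: str, echo=print) -> List[str]:
    """Turn a JSON object of {secret_id: file_path} into Buildx secret specs.

    Mirrors the behaviour the review describes: "{}" (no secrets) yields an
    empty list, and invalid JSON is warned about rather than crashing CI.
    """
    if not secrets_json or secrets_json == "{}":
        return []
    try:
        secrets = json.loads(secrets_json)
    except json.JSONDecodeError as e:
        echo(f"Invalid secrets JSON, ignoring: {e}")
        return []
    return [f"id={secret_id},src={src}" for secret_id, src in secrets.items()]
```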
805-838: Deb‑prepare flow refactor nicely separates native vs cross‑arch paths

The new `deb_prepare` flow that:

- computes `needs_qemu = should_build_with_qemu(arch)`,
- only sets up QEMU and the local registry when `needs_qemu` is `True`,
- always initialises the Buildx environment and uses `_build_docker_image_with_buildx`, and
- delegates `.deb` creation across architectures via `_build_all_architecture_debs`,

gives a clear separation between the single‑arch native case and the multi‑arch / cross‑arch case while keeping a single Buildx codepath. This should make the common "native single‑arch" scenario noticeably cheaper without losing multi‑arch support.
> 1. **Architecture Detection**: Determines if cross-platform emulation is needed
> 2. **Conditional Setup**:
>    - For cross-platform builds: Sets up QEMU (`tonistiigi/binfmt`) and local registry on [localhost:5000](http://localhost:5000)
>    - For native builds: Skips QEMU and registry for better performance
> 3. **Buildx Build**: Uses `buildx` for all builds (enables secrets support)
>    - Cross-platform: Builds for multiple architectures and pushes to local registry
>    - Native: Builds for current architecture only and loads locally
> 4. Executes [`platform pkg build`](../platform_cli/groups/packaging.py#:~:text=build) inside each docker container to build the `.deb` with a docker volume to mount the resultant `.deb` back to the host machine.
Align QEMU image reference with the actual implementation

The docs say QEMU is set up using `tonistiigi/binfmt`, but the current implementation in `platform_cli/groups/release.py::_setup_qemu` still runs the `multiarch/qemu-user-static` image. It would be good to either:

- Update this line to describe QEMU setup more generically (e.g. "via a QEMU binfmt container"), or
- Switch the implementation to `tonistiigi/binfmt` so the docs and code match.
```diff
 def _get_docker_image_name(self, platform_module_name: str, use_registry: bool = False) -> str:
     """Returns the docker image name for a package"""
     # Note, uppercase is not allowed in docker image names
-    return f"{DOCKER_REGISTRY}/{platform_module_name.lower()}:latest"
+    image_name = f"{platform_module_name.lower()}:latest"
+    if use_registry:
+        return f"{DOCKER_REGISTRY}/{image_name}"
+    else:
+        return image_name
```
Replace bare next() with explicit None default and raise a clear error for missing architecture manifest

Using `_get_docker_image_name_with_digest` to resolve an arch-specific digest from the multi-arch manifest is sound, and the "native build" shortcut when `image_manifests` is `None` makes sense.

However, this code:

```python
image_for_docker_platform = next(
    manifest
    for manifest in image_manifests.manifests
    if manifest.platform and manifest.platform.architecture == architecture.value
)
```

will raise a bare `StopIteration` if there is no manifest entry for the requested architecture (e.g. if Buildx was run with a different platform set, or if a platform was dropped). That error will be opaque to users.
Rewrite as:

```python
image_for_docker_platform = next(
    (
        manifest
        for manifest in image_manifests.manifests
        if manifest.platform
        and manifest.platform.architecture == architecture.value
    ),
    None,
)
if image_for_docker_platform is None:
    raise RuntimeError(
        f"No manifest found for architecture {architecture.value} in image {image_name}"
    )
```
)Also fix the spelling error in the comment above this code: "archicture" → "architecture".
🤖 Prompt for AI Agents
In platform_cli/groups/release.py around lines 309 to 317, replace the bare
next(...) used to select the architecture-specific manifest with a next(...,
None) and explicitly raise a clear RuntimeError if the result is None (e.g. "No
manifest found for architecture {architecture.value} in image {image_name}"),
and correct the comment typo from "archicture" to "architecture".
```python
def _build_docker_image_with_buildx(
    self,
    package_info: PackageInfo,
    docker_image_name: str,
    buildx_secrets: List[str],
    package_dir: str,
    ros_distro: str,
    package: str,
    docker_platforms: Optional[List[str]] = None,
    use_registry: bool = False,
) -> Optional[Manifest]:
    """Build docker image using buildx with unified logic for both native and multiplatform builds"""
    if not package_info.module_info:
        raise Exception("Module info is required to build docker images")

    # Dynamic echo message based on build type
    if docker_platforms:
        echo("Building docker container with buildx...", group_start=True)
    else:
        echo(
            f"Building docker container for native architecture ({get_current_system_architecture().value}) with buildx...",
            group_start=True,
        )

    # Prepare build arguments
    build_kwargs = {
        "context": package_info.module_info.platform_module_path,
        "tags": [docker_image_name],
        "secrets": buildx_secrets,
        "build_args": {
            "API_TOKEN_GITHUB": os.environ["API_TOKEN_GITHUB"],
            "GPU": os.environ["GPU"],
            "PLATFORM_MODULE": package_info.module_info.platform_module_name,
            "PACKAGE_DIR": package_dir,
            "ROS_DISTRO": ros_distro,
            "PACKAGE_NAME": package,
        },
    }

    # Add platform-specific options
    if docker_platforms:
        build_kwargs["platforms"] = docker_platforms

    if use_registry:
        build_kwargs["output"] = {"type": "registry"}
    else:
        build_kwargs["load"] = True

    # Build the image
    docker.buildx.build(**build_kwargs)
    echo(group_end=True)

    # Return manifest for registry builds, None for local builds
    if use_registry:
        return docker.buildx.imagetools.inspect(docker_image_name)
    else:
        return None
```
Guard GPU environment variable usage to avoid surprising KeyErrors

`_build_docker_image_with_buildx` accesses both `API_TOKEN_GITHUB` and `GPU` from the environment without defensive checks:

```python
"build_args": {
    "API_TOKEN_GITHUB": os.environ["API_TOKEN_GITHUB"],
    "GPU": os.environ["GPU"],
    ...
},
```

While `deb_prepare` validates that `API_TOKEN_GITHUB` is set, it does not check `GPU`. If `GPU` is unset in a local build, this will fail with a bare `KeyError` at build time.
Suggested remedies:

- Use `os.environ.get("GPU", "")` to provide a sensible default, or
- Add an explicit check in `deb_prepare` validating that `GPU` is set, similar to the `API_TOKEN_GITHUB` check
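A minimal sketch of the first remedy (the `"false"` default is an assumption — use whatever value the Dockerfile's `GPU` build arg expects):

```python
import os


def gpu_build_arg() -> str:
    """Resolve the GPU build arg without risking a bare KeyError.

    os.environ.get avoids the crash when GPU is unset in local builds;
    "false" is an assumed default, not taken from the PR.
    """
    return os.environ.get("GPU", "false")
```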
🧰 Tools
🪛 Ruff (0.14.7)
487-487: Create your own exception
(TRY002)
487-487: Avoid specifying long messages outside the exception class
(TRY003)
🤖 Prompt for AI Agents
In platform_cli/groups/release.py around lines 474 to 531, the code reads
os.environ["GPU"] directly which can raise a KeyError if GPU is not set; update
the build arg to use a safe default (e.g. os.environ.get("GPU", "")) or add an
explicit validation earlier (like deb_prepare does for API_TOKEN_GITHUB) to
ensure GPU is present; implement the chosen approach so build_args uses a
non-throwing value and add a brief comment indicating why the
fallback/validation is necessary.
Summary by CodeRabbit
New Features
Documentation