Fix monolithic push memory issues#2219

Open
gerrod3 wants to merge 1 commit into pulp:main from gerrod3:more-chunks
Conversation

@gerrod3
Contributor

@gerrod3 gerrod3 commented Feb 23, 2026

According to the spec there are two ways to perform monolithic blob pushes:

  1. On the initial POST request with the digest query parameter
  2. Empty (normal) body POST, then all in the PUT

There's an unofficial third way that podman apparently uses, according to our comments:

  1. Normal empty POST
  2. Entire chunk in one PATCH with no Range header
  3. Empty PUT

The code should now call our special large-chunk handler in all three cases; we were forgetting case 2. Note that, depending on the client, the server could still be driven OOM by ridiculously large chunks sent along the normal chunked upload path, since we read the entire chunk into memory before saving it. There is nothing we can do about that here, as part of the code lives in pulpcore, but honestly, what is a client thinking sending such large chunks!
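The three flows above can be sketched roughly like this. All names here (`save_blob_streamed`, `append_chunk`, the handler functions) are illustrative placeholders, not Pulp's actual API:

```python
# Illustrative sketch of routing the three monolithic-push flows described
# above; none of these names come from the real Pulp codebase.

def save_blob_streamed(body):
    # Stand-in for the special large-chunk handler that streams to storage
    # instead of reading the whole blob into memory.
    return ("streamed", len(body))

def append_chunk(body, range_header):
    # Stand-in for the normal chunked-upload path (chunk held in memory).
    return ("chunk", len(body))

def handle_post(body=b"", digest=None):
    # Case 1: a POST with a ?digest= query parameter is a monolithic push.
    if digest is not None:
        return save_blob_streamed(body)
    return "session-opened"  # otherwise just open an upload session

def handle_patch(body, range_header=None):
    # Podman's unofficial case 3: a single PATCH with no Range header
    # carries the entire blob, so treat it like a monolithic push.
    if range_header is None:
        return save_blob_streamed(body)
    return append_chunk(body, range_header)

def handle_put(body=b""):
    # Case 2 (the one the old code missed): empty POST, then the whole
    # blob arrives in the PUT.
    if body:
        return save_blob_streamed(body)
    return "finalized"  # an empty PUT just closes out a chunked upload
```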

📜 Checklist

  • Commits are cleanly separated with meaningful messages (simple features and bug fixes should be squashed to one commit)
  • A changelog entry or entries have been added for any significant changes
  • Follows the Pulp policy on AI Usage
  • (For new features) - User documentation and test coverage have been added

See: Pull Request Walkthrough

@rochacbruno
Member

@gerrod3 your changes affect the put method; what about the partial_update (patch)?

Also, can we have a backport to 2.19?


```python
# if more chunks
if range_header:
    chunk = ContentFile(chunk.read())
```
Member

isn't this getting a larger chunk?

@gerrod3
Contributor Author

gerrod3 commented Feb 23, 2026

@rochacbruno Read my initial comment on the different scenarios of chunked upload. If a PATCH chunk upload is sent that is larger than we can handle and doesn't fall under the special podman case, then we are out of luck. Someone needs to fix their client to adhere to the spec or set its chunking size to a smaller value.

@gerrod3
Contributor Author

gerrod3 commented Feb 23, 2026

The most we could do is introduce a setting that checks the size of the chunk and, if it exceeds a certain threshold, returns a 4XX error indicating the chunk is too large for a PATCH request and a smaller chunk size should be used.
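A guard like that could look something like the following; `CHUNK_MAX_SIZE`, the threshold value, and the response shape are hypothetical, not an existing Pulp setting:

```python
# Hypothetical oversized-chunk guard; CHUNK_MAX_SIZE is an assumed setting,
# not something Pulp defines today.
CHUNK_MAX_SIZE = 100 * 1024 * 1024  # e.g. 100 MiB, would be configurable

def check_chunk_size(content_length):
    """Return a 413-style error payload for oversized PATCH chunks, else None."""
    if content_length > CHUNK_MAX_SIZE:
        return {
            "status": 413,  # Payload Too Large
            "detail": "Chunk too large for a PATCH request; use a smaller chunk size.",
        }
    return None
```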
