Implement bootc image copy-to-storage for composefs backend #1890
Conversation
Force-pushed 1d344dc to 72d7942
cgwalters left a comment:
Awesome, thanks for working on this! Nothing truly blocking, but some cleanups
    let size = header.entry_size()?;

    let item = match reader.read_exact(size as usize, ((size + 511) & !511) as usize)? {
This seems to be relying on size = 0 for anything except regular files. I think we should probably have at least an anyhow::ensure! for that.
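For illustration, a std-only sketch of the suggested guard (the function and error message here are hypothetical, not the composefs-rs API; in the PR this would be a one-line `anyhow::ensure!`):

```rust
// Sketch only: fail loudly if a non-regular-file tar entry unexpectedly
// carries a payload, instead of silently relying on size == 0.
// anyhow::ensure!(is_regular_file || size == 0, "...") would be the
// one-liner; this std-only version mirrors its semantics.
fn check_entry_size(is_regular_file: bool, size: u64) -> Result<(), String> {
    if !is_regular_file && size != 0 {
        return Err(format!("non-file tar entry with nonzero size {size}"));
    }
    Ok(())
}
```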
    reader: &mut SplitStreamReader<R, ObjectID>,
    ) -> anyhow::Result<Option<(Header, TarItem<ObjectID>)>> {
        let mut buf = [0u8; 512];
        if !reader.read_inline_exact(&mut buf)? || buf == [0u8; 512] {
Also isn't there a bigger problem in that iterating by raw headers in this way means we aren't handling e.g. GNU long links and PAX extended headers?
I did run into the truncation problem when using the parsing code in composefs-rs (hence the copying of part of the code here). The PAX and GNU extended headers are just passed through.
Yes, that's the one
Ah, I think the logic we have here should ideally replace that in get_entry... there's no reason, AFAIK, for us to synthesize a new tar stream.
Actually, though, in this case there's the cat function, which I think already does what we want in just giving us back the raw byte stream.
So I think all of this could just be replaced by `reader.cat(layer_writer, || repository.load_object())`.
Though note the cat implementation has the same suboptimal "read entire file into memory" behavior, but that's fixable.
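As a minimal sketch of the fix being alluded to (the function name is illustrative, not the actual `cat` API): `std::io::copy` streams through a fixed-size internal buffer instead of materializing the whole object in memory.

```rust
use std::io::{self, Read, Write};

// Illustrative alternative to the "read entire file into memory"
// pattern: io::copy moves bytes through a fixed-size internal buffer,
// so peak memory use is independent of object size.
fn stream_object<R: Read, W: Write>(mut src: R, dst: &mut W) -> io::Result<u64> {
    io::copy(&mut src, dst)
}
```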
I think the issue lies with GNU long names and PAX headers, which need to be parsed and then reconstructed into the full path. That could be inlined, but it's a bit cleaner as a separate struct.
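As a hedged sketch of what such a separate struct might look like (all names and the typeflag handling are illustrative, not the PR's actual code; PAX `x` records would additionally need their key/value payload parsed):

```rust
// Illustrative only: fold a GNU long-name entry (typeflag 'L') into the
// path of the header that follows it. A real implementation would also
// handle PAX 'x' records by parsing their key/value payload.
struct LongNameState {
    pending: Option<String>,
}

impl LongNameState {
    /// Returns the effective path for a header, or None if this entry
    /// only stashes a long name for the next one.
    fn apply(&mut self, typeflag: u8, name_field: &str, payload: &[u8]) -> Option<String> {
        if typeflag == b'L' {
            // The payload holds the real (possibly >100-byte) path,
            // NUL-padded to the block boundary.
            let long = String::from_utf8_lossy(payload)
                .trim_end_matches('\0')
                .to_string();
            self.pending = Some(long);
            None
        } else {
            // Use the stashed long name if one is pending, otherwise
            // fall back to the header's own (short) name field.
            Some(self.pending.take().unwrap_or_else(|| name_field.to_string()))
        }
    }
}
```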
cgwalters left a comment:
Looks like this is failing CI by breaking the ostree-based path here, though it's not at all obvious to me why that is
    let transport = transport.strip_suffix(":").unwrap_or(&transport);

    -if transport == "registry" {
    +if transport == "registry" || transport == "docker://" {
Unrelated to this change, but it'd be cleaner to parse this back into the Transport enum, which already handles this canonicalization.
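A hypothetical sketch of that canonicalization as an enum (the variants and accepted spellings here are assumptions for illustration, not bootc's actual `Transport` type):

```rust
use std::str::FromStr;

// Hypothetical mirror of a Transport enum; the real bootc type differs.
#[derive(Debug, PartialEq, Eq)]
enum Transport {
    Registry,
    ContainersStorage,
}

impl FromStr for Transport {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        // Accept both the bare name and scheme-style spellings, so the
        // "registry" vs "docker://" special-casing lives in one place.
        match s.trim_end_matches(':') {
            "registry" | "docker" | "docker://" => Ok(Transport::Registry),
            "containers-storage" | "containers-storage://" => Ok(Transport::ContainersStorage),
            other => Err(format!("unknown transport: {other}")),
        }
    }
}
```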
The tests seem to be timing out, but I can't find the actual error. Weird. I'll try it locally once
Export a composefs repository as an OCI image. In this iteration the output files are in OCI Directory format and are plain tars, i.e. not compressed.

Signed-off-by: Pragyan Poudyal <pragyanpoudyal41999@gmail.com>
Instead of handling the history, metadata, and annotations ourselves, delegate them to the `ocidir` crate. Also take into account the source and target image references. Finally, call `skopeo::copy` to copy to containers-storage.

Signed-off-by: Pragyan Poudyal <pragyanpoudyal41999@gmail.com>
Inline the tar parsing/unpacking. Check for two NUL 512-byte blocks instead of just one. Share the source image and target image generating code between composefs and ostree.

Signed-off-by: Pragyan Poudyal <pragyanpoudyal41999@gmail.com>
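The two-block rule that commit mentions comes from the tar format itself: an archive is terminated by two consecutive 512-byte blocks of zeros, so a single zero block is not a reliable end marker. A minimal sketch of that check:

```rust
// End-of-archive per the tar format: two consecutive 512-byte blocks
// of zeros. A single zero block alone is not a reliable terminator.
fn is_end_of_archive(block1: &[u8; 512], block2: &[u8; 512]) -> bool {
    block1.iter().all(|&b| b == 0) && block2.iter().all(|&b| b == 0)
}
```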
Force-pushed 04aaa2e to 92ec678
The sysroot lock was being taken by `get_host` before it was released by the caller. Move the `get_host` call up the call stack.

Signed-off-by: Pragyan Poudyal <pragyanpoudyal41999@gmail.com>
Force-pushed 92ec678 to c83639d
BTW, not a big problem, but something I noticed post-merge: my personal preference is to squash critical fixes into prior commits. "Critical" is on a sliding scale from "compiles" to "unit tests pass" to "integration tests pass", and beyond that, of course, to "docs are coherent", etc. In this case the earlier commit didn't pass our integration tests, and I think that argues for squashing.
Ah, my bad, it would've been better to squash. I'll keep this in mind.
Export a composefs repository as an OCI image. This would be helpful for running the tests that we have for the ostree backend. Tests for sealed UKI would be separate, I think, but tests for Type1 entries can be shared between the two backends.