Conversation

@jayceslesar (Contributor)

Closes #1200

Rationale for this change

Ability to do more table maintenance from pyiceberg (iceberg-python?)

Are these changes tested?

Added a test!

Are there any user-facing changes?

Yes, this is a new method on the Table class.

@Fokko Fokko (Contributor) left a comment

Thanks for working on this @jayceslesar, sorry for the late review.

I think this is a great start. I left some comments; let me know what you think!

@smaheshwar-pltr smaheshwar-pltr (Contributor) left a comment

Thanks for the PR @jayceslesar, using InspectTable to get orphaned files to submit to the executor pool is a nice idea! Just some concerns / suggestions / debugging help 😄

@kevinjqliu kevinjqliu (Contributor) left a comment

Thanks for the PR! I added a few comments. ptal :)

@kevinjqliu (Contributor)

A meta question: what do you think of moving the orphan file function to its own file/namespace, similar to how we use .inspect?

I like the idea of having all the table maintenance functions together, similar to Delta table's optimize.

@jayceslesar (Contributor, Author)

jayceslesar commented May 4, 2025

A meta question: what do you think of moving the orphan file function to its own file/namespace, similar to how we use .inspect?

I like the idea of having all the table maintenance functions together, similar to Delta table's optimize.

I think that makes sense -- would #1880 end up there too?

Also, ideally there is a CLI that exposes all the maintenance actions too, right?

I think moving things to a new OptimizeTable class in a new optimize.py namespace makes a lot of sense; it can be modeled very similarly to InspectTable and generally makes things cleaner. I still think all_known_files belongs inside inspect, though, and the new OptimizeTable can use it from there.
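For illustration, a minimal sketch of that accessor pattern, modeled on how the existing inspect property returns an InspectTable. The OptimizeTable name, the optimize property, and the remove_orphan_files signature are assumptions for this sketch, not the merged API:

from __future__ import annotations

from datetime import timedelta


class OptimizeTable:
    """Hypothetical maintenance namespace, mirroring InspectTable."""

    def __init__(self, tbl: Table) -> None:
        self.tbl = tbl

    def remove_orphan_files(self, older_than: timedelta = timedelta(days=3)) -> None:
        # Sketch only: would reuse the table's inspect metadata (e.g. an
        # all_known_files view) to find unreferenced files under the table
        # location and delete the ones older than the cutoff.
        ...


class Table:  # stand-in for pyiceberg.table.Table
    @property
    def optimize(self) -> OptimizeTable:
        # Mirrors the existing `inspect` accessor pattern.
        return OptimizeTable(self)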

@Fokko (Contributor)

Fokko commented May 13, 2025

I like the idea of having all the table maintenance functions together, similar to Delta table's optimize.

That's a good point. However, I think we should be able to run them separately as well. For example, deleting orphan files won't affect the speed of the table, so it is more of a maintenance feature to reduce object storage costs. Deleting orphan files can also be pretty costly because of the list operation; ideally you would delegate this to the catalog, which can use, for example, S3 Inventory.

@Anton-Tarazi Anton-Tarazi (Contributor) left a comment

Nice work, left some minor comments. Looking forward to this feature :)

executor = ExecutorFactory.get_or_create()
snapshot_ids = [snapshot.snapshot_id for snapshot in snapshots]
files_by_snapshots: Iterator[Set[str]] = executor.map(
    lambda snapshot_id: set(self.files(snapshot_id)["file_path"].to_pylist()), snapshot_ids
)
Contributor:

might be nice if InspectTable.files or InspectTable._files took an Optional[Union[int, Snapshot]] so we didn't have to get the id from a snapshot and then turn it back into a Snapshot inside InspectTable._files

Contributor Author:

Yeah, I think there are a lot of places where we arbitrarily use one over the other, and IMO it would be nice to standardize. Probably out of scope for this PR, but I think it would definitely clean things up.
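Probably indeed out of scope, but a sketch of that standardization could be a small normalizing helper (the name _resolve_snapshot_id is hypothetical, not existing pyiceberg API):

from typing import Optional, Union

from pyiceberg.table.snapshots import Snapshot


def _resolve_snapshot_id(snapshot: Optional[Union[int, Snapshot]]) -> Optional[int]:
    # Accept either a snapshot id or a Snapshot object, so callers never have
    # to pull the id off a Snapshot only for it to be looked up again later.
    if snapshot is None or isinstance(snapshot, int):
        return snapshot
    return snapshot.snapshot_id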

as_of = datetime.now(timezone.utc) - older_than
all_files = [
    f.path for f in fs.get_file_info(selector) if f.type == FileType.File and (as_of is None or (f.mtime < as_of))
]
Contributor:

When would as_of be None? Also, can we construct a set directly here?

Contributor Author:

Good catch, cleaner now
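For reference, the set-built-directly variant the comment asks about might look like the following standalone sketch; the local filesystem, the warehouse path, and the three-day cutoff are placeholder assumptions:

from datetime import datetime, timedelta, timezone

from pyarrow.fs import FileSelector, FileType, LocalFileSystem

fs = LocalFileSystem()
selector = FileSelector("/tmp/warehouse", allow_not_found=True, recursive=True)
as_of = datetime.now(timezone.utc) - timedelta(days=3)

# Build the set directly instead of wrapping a list, and drop the dead
# `as_of is None` branch, since as_of is always computed above.
all_files = {f.path for f in fs.get_file_info(selector) if f.type == FileType.File and f.mtime < as_of}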

except ModuleNotFoundError as e:
    raise ModuleNotFoundError("For metadata operations PyArrow needs to be installed") from e

def _orphaned_files(self, location: str, older_than: timedelta = timedelta(days=3)) -> Set[str]:
Contributor:

Nit: could we get rid of the default here, since it's in remove_orphan_files? We could also make this default to None and update the handling of as_of below to support None.

Contributor Author:

This should be implemented

@jayceslesar (Contributor, Author)

@Fokko we probably also want pyiceberg to have some idea about https://iceberg.apache.org/spec/#delete-formats, right? Is it currently aware of those files?

@Fokko (Contributor)

Fokko commented Jun 24, 2025

@jayceslesar I believe the merge-on-read delete files (positional deletes, equality deletes, and deletion vectors) are returned by the all-files. The only part that's missing is the partition statistics files.

@jayceslesar (Contributor, Author)

@jayceslesar I believe the merge-on-read delete files (positional deletes, equality deletes, and deletion vectors) are returned by the all-files. The only part that's missing is the partition statistics files.

Sounds good, I will add the partition statistics files when that is merged!

@aammar5

aammar5 commented Jul 10, 2025

One issue I've found with this PR is that the catalog properties need to propagate to PyArrowFileIO(properties=...), otherwise endpoint/authentication/etc. for things like S3 simply fail ...

flat_known_files: set[str] = reduce(set.union, all_known_files.values(), set())

scheme, _, _ = PyArrowFileIO.parse_location(location)
pyarrow_io = PyArrowFileIO()

Suggested change:
- pyarrow_io = PyArrowFileIO()
+ pyarrow_io = PyArrowFileIO(properties=self.tbl.catalog.properties)

Contributor Author:

I'd like to see if I can achieve this without PyArrow, and will attempt to do so after working on #2146.

if older_than is None:
    older_than = timedelta(0)
as_of = datetime.now(timezone.utc) - older_than
all_files = [f.path for f in fs.get_file_info(selector) if f.type == FileType.File and f.mtime < as_of]

Suggested change:
- all_files = [f.path for f in fs.get_file_info(selector) if f.type == FileType.File and f.mtime < as_of]
+ all_files = [f"{scheme}://{f.path}" for f in fs.get_file_info(selector) if f.type == FileType.File and f.mtime < as_of]

Comment on lines +68 to +73
try:
    import pyarrow as pa  # noqa: F401
except ModuleNotFoundError as e:
    raise ModuleNotFoundError(
        "For deleting orphaned files with a PyArrowFileIO, PyArrow needs to be installed"
    ) from e
Contributor:

Will this error ever happen? If the table's io is a PyArrowFileIO, I think we've already verified that PyArrow is installed.

Contributor Author:

We don't ask if it's PyArrowFileIO; we ask if it isn't FsspecFileIO.
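Restated as code, a sketch of that check (the helper name is hypothetical, not the PR's actual function):

from pyiceberg.io import FileIO
from pyiceberg.io.fsspec import FsspecFileIO


def _require_pyarrow_if_needed(io: FileIO) -> None:
    # PyArrow is only required when the table's io is *not* an FsspecFileIO,
    # because only then do we fall back to PyArrowFileIO for listing/deleting.
    if isinstance(io, FsspecFileIO):
        return
    try:
        import pyarrow  # noqa: F401
    except ModuleNotFoundError as e:
        raise ModuleNotFoundError(
            "For deleting orphaned files with a PyArrowFileIO, PyArrow needs to be installed"
        ) from e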

@jayceslesar (Contributor, Author)

Going to get around to adding tests for both types of FileIO... @Fokko @kevinjqliu, anything else you think we need here?

@ForeverAngry (Contributor)

@jayceslesar how's this coming? Let me know if I can help with anything. I'd like to use this in prod as well!
