Conversation
Summary of Changes

Hello @ChenZiHong-Gavin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly expands the data ingestion capabilities of GraphGen.

Highlights

Changelog
Code Review
This pull request introduces a HuggingFaceReader to load datasets from the Hugging Face Hub. The implementation is well-structured, with a new reader class and modifications to the main read operator to handle Hugging Face URIs. I've identified a critical performance issue in the HuggingFaceReader that could lead to excessive memory usage, a high-severity bug related to handling empty inputs, and a medium-severity suggestion to improve efficiency in the read operator. Overall, a great feature addition with a few areas for improvement.
```python
# Convert to pandas and then to Ray dataset
# Add type column if not present
dataset_dict = hf_dataset.to_dict()

# Ensure data is in list of dicts format
if isinstance(dataset_dict, dict) and all(
    isinstance(v, list) for v in dataset_dict.values()
):
    # Convert from column-based to row-based format
    num_rows = len(next(iter(dataset_dict.values())))
    data = [
        {key: dataset_dict[key][i] for key in dataset_dict}
        for i in range(num_rows)
    ]
else:
    data = dataset_dict

# Add type field if not present
for item in data:
    if "type" not in item:
        item["type"] = "text"
    # Rename text_column to 'content' if different
    if self.text_column != "content" and self.text_column in item:
        item["content"] = item[self.text_column]

# Apply limit if specified
if limit:
    data = data[:limit]

# Create Ray dataset
ray_ds = ray.data.from_items(data)

return ray_ds
```
The current implementation loads the entire Hugging Face dataset into memory using hf_dataset.to_dict(). This is highly inefficient for large datasets and can easily lead to out-of-memory errors. It also defeats the purpose of using streaming=True, as it will still try to materialize the entire dataset.
A much more scalable approach is to use ray.data.from_huggingface() to convert the Hugging Face dataset object directly into a Ray dataset. Post-processing steps like adding columns or renaming them should be done using map_batches on the Ray dataset. This will allow for lazy evaluation and distributed processing, which is crucial for handling large-scale data.
```python
# Convert to Ray dataset
if limit:
    if streaming:
        hf_dataset = hf_dataset.take(limit)
    else:
        hf_dataset = hf_dataset.select(range(limit))

ray_ds = ray.data.from_huggingface(hf_dataset)

def _process_batch(batch: dict[str, list]) -> dict[str, list]:
    if not batch:
        return {}
    num_rows = len(next(iter(batch.values())))
    if "type" not in batch:
        batch["type"] = ["text"] * num_rows
    if self.text_column != "content" and self.text_column in batch:
        batch["content"] = batch[self.text_column]
    return batch

# Add type field and rename text_column in a scalable way
ray_ds = ray_ds.map_batches(_process_batch)
return ray_ds
```
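To make the laziness concrete, here is a minimal usage sketch (the URI and constructor arguments are illustrative assumptions, not taken from the PR):

```python
# Illustrative usage only; the dataset URI and text_column value are assumptions.
reader = HuggingFaceReader(text_column="context")
ds = reader.read("squad:plain_text:train")  # builds a lazy Ray pipeline
preview = ds.take(5)  # rows are materialized only when an action like take() runs
```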
```python
if len(all_dss) == 1:
    combined_ds = all_dss[0]
else:
    combined_ds = all_dss[0].union(*all_dss[1:])
```
If input_path is an empty list, all_dss will also be empty. This will cause an IndexError on line 97 when trying to access all_dss[0]. You should handle the case of an empty list of datasets to avoid this crash.
```python
if not all_dss:
    import ray
    return ray.data.from_items([])

if len(all_dss) == 1:
    combined_ds = all_dss[0]
else:
    combined_ds = all_dss[0].union(*all_dss[1:])
```

```python
read_tasks = []
for hf_uri in hf_uris:
    # Parse URI format: "huggingface://dataset_name:subset:split"
    uri_part = hf_uri.replace("huggingface://", "")
    hf_reader = HuggingFaceReader(**reader_kwargs)
    ds = hf_reader.read(uri_part)
    read_tasks.append(ds)
```
The current implementation creates a new HuggingFaceReader instance inside a loop for each URI. This is inefficient. The HuggingFaceReader.read method is designed to accept a list of paths, and it will handle loading and combining them. You can improve performance and simplify the code by creating the reader once and passing the list of all URI parts to it.
```python
uri_parts = [uri.replace("huggingface://", "") for uri in hf_uris]
hf_reader = HuggingFaceReader(**reader_kwargs)
ds = hf_reader.read(uri_parts)
read_tasks = [ds]
```

/gemini review
Code Review
This pull request introduces a HuggingFaceReader to load datasets from the Hugging Face Hub, with a well-structured implementation and clean integration into the read operator. A critical security concern has been identified regarding the trust_remote_code parameter in HuggingFaceReader. While defaulting to False, its exposure allows for arbitrary code execution if enabled through untrusted configuration files. Additionally, the review identified a potential crash on empty inputs, a minor inefficiency, a few style improvements, and a small inconsistency in the README.
```python
text_column: str = "content",
modalities: Optional[list] = None,
cache_dir: Optional[str] = None,
trust_remote_code: bool = False,
```
The HuggingFaceReader class introduces the trust_remote_code parameter which is passed directly to the Hugging Face datasets.load_dataset function. When set to True, this allows the execution of arbitrary Python code contained within the dataset repository (e.g., in the loading script). Since this parameter is exposed to the end-user via the configuration file (through reader_kwargs), it creates a significant risk of Remote Code Execution (RCE) if an attacker can provide or influence the configuration. While the default is False, exposing this dangerous functionality to the configuration without adequate warnings or restrictions is a security concern. Consider removing this parameter from the configuration or implementing a strict allow-list for trusted datasets.
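A minimal sketch of the allow-list idea, assuming a module-level constant and helper (both hypothetical, not part of the PR):

```python
# Hypothetical allow-list guard; TRUSTED_DATASETS and this helper are
# illustrative names, not part of the PR.
TRUSTED_DATASETS = {"org/vetted-dataset"}  # datasets vetted by maintainers

def resolve_trust_remote_code(dataset_name: str, requested: bool) -> bool:
    # Honor trust_remote_code=True only for explicitly vetted datasets.
    return requested and dataset_name in TRUSTED_DATASETS
```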
```python
if isinstance(input_path, list):
    # Handle multiple datasets
    all_dss = []
    for path in input_path:
        ds = self._load_single_dataset(
            path,
            split=split,
            subset=subset,
            streaming=streaming,
            limit=limit,
            hf_datasets=hf_datasets,
        )
        all_dss.append(ds)

    if len(all_dss) == 1:
        combined_ds = all_dss[0]
    else:
        combined_ds = all_dss[0].union(*all_dss[1:])
```
If input_path is an empty list, all_dss will also be empty. This will cause an IndexError on line 98 (or 96) when trying to access all_dss[0]. You should handle the case of an empty input_path list to prevent a crash.
```python
if isinstance(input_path, list):
    if not input_path:
        import ray
        return ray.data.from_items([])
    # Handle multiple datasets
    all_dss = []
    for path in input_path:
        ds = self._load_single_dataset(
            path,
            split=split,
            subset=subset,
            streaming=streaming,
            limit=limit,
            hf_datasets=hf_datasets,
        )
        all_dss.append(ds)

    if len(all_dss) == 1:
        combined_ds = all_dss[0]
    else:
        combined_ds = all_dss[0].union(*all_dss[1:])
```

```python
batch["content"] = batch[self.text_column]
# Optional: delete old key to avoid duplication
# del batch[self.text_column]
```
The comment suggests deleting the old column to avoid duplication, which is a good practice for memory efficiency and to avoid confusion. Instead of keeping it as an optional, commented-out step, it's better to perform this action directly. Using pop is an idiomatic way to get the value and remove the key in one step.
batch["content"] = batch.pop(self.text_column)| for hf_uri in hf_uris: | ||
| # Parse URI format: "huggingface://dataset_name:subset:split" | ||
| uri_part = hf_uri.replace("huggingface://", "") | ||
| hf_reader = HuggingFaceReader(**reader_kwargs) | ||
| ds = hf_reader.read(uri_part) | ||
| read_tasks.append(ds) |
There are a couple of improvements that can be made here for efficiency and code style:

- The HuggingFaceReader is instantiated inside the loop, which is inefficient. It can be created once before the loop.
- str.replace() is used to remove the prefix. Using str.removeprefix() (available in Python 3.9+) is more explicit and safer, as it only removes the prefix if it's present at the start of the string (see the sketch after the suggestion below).
```diff
-for hf_uri in hf_uris:
-    # Parse URI format: "huggingface://dataset_name:subset:split"
-    uri_part = hf_uri.replace("huggingface://", "")
-    hf_reader = HuggingFaceReader(**reader_kwargs)
-    ds = hf_reader.read(uri_part)
-    read_tasks.append(ds)
+hf_reader = HuggingFaceReader(**reader_kwargs)
+for hf_uri in hf_uris:
+    # Parse URI format: "huggingface://dataset_name:subset:split"
+    uri_part = hf_uri.removeprefix("huggingface://")
+    ds = hf_reader.read(uri_part)
+    read_tasks.append(ds)
```
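The sketch referenced above: replace() strips the marker anywhere in the string, while removeprefix() touches it only at the start (the URI strings here are made up for demonstration):

```python
uri = "huggingface://squad"
uri.replace("huggingface://", "")    # 'squad'
uri.removeprefix("huggingface://")   # 'squad'

# The two diverge when the marker also appears mid-string:
odd = "mirror-of-huggingface://squad"
odd.replace("huggingface://", "")    # 'mirror-of-squad' (substring removed anywhere)
odd.removeprefix("huggingface://")   # returned unchanged, since it is not a prefix
```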
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
…raphGen into feat/hf-reader
This PR adds support for Hugging Face datasets as an input data source.
The config is as follows:
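(The config block itself is not included in this excerpt; below is a hypothetical sketch based on the huggingface:// URI format and the reader_kwargs discussed in the review. Every key name is an assumption, not the PR's actual schema.)

```python
# Hypothetical config sketch; all key names are assumptions, not the
# PR's actual schema.
config = {
    "read": {
        # URI format from the code comments: huggingface://dataset_name:subset:split
        "input_path": ["huggingface://squad:plain_text:train"],
        "reader_kwargs": {
            "text_column": "context",
            "streaming": True,
            "trust_remote_code": False,  # keep False; see the security note above
        },
    },
}
```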