feat: add huggingface reader #172

Merged
ChenZiHong-Gavin merged 6 commits into main from feat/hf-reader on Feb 4, 2026

Conversation

@ChenZiHong-Gavin (Collaborator) commented Feb 4, 2026

This PR adds support for Hugging Face datasets as an input data source.
An example config:

nodes:
  - id: read_hf_dataset # Read from Hugging Face Hub
    op_name: read
    type: source
    dependencies: []
    params:
      input_path:
        - huggingface://wikitext:wikitext-103-v1:train # Format: huggingface://dataset_name:subset:split
      # Optional parameters for HuggingFaceReader:
      text_column: text # Column name containing the text content (default: content)
      # cache_dir: /path/to/cache # Optional: directory to cache downloaded datasets
      # trust_remote_code: false # Optional: whether to trust remote code in datasets
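
For illustration only, here is a minimal sketch of how a URI of this shape can be parsed and loaded with the datasets library. The parse_hf_uri helper is hypothetical, not the PR's actual implementation (which lives in HuggingFaceReader):

    # Hypothetical sketch; parse_hf_uri is not part of this PR.
    from typing import Optional, Tuple

    from datasets import load_dataset  # pip install datasets


    def parse_hf_uri(uri: str) -> Tuple[str, Optional[str], str]:
        # Split "huggingface://dataset_name:subset:split" into its parts.
        body = uri.removeprefix("huggingface://")
        parts = body.split(":")
        name = parts[0]
        subset = parts[1] if len(parts) > 2 else None
        split = parts[-1] if len(parts) > 1 else "train"  # assume train by default
        return name, subset, split


    name, subset, split = parse_hf_uri("huggingface://wikitext:wikitext-103-v1:train")
    # Equivalent to: load_dataset("wikitext", "wikitext-103-v1", split="train")
    ds = load_dataset(name, subset, split=split)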

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @ChenZiHong-Gavin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the data ingestion capabilities of the graphgen framework by adding native support for Hugging Face datasets. It provides a dedicated reader and integrates it into the existing data loading mechanism, allowing users to easily access and process a vast array of publicly available datasets for their graph generation tasks. The changes enhance the flexibility of data source specification and streamline the process of incorporating external data into graphgen workflows.

Highlights

  • New HuggingFaceReader: Introduced a new HuggingFaceReader class that allows direct ingestion of datasets from the Hugging Face Hub, supporting various dataset formats and configurations.
  • Unified Data Reading: The main read operator has been refactored to seamlessly handle both local file paths and new huggingface:// URIs, providing a single, flexible entry point for data loading.
  • Example Workflow: Added a new example script and configuration file (huggingface_config.yaml) demonstrating how to integrate a Hugging Face dataset into a full graphgen pipeline, from reading to knowledge graph generation and aggregated QA.
Changelog
  • examples/generate/generate_aggregated_qa/generate_aggregated_from_hf.sh
    • Added a new shell script to run graphgen with a Hugging Face dataset configuration.
  • examples/generate/generate_aggregated_qa/huggingface_config.yaml
    • Added a new YAML configuration file defining a graphgen pipeline that reads from Hugging Face, chunks documents, builds a knowledge graph, generates quizzes, judges them, partitions, and generates aggregated output.
  • graphgen/models/__init__.py
    • Imported HuggingFaceReader and added it to the module's __all__ export list.
  • graphgen/models/reader/__init__.py
    • Imported HuggingFaceReader from its new module.
  • graphgen/models/reader/huggingface_reader.py
    • Added the HuggingFaceReader class, inheriting from BaseReader, to facilitate reading datasets from Hugging Face Hub.
    • Implemented logic to parse Hugging Face dataset identifiers (e.g., dataset_name:subset:split).
    • Included functionality to convert Hugging Face datasets into Ray datasets, ensuring type and content columns are present.
    • Added error handling for missing datasets package dependency.
  • graphgen/operators/read/read.py
    • Imported HuggingFaceReader for use in the unified read operator.
    • Introduced _process_huggingface_datasets helper function to handle reading from Hugging Face URIs.
    • Refactored existing local file reading logic into a new _process_local_files helper function.
    • Added _combine_datasets helper function to unify the combination and post-processing of Ray datasets from different sources.
    • Updated the main read function to differentiate between Hugging Face URIs and local file paths, delegating processing to the appropriate helper functions (a rough sketch follows this list).
    • Updated docstrings to reflect the new support for Hugging Face dataset URIs in the input_path parameter.
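
A rough sketch of that dispatch, for orientation. Only the helper names come from the changelog above; the signatures and bodies here are assumptions rather than the PR's exact code:

    # Hypothetical sketch of the unified read dispatch; only the helper names
    # (_process_huggingface_datasets, _process_local_files, _combine_datasets)
    # come from this PR -- signatures and bodies are assumed.
    def read(input_path: list, **reader_kwargs):
        hf_uris = [p for p in input_path if p.startswith("huggingface://")]
        local_paths = [p for p in input_path if not p.startswith("huggingface://")]

        all_dss = []
        if hf_uris:
            all_dss.extend(_process_huggingface_datasets(hf_uris, reader_kwargs))
        if local_paths:
            all_dss.extend(_process_local_files(local_paths, reader_kwargs))
        return _combine_datasets(all_dss)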

@gemini-code-assist (Contributor) left a comment


Code Review

This pull request introduces a HuggingFaceReader to load datasets from the Hugging Face Hub. The implementation is well-structured, with a new reader class and modifications to the main read operator to handle Hugging Face URIs. I've identified a critical performance issue in the HuggingFaceReader that could lead to excessive memory usage, a high-severity bug related to handling empty inputs, and a medium-severity suggestion to improve efficiency in the read operator. Overall, a great feature addition with a few areas for improvement.

Comment on lines 162 to 194
        # Convert to pandas and then to Ray dataset
        # Add type column if not present
        dataset_dict = hf_dataset.to_dict()

        # Ensure data is in list of dicts format
        if isinstance(dataset_dict, dict) and all(
            isinstance(v, list) for v in dataset_dict.values()
        ):
            # Convert from column-based to row-based format
            num_rows = len(next(iter(dataset_dict.values())))
            data = [
                {key: dataset_dict[key][i] for key in dataset_dict}
                for i in range(num_rows)
            ]
        else:
            data = dataset_dict

        # Add type field if not present
        for item in data:
            if "type" not in item:
                item["type"] = "text"
            # Rename text_column to 'content' if different
            if self.text_column != "content" and self.text_column in item:
                item["content"] = item[self.text_column]

        # Apply limit if specified
        if limit:
            data = data[:limit]

        # Create Ray dataset
        ray_ds = ray.data.from_items(data)

        return ray_ds

critical

The current implementation loads the entire Hugging Face dataset into memory using hf_dataset.to_dict(). This is highly inefficient for large datasets and can easily lead to out-of-memory errors. It also defeats the purpose of using streaming=True, as it will still try to materialize the entire dataset.

A much more scalable approach is to use ray.data.from_huggingface() to convert the Hugging Face dataset object directly into a Ray dataset. Post-processing steps like adding columns or renaming them should be done using map_batches on the Ray dataset. This will allow for lazy evaluation and distributed processing, which is crucial for handling large-scale data.

        # Convert to Ray dataset
        if limit:
            if streaming:
                hf_dataset = hf_dataset.take(limit)
            else:
                hf_dataset = hf_dataset.select(range(limit))

        ray_ds = ray.data.from_huggingface(hf_dataset)

        def _process_batch(batch: dict[str, list]) -> dict[str, list]:
            if not batch:
                return {}
            num_rows = len(next(iter(batch.values())))
            if "type" not in batch:
                batch["type"] = ["text"] * num_rows

            if self.text_column != "content" and self.text_column in batch:
                batch["content"] = batch[self.text_column]

            return batch

        # Add type field and rename text_column in a scalable way
        ray_ds = ray_ds.map_batches(_process_batch)

        return ray_ds

Comment on lines +94 to +97
            if len(all_dss) == 1:
                combined_ds = all_dss[0]
            else:
                combined_ds = all_dss[0].union(*all_dss[1:])

high

If input_path is an empty list, all_dss will also be empty. This will cause an IndexError on line 97 when trying to access all_dss[0]. You should handle the case of an empty list of datasets to avoid this crash.

            if not all_dss:
                import ray

                return ray.data.from_items([])

            if len(all_dss) == 1:
                combined_ds = all_dss[0]
            else:
                combined_ds = all_dss[0].union(*all_dss[1:])

Comment on lines 58 to 64
    read_tasks = []
    for hf_uri in hf_uris:
        # Parse URI format: "huggingface://dataset_name:subset:split"
        uri_part = hf_uri.replace("huggingface://", "")
        hf_reader = HuggingFaceReader(**reader_kwargs)
        ds = hf_reader.read(uri_part)
        read_tasks.append(ds)

medium

The current implementation creates a new HuggingFaceReader instance inside a loop for each URI. This is inefficient. The HuggingFaceReader.read method is designed to accept a list of paths, and it will handle loading and combining them. You can improve performance and simplify the code by creating the reader once and passing the list of all URI parts to it.

    uri_parts = [uri.replace("huggingface://", "") for uri in hf_uris]
    hf_reader = HuggingFaceReader(**reader_kwargs)
    ds = hf_reader.read(uri_parts)
    read_tasks = [ds]

@github-actions bot added the documentation label (Improvements or additions to documentation) on Feb 4, 2026
@ChenZiHong-Gavin (Collaborator, Author) commented

/gemini review

@gemini-code-assist (Contributor) left a comment


Code Review

This pull request introduces a HuggingFaceReader to load datasets from the Hugging Face Hub, with a well-structured implementation and clean integration into the read operator. A critical security concern has been identified regarding the trust_remote_code parameter in HuggingFaceReader. While defaulting to False, its exposure allows for arbitrary code execution if enabled through untrusted configuration files. Additionally, the review identified potential issues like a crash with empty inputs, a minor inefficiency, and style suggestions, along with a small inconsistency in the README.

        text_column: str = "content",
        modalities: Optional[list] = None,
        cache_dir: Optional[str] = None,
        trust_remote_code: bool = False,

security-high

The HuggingFaceReader class introduces the trust_remote_code parameter which is passed directly to the Hugging Face datasets.load_dataset function. When set to True, this allows the execution of arbitrary Python code contained within the dataset repository (e.g., in the loading script). Since this parameter is exposed to the end-user via the configuration file (through reader_kwargs), it creates a significant risk of Remote Code Execution (RCE) if an attacker can provide or influence the configuration. While the default is False, exposing this dangerous functionality to the configuration without adequate warnings or restrictions is a security concern. Consider removing this parameter from the configuration or implementing a strict allow-list for trusted datasets.
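
One possible shape for the allow-list mentioned above, purely as a sketch: the ALLOWED_REMOTE_CODE_DATASETS constant and the guard function are hypothetical, not part of this PR.

    # Hypothetical guard; the allow-list and function are illustrative only.
    ALLOWED_REMOTE_CODE_DATASETS = {"some/vetted-dataset"}  # example entry


    def resolve_trust_remote_code(dataset_name: str, requested: bool) -> bool:
        # Honor trust_remote_code=True only for explicitly vetted datasets.
        if requested and dataset_name not in ALLOWED_REMOTE_CODE_DATASETS:
            raise ValueError(
                f"trust_remote_code=True is not allowed for {dataset_name!r}; "
                "vet its loading script and add it to the allow-list first."
            )
        return requested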

Comment on lines +81 to +98
        if isinstance(input_path, list):
            # Handle multiple datasets
            all_dss = []
            for path in input_path:
                ds = self._load_single_dataset(
                    path,
                    split=split,
                    subset=subset,
                    streaming=streaming,
                    limit=limit,
                    hf_datasets=hf_datasets,
                )
                all_dss.append(ds)

            if len(all_dss) == 1:
                combined_ds = all_dss[0]
            else:
                combined_ds = all_dss[0].union(*all_dss[1:])

high

If input_path is an empty list, all_dss will also be empty. This will cause an IndexError on line 98 (or 96) when trying to access all_dss[0]. You should handle the case of an empty input_path list to prevent a crash.

        if isinstance(input_path, list):
            if not input_path:
                import ray

                return ray.data.from_items([])

            # Handle multiple datasets
            all_dss = []
            for path in input_path:
                ds = self._load_single_dataset(
                    path,
                    split=split,
                    subset=subset,
                    streaming=streaming,
                    limit=limit,
                    hf_datasets=hf_datasets,
                )
                all_dss.append(ds)

            if len(all_dss) == 1:
                combined_ds = all_dss[0]
            else:
                combined_ds = all_dss[0].union(*all_dss[1:])

Comment on lines 194 to 196
batch["content"] = batch[self.text_column]
# Optional: delete old key to avoid duplication
# del batch[self.text_column]

medium

The comment suggests deleting the old column to avoid duplication, which is a good practice for memory efficiency and to avoid confusion. Instead of keeping it as an optional, commented-out step, it's better to perform this action directly. Using pop is an idiomatic way to get the value and remove the key in one step.

                batch["content"] = batch.pop(self.text_column)

Comment on lines 59 to 64
    for hf_uri in hf_uris:
        # Parse URI format: "huggingface://dataset_name:subset:split"
        uri_part = hf_uri.replace("huggingface://", "")
        hf_reader = HuggingFaceReader(**reader_kwargs)
        ds = hf_reader.read(uri_part)
        read_tasks.append(ds)

medium

There are a couple of improvements that can be made here for efficiency and code style:

  1. The HuggingFaceReader is instantiated inside the loop, which is inefficient. It can be created once before the loop.
  2. str.replace() is used to remove the prefix. Using str.removeprefix() (available in Python 3.9+) is more explicit and safer, as it only removes the prefix if it's present at the start of the string.
Suggested change
-    for hf_uri in hf_uris:
-        # Parse URI format: "huggingface://dataset_name:subset:split"
-        uri_part = hf_uri.replace("huggingface://", "")
-        hf_reader = HuggingFaceReader(**reader_kwargs)
-        ds = hf_reader.read(uri_part)
-        read_tasks.append(ds)
+    hf_reader = HuggingFaceReader(**reader_kwargs)
+    for hf_uri in hf_uris:
+        # Parse URI format: "huggingface://dataset_name:subset:split"
+        uri_part = hf_uri.removeprefix("huggingface://")
+        ds = hf_reader.read(uri_part)
+        read_tasks.append(ds)

ChenZiHong-Gavin and others added 3 commits February 4, 2026 17:48
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@ChenZiHong-Gavin changed the title from "feat: add graphgen reader" to "feat: add huggingface reader" on Feb 4, 2026
@ChenZiHong-Gavin merged commit 9facd1f into main on Feb 4, 2026 (7 checks passed)
@ChenZiHong-Gavin deleted the feat/hf-reader branch on February 4, 2026 at 10:33

Labels

core, documentation (Improvements or additions to documentation), examples
