
Discussion: Native Arrow Flight Shuffle Transport for DataFusion-Comet #3552

@Shekharrajak

Description

This discussion proposes adding native Arrow Flight support to DataFusion-Comet's shuffle implementation. Currently, Comet writes shuffle data in Arrow IPC format but relies on Spark's Netty-based BlockManager for network transfer. By implementing Arrow Flight in the Rust native layer, we can achieve true end-to-end zero-copy columnar shuffle, eliminating the JVM boundary crossing for network I/O.

Current Flow:

  1. Rust ShuffleWriter -> Arrow IPC files -> Disk
  2. Spark BlockManager -> Netty -> Remote Executor
  3. JNI -> Rust ShuffleReader -> RecordBatch
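
For reference, a minimal sketch of step 1 of the current flow using the arrow crate's IPC `FileWriter`. The `write_partition` helper and the partition file name are illustrative, not Comet's actual shuffle file layout:

```rust
// Step 1 of the current flow: the shuffle writer spills one partition's
// batches to an Arrow IPC file on disk; Spark's BlockManager later ships
// that file over Netty.
use std::fs::File;
use std::sync::Arc;

use arrow::array::Int64Array;
use arrow::datatypes::{DataType, Field, Schema};
use arrow::ipc::writer::FileWriter;
use arrow::record_batch::RecordBatch;

fn write_partition(path: &str, batches: &[RecordBatch], schema: &Schema) -> arrow::error::Result<()> {
    let file = File::create(path)?;
    let mut writer = FileWriter::try_new(file, schema)?;
    for batch in batches {
        writer.write(batch)?;
    }
    writer.finish()?;
    Ok(())
}

fn main() -> arrow::error::Result<()> {
    let schema = Arc::new(Schema::new(vec![Field::new("value", DataType::Int64, false)]));
    let batch = RecordBatch::try_new(
        schema.clone(),
        vec![Arc::new(Int64Array::from(vec![1, 2, 3]))],
    )?;
    // One IPC file per map task / partition; the file name here is made up.
    write_partition("shuffle_0_0.arrow", &[batch], &schema)?;
    Ok(())
}
```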

Proposed Flow:

  1. Rust ShuffleWriter -> In-memory Arrow buffers
  2. Rust FlightServer -> gRPC/HTTP2 -> Remote Rust FlightClient
  3. Rust ShuffleReader -> RecordBatch (zero-copy)
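
A rough sketch of how steps 2-3 could look with the arrow-flight crate. The server-side function builds the stream a `do_get` handler for a shuffle partition would return, and the client-side function is the reduce-side fetch. The function names (`shuffle_partition_stream`, `fetch_partition`) and the ticket encoding are hypothetical, not a settled design:

```rust
use arrow::record_batch::RecordBatch;
use arrow_flight::client::FlightClient;
use arrow_flight::encode::FlightDataEncoderBuilder;
use arrow_flight::error::FlightError;
use arrow_flight::{FlightData, Ticket};
use futures::stream::{self, Stream, TryStreamExt};
use tonic::transport::Channel;

/// Map side: encode one in-memory partition as a Flight data stream.
/// In a real FlightService, this stream would be wrapped in a tonic
/// Response and returned from `do_get`.
fn shuffle_partition_stream(
    batches: Vec<RecordBatch>,
) -> impl Stream<Item = Result<FlightData, FlightError>> {
    let input = stream::iter(batches.into_iter().map(Ok::<RecordBatch, FlightError>));
    // The encoder turns RecordBatches into Flight messages directly from
    // the in-memory Arrow buffers, with no intermediate file on disk.
    FlightDataEncoderBuilder::new().build(input)
}

/// Reduce side: fetch one partition from a remote executor's Flight endpoint.
async fn fetch_partition(
    endpoint: &str,
    ticket_bytes: Vec<u8>,
) -> Result<Vec<RecordBatch>, Box<dyn std::error::Error>> {
    let channel = Channel::from_shared(endpoint.to_string())?.connect().await?;
    let mut client = FlightClient::new(channel);
    // The ticket identifies the shuffle partition, e.g. "shuffle_id/map_id/partition".
    let stream = client.do_get(Ticket::new(ticket_bytes)).await?;
    // FlightRecordBatchStream decodes FlightData back into RecordBatches.
    let batches = stream.try_collect::<Vec<_>>().await?;
    Ok(batches)
}
```

Because both ends of the transfer stay in the Rust native layer, the fetched FlightData frames are decoded straight into Arrow buffers, so the read path should no longer need to materialize IPC files or cross the JNI boundary for network I/O.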

