
Conversation

@drempapis (Contributor)

This is a WIP whose intention is to implement chunked streaming for the fetch phase, to reduce memory pressure when fetching large result sets. Instead of accumulating all search hits in memory on the data node before sending them to the coordinator, hits are now streamed in configurable chunks as they are produced.

This follows the paradigm of TransportRepositoryVerifyIntegrityCoordinationAction, but streams only between the coordinator and the data nodes.

The coordinator's FetchPhaseResponseStream integrates circuit-breaker accounting for the incoming traffic from all shards to prevent OOM.

* +-------------------+                  +-------------+                          +-----------+
* | FetchSearchPhase  |                  | Coordinator |                          | Data Node |
* +-------------------+                  +-------------+                          +-----------+
*      |                                     |                                          |
*      |- execute(request, dataNode)-------->|                                          | --[Initialization Phase]
*      |                                     |---[ShardFetchRequest]------------------->|
*      |                                     |                                          |
*      |                                     |                                          | --[Chunked Streaming Phase]
*      |                                     |<---[START_RESPONSE chunk]----------------|
*      |                                     |----[ACK (Empty)]------------------------>|
*      |                                     |                                          | --[Process data]
*      |                                     |<---[HITS chunk 1]------------------------|
*      |                                     |  [Accumulate in stream]                  |
*      |                                     |----[ACK (Empty)]------------------------>|
*      |                                     |                                          | --[Process more data]
*      |                                     |<---[HITS chunk 2]------------------------|
*      |                                     |  [Accumulate in stream]                  |
*      |                                     |----[ACK (Empty)]------------------------>|
*      |                                     |                                          |
*      |                                     |<--FetchSearchResult----------------------| --[Completion Phase]
*      |                                     |   (final response)                       |
*      |                                     |                                          |
*      |                                     |--[Build final result]                    |
*      |                                     |  (from accumulated chunks)               |
*      |<-- FetchSearchResult (complete) ----|                                          |
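
As a rough illustration of the circuit-breaker accounting mentioned above, here is a minimal coordinator-side sketch; the class, method and field names are hypothetical and the real FetchPhaseResponseStream API may differ:

```java
import org.elasticsearch.common.breaker.CircuitBreaker;
import org.elasticsearch.search.SearchHit;

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch only: not the PR's actual implementation.
class FetchPhaseResponseStreamSketch {
    private final CircuitBreaker breaker; // e.g. the coordinator's "request" breaker
    private final List<SearchHit> accumulatedHits = new ArrayList<>();
    private long reservedBytes = 0;

    FetchPhaseResponseStreamSketch(CircuitBreaker breaker) {
        this.breaker = breaker;
    }

    // Called for every HITS chunk received from a data node.
    void onChunk(List<SearchHit> hits, long chunkSizeInBytes) {
        // Reserve memory before buffering the chunk; this throws a
        // CircuitBreakingException instead of letting the node go OOM.
        breaker.addEstimateBytesAndMaybeBreak(chunkSizeInBytes, "fetch_phase_chunk");
        reservedBytes += chunkSizeInBytes;
        accumulatedHits.addAll(hits);
    }

    // Called after the final FetchSearchResult has been built, or on failure.
    void close() {
        breaker.addWithoutBreaking(-reservedBytes);
        reservedBytes = 0;
    }
}
```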

@DaveCTurner (Contributor) left a comment:


I like it :)

totalDocs
);

writer.writeResponseChunk(chunk, ActionListener.running(() -> {}));

I haven't chased this through completely but ActionListener.running(() -> {}) suggests you're sending all the chunks at once without any kind of backpressure mechanism. It's a good idea to use some amount of parallelism here but please make sure there's a (configurable?) bound on the amount of in-flight data.
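
One possible shape for such a bound, sketched with hypothetical ChunkWriter and Chunk types rather than the PR's real ones, is to send the next chunk only from the previous chunk's ACK listener; the same idea generalizes to N in-flight chunks with N taken from a setting:

```java
import org.elasticsearch.action.ActionListener;

import java.util.Iterator;

// Hypothetical sketch: at most one chunk in flight at a time.
class SerialChunkSender<Chunk> {
    interface ChunkWriter<C> {
        void writeResponseChunk(C chunk, ActionListener<Void> ackListener);
    }

    private final ChunkWriter<Chunk> writer;

    SerialChunkSender(ChunkWriter<Chunk> writer) {
        this.writer = writer;
    }

    void sendAll(Iterator<Chunk> chunks, ActionListener<Void> completionListener) {
        if (chunks.hasNext() == false) {
            completionListener.onResponse(null);
            return;
        }
        writer.writeResponseChunk(
            chunks.next(),
            // On ACK, send the next chunk; on failure, fail the rest of the fetch
            // instead of carrying on.
            ActionListener.wrap(ignored -> sendAll(chunks, completionListener), completionListener::onFailure)
        );
    }
}
```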


Also you need to handle a failure here - e.g. a network disconnect might deserve a retry, or else failing the rest of the search, but there's no point in carrying on with the rest of the process regardless. The coordinating node might not even know that one of these requests failed in the case of a network disconnect, so we have to send back a failure response at the end of the process.
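
For illustration, one way to surface that failure at the end of the process (all names here are hypothetical, not the PR's): remember the first per-chunk failure on the data node and fail the final response with it, so the coordinator always learns about the problem even if it never saw the dropped chunk.

```java
import org.elasticsearch.action.ActionListener;

// Hypothetical sketch: FinalResponse stands in for the PR's completion message type.
class ChunkFailureTracker<FinalResponse> {
    private volatile Exception firstFailure;

    // Wraps a per-chunk ACK listener so any transport failure is remembered.
    ActionListener<Void> wrapChunkListener(ActionListener<Void> delegate) {
        return ActionListener.wrap(delegate::onResponse, e -> {
            if (firstFailure == null) {
                firstFailure = e;
            }
            delegate.onFailure(e);
        });
    }

    // Called when building the final response: fail the whole fetch if any chunk failed.
    void complete(FinalResponse response, ActionListener<FinalResponse> channelListener) {
        if (firstFailure != null) {
            channelListener.onFailure(firstFailure);
        } else {
            channelListener.onResponse(response);
        }
    }
}
```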

Comment on lines +62 to +63
* | |<---[START_RESPONSE chunk]----------------|
* | |----[ACK (Empty)]------------------------>|

Does this happen concurrently with sending the hits chunks? Could this be combined with the first chunk of hits?

* | | [Accumulate in stream] |
* | |----[ACK (Empty)]------------------------>|
* | | |
* | |<--FetchSearchResult----------------------| --[Completion Phase]

Could we send the last chunk in this response message? That way, e.g. if the response fits into a single chunk, it's still just one network round trip.
