Tell us about the bug
InfluxDB3Sink crashes when InfluxDB rejects points outside the retention policy, causing the Kafka consumer to enter an infinite retry loop.
In addition, the TIME_PRECISION_LEN values for the us and ns precisions were wrong.
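For reference, a quick check of the expected digit counts (assuming TIME_PRECISION_LEN maps a precision to the digit length of an epoch timestamp at that precision):

```python
import time

# Digit counts of a current epoch timestamp per precision;
# "us" should correspond to 16 digits and "ns" to 19.
now_s = int(time.time())
print(len(str(now_s)))           # 10 -> "s"
print(len(str(now_s * 10**3)))   # 13 -> "ms"
print(len(str(now_s * 10**6)))   # 16 -> "us"
print(len(str(now_s * 10**9)))   # 19 -> "ns"
```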
What did you expect to see?
The sink should log the dropped points, commit offsets for the valid data, and continue processing, since the rejected points would be deleted by InfluxDB's retention policy anyway even if they had been written.
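For illustration only, a rough sketch of that tolerant behaviour, assuming a hypothetical subclass that wraps the sink's write(). This coarse version logs and drops the whole failing batch rather than only the rejected points, but it lets offsets be committed so the pipeline can move on:

```python
import logging

from quixstreams.sinks.core.influxdb3 import InfluxDB3Sink

logger = logging.getLogger(__name__)


class TolerantInfluxDB3Sink(InfluxDB3Sink):
    """Hypothetical sink that tolerates writes rejected by the retention policy."""

    def write(self, batch):
        try:
            super().write(batch)
        except Exception as exc:  # ideally narrowed to the retention-rejection error
            # Log and drop the batch so the checkpoint can commit and processing continues.
            logger.warning("InfluxDB rejected batch, dropping it: %s", exc)
```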
Current Behavior
A partial write raises InfluxDBError → offsets are not committed → the pipeline reprocesses the same batch → valid data is rewritten while the expired points fail again, repeating indefinitely.
What version of the library are you using?
3.23.1
Workaround?
Filter data based on the InfluxDB retention policy before sinking (a sketch follows the list below).
Which adds:
- Extra processing overhead
- Environment-specific complexity
- Risk of inconsistent behavior:
  - If retention policies change and the filter is not updated, valid points might get dropped.
  - This could lead to data loss that's hard to trace back.
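For completeness, a minimal sketch of that workaround filter, assuming records carry an epoch-millisecond "timestamp" field and a known retention window. The broker address, topic name, and InfluxDB3Sink constructor arguments below are placeholders, not taken from the actual pipeline:

```python
from datetime import datetime, timedelta, timezone

from quixstreams import Application
from quixstreams.sinks.core.influxdb3 import InfluxDB3Sink

# Assumed retention window of the target database; must be kept in sync manually.
RETENTION = timedelta(days=30)


def within_retention(value: dict) -> bool:
    # Assumes each record carries an epoch-millisecond "timestamp" field.
    ts = datetime.fromtimestamp(value["timestamp"] / 1000, tz=timezone.utc)
    return datetime.now(timezone.utc) - ts < RETENTION


app = Application(broker_address="localhost:9092")  # placeholder broker
topic = app.topic("sensor-data")                     # placeholder topic name

influx_sink = InfluxDB3Sink(
    token="<token>",              # constructor arguments are placeholders;
    host="<host>",                # mirror your existing sink configuration here
    organization_id="<org>",
    database="<database>",
    measurement="sensor",
)

sdf = app.dataframe(topic=topic)
sdf = sdf.filter(within_retention)  # drop points InfluxDB would reject as too old
sdf.sink(influx_sink)

if __name__ == "__main__":
    app.run()
```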
Anything else we should know?
Production pipelines can stall, wasting resources and requiring manual intervention.