fix(laned_mempool): support tx fallthrough when lane is full #8
base: v2.5.x-inj
Conversation
📝 Walkthrough
The mempool Insert now implements "trickle down": when a signer exists in a lower-priority lane, or a matching lane is at max capacity, the transaction is tried in lower-priority lanes rather than failing. A new test verifies multi-lane insertion and capacity behavior.
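As a rough illustration of the flow described above, here is a minimal sketch with a reduced, hypothetical `Lane` interface (not the repository's actual lane type); it reuses the cosmos-sdk `sdkmempool.ErrMempoolTxMaxCapacity` sentinel that is referenced later in this review:

```go
package sketch

import (
	"context"
	"errors"

	sdk "github.com/cosmos/cosmos-sdk/types"
	sdkmempool "github.com/cosmos/cosmos-sdk/types/mempool"
)

// Lane is a deliberately reduced stand-in for the real lane type.
type Lane interface {
	Match(ctx sdk.Context, tx sdk.Tx) bool
	Insert(ctx context.Context, tx sdk.Tx) error
}

// insertWithTrickleDown walks the lanes in priority order; when a matching
// lane reports it is at capacity, the transaction falls through to the next lane.
func insertWithTrickleDown(ctx context.Context, sdkCtx sdk.Context, lanes []Lane, tx sdk.Tx) error {
	for _, lane := range lanes {
		if !lane.Match(sdkCtx, tx) {
			continue
		}
		err := lane.Insert(ctx, tx)
		if err == nil {
			return nil
		}
		if errors.Is(err, sdkmempool.ErrMempoolTxMaxCapacity) {
			// Lane full: trickle down to a lower-priority lane.
			continue
		}
		return err
	}
	return nil
}
```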
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Force-pushed 69978a5 to bccb08e
Actionable comments posted: 2
🧹 Nitpick comments (2)
block/mempool_test.go (2)
33-33: TODO comment indicates incomplete implementation. The comment suggests `SigVerifiableTx` needs to be implemented. If this is required for the test to properly validate production behavior, it should be addressed. Would you like me to open an issue to track this work or help generate the implementation?
54-105: Enhance test to verify per-lane transaction distribution. The test validates the total count but doesn't verify that transactions are correctly distributed across lanes. Adding assertions for individual lane counts would better demonstrate the trickle-down behavior.
🧪 Proposed enhancement to verify lane distribution
```diff
 require.Equal(t, 2, mempool.CountTx())
+
+// Verify first tx went to priority lane and second to default lane
+require.Equal(t, 1, priorityLane.CountTx(), "priority lane should have 1 transaction")
+require.Equal(t, 1, defaultLane.CountTx(), "default lane should have 1 transaction (trickled down)")
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- block/mempool.go
- block/mempool_test.go
🔇 Additional comments (5)
block/mempool.go (2)
5-5: LGTM: Import added for error checking. The `errors` package import is necessary for the `errors.Is` check on line 120.
110-113: Comment accurately describes trickle-down behavior. The updated comment clearly explains why transactions continue to lower lanes when a signer exists in a lower-priority lane.
block/mempool_test.go (3)
19-21: LGTM: Simple match handler for testing. The always-true match handler appropriately ensures all transactions match all lanes for the test scenario.
23-31: LGTM: Minimal test transaction implementation. The `EmptyTx` struct provides a simple test double with no messages.
35-41: LGTM: No-op encoder/decoder suitable for testing. These minimal implementations are appropriate for testing mempool insertion logic without serialization concerns.
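For reference, no-op encoder/decoder functions of this kind usually just satisfy the cosmos-sdk `sdk.TxEncoder`/`sdk.TxDecoder` signatures; a sketch under that assumption (a fragment in the test file's context, not the PR's exact code; `EmptyTx` is the test double described above):

```go
// Hypothetical no-op encoder/decoder: nothing is serialized, and decoding
// always yields the empty test transaction.
func noopTxEncoder(tx sdk.Tx) ([]byte, error)      { return []byte{}, nil }
func noopTxDecoder(txBytes []byte) (sdk.Tx, error) { return EmptyTx{}, nil }
```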
```go
type NoopAdapter struct{}

func (n NoopAdapter) GetSigners(tx sdk.Tx) ([]signerextraction.SignerData, error) {
	return []signerextraction.SignerData{
		{
			Signer:   []byte("noop"),
			Sequence: 0,
		},
	}, nil
}
```
🛠️ Refactor suggestion | 🟠 Major
NoopAdapter returns identical signer for all transactions.
This adapter returns the same signer ("noop", sequence 0) for every transaction. When the test inserts two transactions, both will have the same signer, which triggers the "signer exists in lower priority lane" check (line 109 in mempool.go) rather than purely testing the capacity-based trickle-down behavior.
Consider returning unique signer data (e.g., based on transaction hash or a counter) to isolate capacity testing from signer conflict logic.
♻️ Proposed fix to generate unique signers
```diff
-type NoopAdapter struct{}
+type NoopAdapter struct {
+	counter int
+}

-func (n NoopAdapter) GetSigners(tx sdk.Tx) ([]signerextraction.SignerData, error) {
+func (n *NoopAdapter) GetSigners(tx sdk.Tx) ([]signerextraction.SignerData, error) {
+	n.counter++
 	return []signerextraction.SignerData{
 		{
-			Signer:   []byte("noop"),
+			Signer:   []byte(fmt.Sprintf("noop-%d", n.counter)),
 			Sequence: 0,
 		},
 	}, nil
 }
```

Then update the test to use `&NoopAdapter{}` instead of `NoopAdapter{}` in lane configurations.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @block/mempool_test.go around lines 43 - 52, NoopAdapter's GetSigners always
returns the same signer which causes signer-conflict logic to trigger; change
NoopAdapter.GetSigners to produce a unique SignerData per transaction (e.g.,
derive signer bytes from the tx hash or incrementing counter so each tx yields
distinct Signer and Sequence) so the test exercises capacity/trickle behavior
only, and update the tests to instantiate the adapter as &NoopAdapter{} where
lanes expect a pointer.
arrivets: It doesn't affect this particular test. The first transaction goes in the high-priority lane. The second transaction should also go in the high-priority lane, but it exceeds the lane limit, so it goes in the default lane. There are no other transactions.
@arrivets, thank you for the clarification! You're right—the capacity limit is the primary driver here. Since the priority lane reaches its MaxTxs=1 limit with the first transaction, the second transaction correctly triggers the capacity-based trickle-down to the default lane, regardless of having the same signer. The test effectively validates the intended behavior.
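In other words, the behavior under discussion boils down to roughly this fragment of the test body (a sketch; `ctx`, `mempool`, and `EmptyTx` are the test's own fixtures, and the priority lane is assumed to be configured with MaxTxs=1):

```go
require.NoError(t, mempool.Insert(ctx, EmptyTx{})) // fills the priority lane (MaxTxs=1)
require.NoError(t, mempool.Insert(ctx, EmptyTx{})) // lane at capacity, trickles down to the default lane
require.Equal(t, 2, mempool.CountTx())
```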
Force-pushed bccb08e to 0595520
Actionable comments posted: 1
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- block/mempool.go
- block/mempool_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
- block/mempool_test.go
🔇 Additional comments (2)
block/mempool.go (2)
5-5: LGTM: Import needed for capacity check. The `errors` package import is required for the `errors.Is` check on line 120.
110-113: LGTM: Comment clarifies trickle-down rationale. The updated comment clearly explains the trickle-down behavior for signers in lower-priority lanes and the account sequence mismatch concern.
```go
if errors.Is(err, sdkmempool.ErrMempoolTxMaxCapacity) {
	// if the lane is at capacity, we let it trickle down to the
	// next lane.
	continue laneMatching
}
```
Return error when matched lane is at capacity and no other lane accepts the transaction.
The trickle-down implementation at lines 120-124 correctly continues to the next lane when a lane reaches capacity. However, a critical issue remains: when a transaction matches a lane that is at capacity and no subsequent lane accepts it, the function falls through to line 140 and returns nil, silently indicating success despite the transaction never being inserted.
Problematic scenario:
- TX matches lane 1 → lane 1 at capacity (`ErrMempoolTxMaxCapacity`) → continue to lane 2
- Lane 2 doesn't match → loop exits
- Return nil (line 140) ← transaction lost, but caller thinks insertion succeeded
This differs from the case where a transaction matches no lanes (also returns nil on line 140), which may be acceptable. The fix should distinguish between "no match" and "matched but all lanes full" by tracking whether any lane matched and returning an appropriate error in the latter case.
Additionally, the test coverage (TestMempoolLaneWithMaxTxs) only verifies successful trickle-down where the second lane has unlimited capacity. Add a test case where all matching lanes are at capacity to detect this failure mode.
🔧 Proposed fix to track match status
```diff
 func (m *LanedMempool) Insert(ctx context.Context, tx sdk.Tx) (err error) {
 	defer func() {
 		if r := recover(); r != nil {
 			m.logger.Error("panic in Insert", "err", r)
 			err = fmt.Errorf("panic in Insert: %v", r)
 		}
 	}()

 	sdkCtx := sdk.UnwrapSDKContext(ctx)
+	var matchedButFull bool

 laneMatching:
 	for index, lane := range m.registry {
 		if lane.Match(sdkCtx, tx) {
+			matchedButFull = true
 			signersData, err := lane.SignerExtractor().GetSigners(tx)
 			if err != nil {
 				m.logger.Error("failed to extract signers upon insertion for tx", "tx", tx, "err", err)
 				return nil
 			}

 			for _, signerData := range signersData {
 				if m.txIndex.DoesExistInLowerPriorityLane(signerData.Signer.String(), index) {
 					// If the transaction exists in a lower priority lane, we let
 					// it trickle down until it reaches that lane.
 					// This is because it could cause account sequence mismatches.
+					matchedButFull = false
 					continue laneMatching
 				}
 			}

 			err = lane.Insert(ctx, tx)
 			if err != nil {
 				if errors.Is(err, sdkmempool.ErrMempoolTxMaxCapacity) {
 					// if the lane is at capacity, we let it trickle down to the
 					// next lane.
 					continue laneMatching
 				}
 				return err
 			}

 			sig := signersData[0]
 			firstSignerIdentifier := sig.Signer.String()
 			firstSignerNonce := sig.Sequence

 			for _, signerData := range signersData {
 				m.txIndex.Insert(signerData.Signer.String(), lane.Name(), index, firstSignerIdentifier, firstSignerNonce)
 			}

 			return nil
 		}
 	}

+	if matchedButFull {
+		return sdkmempool.ErrMempoolTxMaxCapacity
+	}
 	return nil
 }
```

Committable suggestion skipped: line range outside the PR's diff.
arrivets: The default lane matches any transaction and has unlimited capacity, so a tx will always be inserted at least in the default lane.
@arrivets Thank you for the clarification! That architectural guarantee completely addresses my concern. If the default lane always matches any transaction and has unlimited capacity, then the fallthrough behavior is safe and correct—transactions will never be silently dropped.
The current implementation makes sense given this design constraint, and my suggested fix is unnecessary. Apologies for the confusion!
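For illustration, the catch-all guarantee arrivets describes comes down to a default-lane match handler that accepts every transaction; a minimal sketch (assuming a block-sdk-style `func(sdk.Context, sdk.Tx) bool` match-handler signature; the actual lane wiring is not part of this review):

```go
// defaultMatchHandler accepts every transaction, so the default lane
// always matches and serves as the final fallback.
func defaultMatchHandler(ctx sdk.Context, tx sdk.Tx) bool {
	return true
}
```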
✏️ Learnings added
Learnt from: arrivets
Repo: InjectiveLabs/block-sdk PR: 8
File: block/mempool.go:120-124
Timestamp: 2026-01-07T09:33:31.338Z
Learning: In the Block SDK laned mempool architecture, the default lane is designed to match any transaction and has unlimited capacity. This guarantees that any transaction will always be inserted into at least the default lane, preventing silent transaction drops when higher-priority lanes are at capacity.
aarmoa left a comment:
The change looks good to me.
When a transaction exceeds the max count of a lane, it is passed down to the next lane instead of being completely rejected.
Summary by CodeRabbit
Bug Fixes
- Transactions that hit a full lane now fall through to the next lane instead of being rejected.
Tests
- Added a test covering multi-lane insertion and lane-capacity (fallthrough) behavior.