
Conversation

@arrivets (Collaborator) commented Jan 7, 2026

When inserting a transaction would exceed a lane's maximum transaction count, the transaction is passed down to the next lane instead of being rejected outright.

Summary by CodeRabbit

  • Bug Fixes

    • Improved mempool insertion so transactions can cascade to lower-priority lanes when the preferred lane is at capacity, or when the signer already has a transaction in a lower-priority lane, preventing account sequence mismatches.
  • Tests

    • Added tests verifying lane behavior and transaction counts under priority and capacity constraints.



coderabbitai bot commented Jan 7, 2026

📝 Walkthrough

Walkthrough

The mempool Insert now implements "trickle down": when a transaction's signer already exists in a lower-priority lane, or when the matched lane is at max capacity, the transaction is tried in subsequent lower-priority lanes rather than being rejected. A new test verifies multi-lane insertion and capacity behavior.
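For orientation, here is a self-contained toy model of the trickle-down insertion. All names here are illustrative only; the real logic lives in LanedMempool.Insert in block/mempool.go and is quoted in the review comments below.

package main

import (
	"errors"
	"fmt"
)

// errMaxCapacity stands in for sdkmempool.ErrMempoolTxMaxCapacity.
var errMaxCapacity = errors.New("mempool tx max capacity")

type lane struct {
	name   string
	maxTxs int // 0 means unlimited
	txs    []string
}

func (l *lane) insert(tx string) error {
	if l.maxTxs > 0 && len(l.txs) >= l.maxTxs {
		return errMaxCapacity
	}
	l.txs = append(l.txs, tx)
	return nil
}

// insert walks the lanes in priority order; a lane that is at capacity no
// longer rejects the tx, it trickles down to the next matching lane.
func insert(lanes []*lane, tx string) error {
	for _, l := range lanes {
		if err := l.insert(tx); err != nil {
			if errors.Is(err, errMaxCapacity) {
				continue // trickle down to the next lane
			}
			return err
		}
		return nil
	}
	return nil
}

func main() {
	priority := &lane{name: "priority", maxTxs: 1}
	fallback := &lane{name: "default"}
	for _, tx := range []string{"tx1", "tx2"} {
		_ = insert([]*lane{priority, fallback}, tx)
	}
	fmt.Println(len(priority.txs), len(fallback.txs)) // prints: 1 1
}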

Changes

  • Core Logic Enhancement (block/mempool.go): Modified Insert to allow transactions to cascade to other lanes when signer conflicts are detected in lower-priority lanes or when a lane returns ErrMempoolTxMaxCapacity; added the errors import and updated comments to document the trickle-down semantics.
  • Test Coverage (block/mempool_test.go): Added test TestMempoolLaneWithMaxTxs plus helpers and types (EmptyTx, NoopEncoder, NoopDecoder, NoopAdapter) to validate lane insertion and capacity/trickle-down behavior.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐇 I nibble through lanes both high and low,
I tiptoe where transactions gently go,
If one is full or a signer sits below,
I hop downstream and let the mempool flow,
A joyful thump — onward we grow!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 25.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)
  • Description Check (✅ Passed): Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The title accurately summarizes the main change, allowing transactions to "fallthrough" to the next lane when the current lane is full, which aligns with the core logic changes in block/mempool.go.



@arrivets arrivets force-pushed the C-696/dont-reject-txs-that-exceed-count-limit branch from 69978a5 to bccb08e on January 7, 2026 at 09:21
@arrivets arrivets marked this pull request as draft January 7, 2026 09:22

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In @block/mempool_test.go:
- Around line 43-52: NoopAdapter's GetSigners always returns the same signer
which causes signer-conflict logic to trigger; change NoopAdapter.GetSigners to
produce a unique SignerData per transaction (e.g., derive signer bytes from the
tx hash or incrementing counter so each tx yields distinct Signer and Sequence)
so the test exercises capacity/trickle behavior only, and update the tests to
instantiate the adapter as &NoopAdapter{} where lanes expect a pointer.
🧹 Nitpick comments (2)
block/mempool_test.go (2)

33-33: TODO comment indicates incomplete implementation.

The comment suggests SigVerifiableTx needs to be implemented. If this is required for the test to properly validate production behavior, it should be addressed.

Would you like me to open an issue to track this work or help generate the implementation?


54-105: Enhance test to verify per-lane transaction distribution.

The test validates the total count but doesn't verify that transactions are correctly distributed across lanes. Adding assertions for individual lane counts would better demonstrate the trickle-down behavior.

🧪 Proposed enhancement to verify lane distribution
 	require.Equal(t, 2, mempool.CountTx())
+
+	// Verify first tx went to priority lane and second to default lane
+	require.Equal(t, 1, priorityLane.CountTx(), "priority lane should have 1 transaction")
+	require.Equal(t, 1, defaultLane.CountTx(), "default lane should have 1 transaction (trickled down)")
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 92b1cc7 and 69978a5.

📒 Files selected for processing (2)
  • block/mempool.go
  • block/mempool_test.go
🔇 Additional comments (5)
block/mempool.go (2)

5-5: LGTM: Import added for error checking.

The errors package import is necessary for the errors.Is check on line 120.


110-113: Comment accurately describes trickle-down behavior.

The updated comment clearly explains why transactions continue to lower lanes when a signer exists in a lower-priority lane.

block/mempool_test.go (3)

19-21: LGTM: Simple match handler for testing.

The always-true match handler appropriately ensures all transactions match all lanes for the test scenario.


23-31: LGTM: Minimal test transaction implementation.

The EmptyTx struct provides a simple test double with no messages.


35-41: LGTM: No-op encoder/decoder suitable for testing.

These minimal implementations are appropriate for testing mempool insertion logic without serialization concerns.
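For readers without the diff open, here is a minimal sketch of what such test doubles could look like. This is an assumption about their shape, not the PR's actual code; the extra EmptyTx methods exist only to satisfy both older and newer cosmos-sdk Tx interfaces, and the package name is made up.

package block_test

import (
	sdk "github.com/cosmos/cosmos-sdk/types"
	protov2 "google.golang.org/protobuf/proto"
)

// EmptyTx is a do-nothing transaction used purely as a test double.
type EmptyTx struct{}

func (EmptyTx) GetMsgs() []sdk.Msg                    { return nil }
func (EmptyTx) GetMsgsV2() ([]protov2.Message, error) { return nil, nil }
func (EmptyTx) ValidateBasic() error                  { return nil }

// NoopEncoder and NoopDecoder mirror the sdk.TxEncoder / sdk.TxDecoder
// function signatures and skip real serialization entirely.
func NoopEncoder(tx sdk.Tx) ([]byte, error)      { return []byte{}, nil }
func NoopDecoder(txBytes []byte) (sdk.Tx, error) { return EmptyTx{}, nil }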

Comment on lines +43 to +50
type NoopAdapter struct{}

func (n NoopAdapter) GetSigners(tx sdk.Tx) ([]signerextraction.SignerData, error) {
	return []signerextraction.SignerData{
		{
			Signer:   []byte("noop"),
			Sequence: 0,
		},
	}, nil
}

@coderabbitai coderabbitai bot Jan 7, 2026


🛠️ Refactor suggestion | 🟠 Major

NoopAdapter returns identical signer for all transactions.

This adapter returns the same signer ("noop", sequence 0) for every transaction. When the test inserts two transactions, both will have the same signer, which triggers the "signer exists in lower priority lane" check (line 109 in mempool.go) rather than purely testing the capacity-based trickle-down behavior.

Consider returning unique signer data (e.g., based on transaction hash or a counter) to isolate capacity testing from signer conflict logic.

♻️ Proposed fix to generate unique signers
-type NoopAdapter struct{}
+type NoopAdapter struct {
+	counter int
+}

-func (n NoopAdapter) GetSigners(tx sdk.Tx) ([]signerextraction.SignerData, error) {
+func (n *NoopAdapter) GetSigners(tx sdk.Tx) ([]signerextraction.SignerData, error) {
+	n.counter++
 	return []signerextraction.SignerData{
 		{
-			Signer:   []byte("noop"),
+			Signer:   []byte(fmt.Sprintf("noop-%d", n.counter)),
 			Sequence: 0,
 		},
 	}, nil
 }

Then update the test to use &NoopAdapter{} instead of NoopAdapter{} in lane configurations.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In @block/mempool_test.go around lines 43 - 52, NoopAdapter's GetSigners always
returns the same signer which causes signer-conflict logic to trigger; change
NoopAdapter.GetSigners to produce a unique SignerData per transaction (e.g.,
derive signer bytes from the tx hash or incrementing counter so each tx yields
distinct Signer and Sequence) so the test exercises capacity/trickle behavior
only, and update the tests to instantiate the adapter as &NoopAdapter{} where
lanes expect a pointer.

Collaborator Author


It doesn't affect this particular test.
The first transaction goes in the high-priority lane.
The second transaction should also go in the high-priority lane, but it exceeds the lane limit, so it goes in the default lane.
There are no other transactions.

@coderabbitai coderabbitai bot Jan 7, 2026

@arrivets, thank you for the clarification! You're right—the capacity limit is the primary driver here. Since the priority lane reaches its MaxTxs=1 limit with the first transaction, the second transaction correctly triggers the capacity-based trickle-down to the default lane, regardless of having the same signer. The test effectively validates the intended behavior.

@arrivets arrivets force-pushed the C-696/dont-reject-txs-that-exceed-count-limit branch from bccb08e to 0595520 on January 7, 2026 at 09:24
@arrivets arrivets marked this pull request as ready for review January 7, 2026 09:24

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 69978a5 and 0595520.

📒 Files selected for processing (2)
  • block/mempool.go
  • block/mempool_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • block/mempool_test.go
🔇 Additional comments (2)
block/mempool.go (2)

5-5: LGTM: Import needed for capacity check.

The errors package import is required for the errors.Is check on line 120.


110-113: LGTM: Comment clarifies trickle-down rationale.

The updated comment clearly explains the trickle-down behavior for signers in lower-priority lanes and the account sequence mismatch concern.

Comment on lines +120 to +124
if errors.Is(err, sdkmempool.ErrMempoolTxMaxCapacity) {
	// if the lane is at capacity, we let it trickle down to the
	// next lane.
	continue laneMatching
}

@coderabbitai coderabbitai bot Jan 7, 2026


⚠️ Potential issue | 🟠 Major

Return error when matched lane is at capacity and no other lane accepts the transaction.

The trickle-down implementation at lines 120-124 correctly continues to the next lane when a lane reaches capacity. However, a critical issue remains: when a transaction matches a lane that is at capacity and no subsequent lane accepts it, the function falls through to line 140 and returns nil, silently indicating success despite the transaction never being inserted.

Problematic scenario:

  • TX matches lane 1 → lane 1 at capacity (ErrMempoolTxMaxCapacity) → continue to lane 2
  • Lane 2 doesn't match → loop exits
  • Return nil (line 140) ← transaction lost, but caller thinks insertion succeeded

This differs from the case where a transaction matches no lanes (also returns nil on line 140), which may be acceptable. The fix should distinguish between "no match" and "matched but all lanes full" by tracking whether any lane matched and returning an appropriate error in the latter case.

Additionally, the test coverage (TestMempoolLaneWithMaxTxs) only verifies successful trickle-down where the second lane has unlimited capacity. Add a test case where all matching lanes are at capacity to detect this failure mode.

🔧 Proposed fix to track match status
 func (m *LanedMempool) Insert(ctx context.Context, tx sdk.Tx) (err error) {
 	defer func() {
 		if r := recover(); r != nil {
 			m.logger.Error("panic in Insert", "err", r)
 			err = fmt.Errorf("panic in Insert: %v", r)
 		}
 	}()
 
 	sdkCtx := sdk.UnwrapSDKContext(ctx)
+	var matchedButFull bool
 
 laneMatching:
 	for index, lane := range m.registry {
 		if lane.Match(sdkCtx, tx) {
+			matchedButFull = true
 			signersData, err := lane.SignerExtractor().GetSigners(tx)
 			if err != nil {
 				m.logger.Error("failed to extract signers upon insertion for tx", "tx", tx, "err", err)
 				return nil
 			}
 			for _, signerData := range signersData {
 				if m.txIndex.DoesExistInLowerPriorityLane(signerData.Signer.String(), index) {
 					// If the transaction exists in a lower priority lane, we let
 					// it trickle down until it reaches that lane.
 					// This is because it could cause account sequence mismatches.
+					matchedButFull = false
 					continue laneMatching
 				}
 			}
 
 			err = lane.Insert(ctx, tx)
 			if err != nil {
 				if errors.Is(err, sdkmempool.ErrMempoolTxMaxCapacity) {
 					// if the lane is at capacity, we let it trickle down to the
 					// next lane.
 					continue laneMatching
 				}
 				return err
 			}
 
 			sig := signersData[0]
 			firstSignerIdentifier := sig.Signer.String()
 			firstSignerNonce := sig.Sequence
 
 			for _, signerData := range signersData {
 				m.txIndex.Insert(signerData.Signer.String(), lane.Name(), index, firstSignerIdentifier, firstSignerNonce)
 			}
 
 			return nil
 		}
 	}
 
+	if matchedButFull {
+		return sdkmempool.ErrMempoolTxMaxCapacity
+	}
 	return nil
 }

Committable suggestion skipped: line range outside the PR's diff.

Collaborator Author


The default lane matches any transaction, and has unlimited capacity.
So a tx will always be inserted at least in the default lane.
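For illustration, a match handler with this catch-all behavior is simply the following sketch; matchEverything and the test package name are made up for this example.

package block_test

import sdk "github.com/cosmos/cosmos-sdk/types"

// matchEverything accepts any transaction, which is what guarantees that the
// fallthrough in Insert always ends in at least the default lane.
func matchEverything(_ sdk.Context, _ sdk.Tx) bool { return true }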

@coderabbitai coderabbitai bot Jan 7, 2026

@arrivets Thank you for the clarification! That architectural guarantee completely addresses my concern. If the default lane always matches any transaction and has unlimited capacity, then the fallthrough behavior is safe and correct—transactions will never be silently dropped.

The current implementation makes sense given this design constraint, and my suggested fix is unnecessary. Apologies for the confusion!


✏️ Learnings added
Learnt from: arrivets
Repo: InjectiveLabs/block-sdk PR: 8
File: block/mempool.go:120-124
Timestamp: 2026-01-07T09:33:31.338Z
Learning: In the Block SDK laned mempool architecture, the default lane is designed to match any transaction and has unlimited capacity. This guarantees that any transaction will always be inserted into at least the default lane, preventing silent transaction drops when higher-priority lanes are at capacity.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


@aarmoa aarmoa left a comment


The change looks good to me

