Conversation

@scotthart
Member

No description provided.

@gemini-code-assist

Summary of Changes

Hello @scotthart, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request prototypes a dynamic channel pooling system for Bigtable clients, designed to enhance load balancing and resource management. It introduces a "random two least used" strategy for selecting gRPC channels, aiming to distribute RPCs more efficiently across available connections. The changes involve new internal components for channel management, modifications to the stub factory to integrate this new strategy, and extensive debugging output to monitor its behavior.

Highlights

  • Dynamic Channel Pool Introduction: A new DynamicChannelPool mechanism is introduced to manage Bigtable stubs, allowing for dynamic scaling of channels based on load.
  • Random Two Least Used Strategy: A BigtableRandomTwoLeastUsed decorator is implemented, which selects a channel from the pool using the "random two least used" load balancing strategy.
  • Integration with Stub Factory: The Bigtable stub factory is updated to utilize this new BigtableRandomTwoLeastUsed decorator, replacing the previous round-robin approach for channel selection.
  • Channel Activity Tracking: StubWrapper and StreamingReadRpcTracking/AsyncStreamingReadRpcTracking classes are added to track the number of outstanding RPCs on each channel, which is crucial for the "least used" strategy.
  • Debugging Output: Numerous std::cout statements have been added across various Bigtable internal files to trace the execution flow, likely for debugging and understanding the new channel pooling behavior.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a prototype for a dynamic channel pool for Bigtable, using a "power of two choices" load balancing strategy. This is a valuable addition. The overall structure is sound, but I've identified several critical issues related to thread safety and potential crashes that need to be addressed. Additionally, there are numerous debugging statements that should be removed, and I've suggested refactoring to reduce code duplication, in line with the repository's style guide.

Comment on lines 35 to 59
class StubWrapper {
 public:
  explicit StubWrapper(std::shared_ptr<T> stub)
      : stub_(std::move(stub)), outstanding_rpcs_(0) {}

  int outstanding_rpcs(std::unique_lock<std::mutex> const&) const {
    return outstanding_rpcs_;
  }

  std::shared_ptr<T> AcquireStub() {
    std::unique_lock<std::mutex> lk(mu_);
    ++outstanding_rpcs_;
    return stub_;
  }

  void ReleaseStub() {
    std::unique_lock<std::mutex> lk(mu_);
    --outstanding_rpcs_;
  }

 private:
  mutable std::mutex mu_;
  std::shared_ptr<T> stub_;
  int outstanding_rpcs_;
};

critical

The outstanding_rpcs_ member is not consistently protected. AcquireStub() and ReleaseStub() modify it under this wrapper's own mu_, but outstanding_rpcs() only accepts a lock witness (std::unique_lock const&), and the caller passes a lock on a different mutex (the pool's), so reads can race with concurrent updates. This is a data race.

I recommend making outstanding_rpcs_ a std::atomic<int> to ensure thread-safe operations without needing a mutex for this counter. This would also simplify the StubWrapper class by removing its mutex.

class StubWrapper {
 public:
  explicit StubWrapper(std::shared_ptr<T> stub)
      : stub_(std::move(stub)) {}

  int outstanding_rpcs() const {
    return outstanding_rpcs_.load(std::memory_order_relaxed);
  }

  std::shared_ptr<T> AcquireStub() {
    ++outstanding_rpcs_;
    return stub_;
  }

  void ReleaseStub() {
    --outstanding_rpcs_;
  }

 private:
  std::shared_ptr<T> stub_;
  std::atomic<int> outstanding_rpcs_{0};
};

Comment on lines 222 to 244
void CheckPoolChannelHealth(std::unique_lock<std::mutex> const&) {
  auto average_rpc_per_channel =
      std::accumulate(channels_.begin(), channels_.end(),
                      [](std::shared_ptr<internal::StubWrapper<T>> const& s) {
                        return s->outstanding_rpcs();
                      }) /
      channels_.size();
  if (average_rpc_per_channel <
      sizing_policy_.minimum_average_outstanding_rpcs_per_channel) {
    // TODO(sdhart): Is there a downside to always removing the most recently
    // created channel?
    decommissioned_channels_.push_back(std::move(channels_.back()));
    channels_.pop_back();
    ScheduleRemoveChannel();
  }
  if (average_rpc_per_channel >
      sizing_policy_.maximum_average_outstanding_rpcs_per_channel) {
    // Channel/stub creation is expensive, instead of making the current RPC
    // wait on this, use an existing channel right now, and schedule a channel
    // to be added.
    ScheduleAddChannel();
  }
}

critical

This function has several critical issues:

  1. The call to std::accumulate is incorrect. It's missing the initial value and uses a unary operation instead of the required binary operation. This will not compile.
  2. There is a potential division-by-zero if channels_ is empty. The code should guard against this.
  3. The call to s->outstanding_rpcs() is not thread-safe, as pointed out in another comment. This can lead to data races.

I've provided a suggestion that fixes the std::accumulate usage and the division-by-zero. This assumes the outstanding_rpcs() method is made thread-safe.

  void CheckPoolChannelHealth(std::unique_lock<std::mutex> const&) {
    if (channels_.empty()) return;
    auto const total_rpcs =
        std::accumulate(channels_.begin(), channels_.end(), std::size_t{0},
                        [](std::size_t current,
                           std::shared_ptr<internal::StubWrapper<T>> const& s) {
                          return current + s->outstanding_rpcs();
                        });
    auto const average_rpc_per_channel = total_rpcs / channels_.size();
    if (average_rpc_per_channel <
        sizing_policy_.minimum_average_outstanding_rpcs_per_channel) {
      // TODO(sdhart): Is there a downside to always removing the most recently
      // created channel?
      decommissioned_channels_.push_back(std::move(channels_.back()));
      channels_.pop_back();
      ScheduleRemoveChannel();
    }
    if (average_rpc_per_channel >
        sizing_policy_.maximum_average_outstanding_rpcs_per_channel) {
      // Channel/stub creation is expensive, instead of making the current RPC
      // wait on this, use an existing channel right now, and schedule a channel
      // to be added.
      ScheduleAddChannel();
    }
  }

Comment on lines 139 to 158
std::shared_ptr<StubWrapper<T>> GetChannelRandomTwoLeastUsed() {
  std::cout << __PRETTY_FUNCTION__ << std::endl;
  std::unique_lock<std::mutex> lk(mu_);

  std::cout << __PRETTY_FUNCTION__ << ": channels_size()=" << channels_.size()
            << std::endl;
  // TODO: check if resize is needed.

  std::vector<std::size_t> indices(channels_.size());
  // TODO(sdhart): Maybe use iota on iterators instead of indices
  std::iota(indices.begin(), indices.end(), 0);
  std::shuffle(indices.begin(), indices.end(), rng_);

  std::shared_ptr<StubWrapper<T>> channel_1 = channels_[indices[0]];
  std::shared_ptr<StubWrapper<T>> channel_2 = channels_[indices[1]];

  return channel_1->outstanding_rpcs(lk) < channel_2->outstanding_rpcs(lk)
             ? channel_1
             : channel_2;
}

critical

This function accesses channels_[indices[1]] without verifying that the pool contains at least two channels. If channels_.size() is less than 2, this will lead to an out-of-bounds access and a program crash. Please add checks to handle cases where the pool size is 0 or 1.

  std::shared_ptr<StubWrapper<T>> GetChannelRandomTwoLeastUsed() {
    std::cout << __PRETTY_FUNCTION__ << std::endl;
    std::unique_lock<std::mutex> lk(mu_);

    std::cout << __PRETTY_FUNCTION__ << ": channels_size()=" << channels_.size()
              << std::endl;
    // TODO: check if resize is needed.

    if (channels_.empty()) return nullptr;
    if (channels_.size() == 1) return channels_[0];

    std::vector<std::size_t> indices(channels_.size());
    // TODO(sdhart): Maybe use iota on iterators instead of indices
    std::iota(indices.begin(), indices.end(), 0);
    std::shuffle(indices.begin(), indices.end(), rng_);

    std::shared_ptr<StubWrapper<T>> channel_1 = channels_[indices[0]];
    std::shared_ptr<StubWrapper<T>> channel_2 = channels_[indices[1]];

    return channel_1->outstanding_rpcs(lk) < channel_2->outstanding_rpcs(lk)
               ? channel_1
               : channel_2;
  }


void TableIntegrationTest::SetUp() {
  std::cout << __PRETTY_FUNCTION__ << std::endl;

high

This std::cout statement appears to be for debugging and should be removed.

DefaultBigtableStub::ReadRows(
    std::shared_ptr<grpc::ClientContext> context, Options const&,
    google::bigtable::v2::ReadRowsRequest const& request) {
  std::cout << __PRETTY_FUNCTION__ << std::endl;

high

This std::cout statement and the other one in this file appear to be for debugging. They should be removed.

 private:
  std::shared_ptr<internal::StubWrapper<BigtableStub>> Child();

  // std::mutex mu_;

medium

This commented-out member variable should be removed.

Comment on lines 127 to 137
// std::shared_ptr<StubWrapper<T>> GetChannel(
// std::unique_lock<std::mutex> const&) {
// // TODO: check for empty
// return channels_[0];
// }
//
// std::shared_ptr<StubWrapper<T>> GetChannel(
// std::unique_lock<std::mutex> const&, std::size_t index) {
// // TODO: bounds check
// return channels_[index];
// }

medium

This block of commented-out code should be removed.

Comment on lines 341 to 351
// std::unique_lock<std::mutex> lk(mu_);
// std::vector<std::size_t> indices(pool_->size(lk) - 1);
// // TODO(sdhart): Maybe use iota on iterators instead of indices
// std::iota(indices.begin(), indices.end(), 0);
// std::shuffle(indices.begin(), indices.end(), rng_);
// auto channel_1 = pool_->GetChannel(lk, indices[0]);
// auto channel_2 = pool_->GetChannel(lk, indices[1]);
//
// return channel_1->outstanding_rpcs(lk) < channel_2->outstanding_rpcs(lk)
// ? channel_1
// : channel_2;

medium

This block of commented-out code should be removed.

Comment on lines 208 to 212
std::sort(decommissioned_channels_.begin(), decommissioned_channels_.end(),
          [](std::shared_ptr<StubWrapper<T>> const& a,
             std::shared_ptr<StubWrapper<T>> b) {
            return a->outstanding_rpcs() > b->outstanding_rpcs();
          });

medium

The lambda for std::sort takes its second argument b by value, which causes an unnecessary copy of a std::shared_ptr. It should be taken by const& to avoid this overhead.

              [](std::shared_ptr<StubWrapper<T>> const& a,
                 std::shared_ptr<StubWrapper<T>> const& b) {
                return a->outstanding_rpcs() > b->outstanding_rpcs();
              });

Comment on lines 87 to 335
std::unique_ptr<google::cloud::internal::StreamingReadRpc<
    google::bigtable::v2::ReadRowsResponse>>
BigtableRandomTwoLeastUsed::ReadRows(
    std::shared_ptr<grpc::ClientContext> context, Options const& options,
    google::bigtable::v2::ReadRowsRequest const& request) {
  std::cout << __PRETTY_FUNCTION__ << std::endl;
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->ReadRows(std::move(context), options, request);
  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };
  return std::make_unique<
      StreamingReadRpcTracking<google::bigtable::v2::ReadRowsResponse>>(
      std::move(result), std::move(release_fn));
}

std::unique_ptr<google::cloud::internal::StreamingReadRpc<
    google::bigtable::v2::SampleRowKeysResponse>>
BigtableRandomTwoLeastUsed::SampleRowKeys(
    std::shared_ptr<grpc::ClientContext> context, Options const& options,
    google::bigtable::v2::SampleRowKeysRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->SampleRowKeys(std::move(context), options, request);
  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };
  return std::make_unique<
      StreamingReadRpcTracking<google::bigtable::v2::SampleRowKeysResponse>>(
      std::move(result), std::move(release_fn));
}

StatusOr<google::bigtable::v2::MutateRowResponse>
BigtableRandomTwoLeastUsed::MutateRow(
    grpc::ClientContext& context, Options const& options,
    google::bigtable::v2::MutateRowRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->MutateRow(context, options, request);
  child->ReleaseStub();
  return result;
}

std::unique_ptr<google::cloud::internal::StreamingReadRpc<
    google::bigtable::v2::MutateRowsResponse>>
BigtableRandomTwoLeastUsed::MutateRows(
    std::shared_ptr<grpc::ClientContext> context, Options const& options,
    google::bigtable::v2::MutateRowsRequest const& request) {
  std::cout << __PRETTY_FUNCTION__ << std::endl;
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->MutateRows(std::move(context), options, request);
  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };
  return std::make_unique<
      StreamingReadRpcTracking<google::bigtable::v2::MutateRowsResponse>>(
      std::move(result), std::move(release_fn));
}

StatusOr<google::bigtable::v2::CheckAndMutateRowResponse>
BigtableRandomTwoLeastUsed::CheckAndMutateRow(
    grpc::ClientContext& context, Options const& options,
    google::bigtable::v2::CheckAndMutateRowRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->CheckAndMutateRow(context, options, request);
  child->ReleaseStub();
  return result;
}

StatusOr<google::bigtable::v2::PingAndWarmResponse>
BigtableRandomTwoLeastUsed::PingAndWarm(
    grpc::ClientContext& context, Options const& options,
    google::bigtable::v2::PingAndWarmRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->PingAndWarm(context, options, request);
  child->ReleaseStub();
  return result;
}

StatusOr<google::bigtable::v2::ReadModifyWriteRowResponse>
BigtableRandomTwoLeastUsed::ReadModifyWriteRow(
    grpc::ClientContext& context, Options const& options,
    google::bigtable::v2::ReadModifyWriteRowRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->ReadModifyWriteRow(context, options, request);
  child->ReleaseStub();
  return result;
}

StatusOr<google::bigtable::v2::PrepareQueryResponse>
BigtableRandomTwoLeastUsed::PrepareQuery(
    grpc::ClientContext& context, Options const& options,
    google::bigtable::v2::PrepareQueryRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->PrepareQuery(context, options, request);
  child->ReleaseStub();
  return result;
}

std::unique_ptr<google::cloud::internal::StreamingReadRpc<
    google::bigtable::v2::ExecuteQueryResponse>>
BigtableRandomTwoLeastUsed::ExecuteQuery(
    std::shared_ptr<grpc::ClientContext> context, Options const& options,
    google::bigtable::v2::ExecuteQueryRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->ExecuteQuery(std::move(context), options, request);
  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };
  return std::make_unique<
      StreamingReadRpcTracking<google::bigtable::v2::ExecuteQueryResponse>>(
      std::move(result), std::move(release_fn));
}

std::unique_ptr<google::cloud::internal::AsyncStreamingReadRpc<
    google::bigtable::v2::ReadRowsResponse>>
BigtableRandomTwoLeastUsed::AsyncReadRows(
    google::cloud::CompletionQueue const& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::ReadRowsRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result =
      stub->AsyncReadRows(cq, std::move(context), std::move(options), request);
  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };
  return std::make_unique<
      AsyncStreamingReadRpcTracking<google::bigtable::v2::ReadRowsResponse>>(
      std::move(result), std::move(release_fn));
}

std::unique_ptr<google::cloud::internal::AsyncStreamingReadRpc<
    google::bigtable::v2::SampleRowKeysResponse>>
BigtableRandomTwoLeastUsed::AsyncSampleRowKeys(
    google::cloud::CompletionQueue const& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::SampleRowKeysRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->AsyncSampleRowKeys(cq, std::move(context),
                                         std::move(options), request);
  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };
  return std::make_unique<AsyncStreamingReadRpcTracking<
      google::bigtable::v2::SampleRowKeysResponse>>(std::move(result),
                                                    std::move(release_fn));
}

future<StatusOr<google::bigtable::v2::MutateRowResponse>>
BigtableRandomTwoLeastUsed::AsyncMutateRow(
    google::cloud::CompletionQueue& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::MutateRowRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result =
      stub->AsyncMutateRow(cq, std::move(context), std::move(options), request);
  child->ReleaseStub();
  return result;
}

std::unique_ptr<google::cloud::internal::AsyncStreamingReadRpc<
    google::bigtable::v2::MutateRowsResponse>>
BigtableRandomTwoLeastUsed::AsyncMutateRows(
    google::cloud::CompletionQueue const& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::MutateRowsRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->AsyncMutateRows(cq, std::move(context),
                                      std::move(options), request);

  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };

  return std::make_unique<
      AsyncStreamingReadRpcTracking<google::bigtable::v2::MutateRowsResponse>>(
      std::move(result), std::move(release_fn));
}

future<StatusOr<google::bigtable::v2::CheckAndMutateRowResponse>>
BigtableRandomTwoLeastUsed::AsyncCheckAndMutateRow(
    google::cloud::CompletionQueue& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::CheckAndMutateRowRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->AsyncCheckAndMutateRow(cq, std::move(context),
                                             std::move(options), request);
  child->ReleaseStub();
  return result;
}

future<StatusOr<google::bigtable::v2::ReadModifyWriteRowResponse>>
BigtableRandomTwoLeastUsed::AsyncReadModifyWriteRow(
    google::cloud::CompletionQueue& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::ReadModifyWriteRowRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->AsyncReadModifyWriteRow(cq, std::move(context),
                                              std::move(options), request);
  child->ReleaseStub();
  return result;
}

future<StatusOr<google::bigtable::v2::PrepareQueryResponse>>
BigtableRandomTwoLeastUsed::AsyncPrepareQuery(
    google::cloud::CompletionQueue& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::PrepareQueryRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->AsyncPrepareQuery(cq, std::move(context),
                                        std::move(options), request);
  child->ReleaseStub();
  return result;
}

medium

This file contains a significant amount of duplicated code for handling both unary and streaming RPCs. For instance, the logic to acquire a stub, execute a call, and release the stub is repeated for all unary calls, and a similar pattern exists for streaming calls. This violates the "Don't Repeat Yourself" (DRY) principle and makes the code harder to maintain. The repository style guide also discourages this level of duplication.

Consider refactoring this using helper functions or templates to centralize the logic. For example, you could have a helper for unary calls and another for streaming calls.

References
  1. The repository style guide prefers to factor out duplicated code if it appears 3 or more times in non-test files. (link)

@codecov

codecov bot commented Dec 7, 2025

Codecov Report

❌ Patch coverage is 67.82178% with 130 lines in your changes missing coverage. Please review.
✅ Project coverage is 92.91%. Comparing base (826dd38) to head (59d22fb).
⚠️ Report is 2 commits behind head on main.

Files with missing lines Patch % Lines
...gle/cloud/bigtable/internal/dynamic_channel_pool.h 47.10% 128 Missing ⚠️
...e/cloud/bigtable/internal/bigtable_stub_factory.cc 95.91% 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main   #15819      +/-   ##
==========================================
- Coverage   92.95%   92.91%   -0.05%     
==========================================
  Files        2458     2460       +2     
  Lines      227589   227977     +388     
==========================================
+ Hits       211547   211814     +267     
- Misses      16042    16163     +121     


@scotthart scotthart force-pushed the bigtable_dynamic_channel_pool_prototype branch from 56534d1 to 97d4f82 Compare December 8, 2025 23:21