Bigtable dynamic channel pool prototype #15819
base: main

Conversation
Summary of Changes

Hello @scotthart, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request prototypes a dynamic channel pooling system for Bigtable clients, designed to improve load balancing and resource management. It introduces a "random two least used" strategy for selecting gRPC channels, aiming to distribute RPCs more efficiently across available connections. The changes add new internal components for channel management, modify the stub factory to integrate the new strategy, and include extensive debugging output to monitor its behavior.
Code Review
This pull request introduces a prototype for a dynamic channel pool for Bigtable, using a "power of two choices" load balancing strategy. This is a valuable addition. The overall structure is sound, but I've identified several critical issues related to thread safety and potential crashes that need to be addressed. Additionally, there are numerous debugging statements that should be removed, and I've suggested refactoring to reduce code duplication, in line with the repository's style guide.
google/cloud/internal/channel_pool.h
Outdated
class StubWrapper {
 public:
  explicit StubWrapper(std::shared_ptr<T> stub)
      : stub_(std::move(stub)), outstanding_rpcs_(0) {}

  int outstanding_rpcs(std::unique_lock<std::mutex> const&) const {
    return outstanding_rpcs_;
  }

  std::shared_ptr<T> AcquireStub() {
    std::unique_lock<std::mutex> lk(mu_);
    ++outstanding_rpcs_;
    return stub_;
  }

  void ReleaseStub() {
    std::unique_lock<std::mutex> lk(mu_);
    --outstanding_rpcs_;
  }

 private:
  mutable std::mutex mu_;
  std::shared_ptr<T> stub_;
  int outstanding_rpcs_;
};
The outstanding_rpcs_ member is accessed in a non-thread-safe manner. It is read in outstanding_rpcs() without a lock, while being modified in AcquireStub() and ReleaseStub() under a lock. This creates a data race.
I recommend making outstanding_rpcs_ a std::atomic<int> to ensure thread-safe operations without needing a mutex for this counter. This would also simplify the StubWrapper class by removing its mutex.
class StubWrapper {
public:
explicit StubWrapper(std::shared_ptr<T> stub)
: stub_(std::move(stub)) {}
int outstanding_rpcs() const {
return outstanding_rpcs_.load(std::memory_order_relaxed);
}
std::shared_ptr<T> AcquireStub() {
++outstanding_rpcs_;
return stub_;
}
void ReleaseStub() {
--outstanding_rpcs_;
}
private:
std::shared_ptr<T> stub_;
std::atomic<int> outstanding_rpcs_{0};
};
google/cloud/internal/channel_pool.h
Outdated
void CheckPoolChannelHealth(std::unique_lock<std::mutex> const&) {
  auto average_rpc_per_channel =
      std::accumulate(channels_.begin(), channels_.end(),
                      [](std::shared_ptr<internal::StubWrapper<T>> const& s) {
                        return s->outstanding_rpcs();
                      }) /
      channels_.size();
  if (average_rpc_per_channel <
      sizing_policy_.minimum_average_outstanding_rpcs_per_channel) {
    // TODO(sdhart): Is there a downside to always removing the most recently
    // created channel?
    decommissioned_channels_.push_back(std::move(channels_.back()));
    channels_.pop_back();
    ScheduleRemoveChannel();
  }
  if (average_rpc_per_channel >
      sizing_policy_.maximum_average_outstanding_rpcs_per_channel) {
    // Channel/stub creation is expensive, instead of making the current RPC
    // wait on this, use an existing channel right now, and schedule a channel
    // to be added.
    ScheduleAddChannel();
  }
}
This function has several critical issues:
- The call to std::accumulate is incorrect: it is missing the initial value and uses a unary operation instead of the required binary operation, so it will not compile.
- There is a potential division-by-zero if channels_ is empty. The code should guard against this.
- The call to s->outstanding_rpcs() is not thread-safe, as pointed out in another comment, and can lead to data races.

I've provided a suggestion that fixes the std::accumulate usage and the division-by-zero. It assumes the outstanding_rpcs() method is made thread-safe.
void CheckPoolChannelHealth(std::unique_lock<std::mutex> const&) {
if (channels_.empty()) return;
auto const total_rpcs =
std::accumulate(channels_.begin(), channels_.end(), std::size_t{0},
[](std::size_t current,
std::shared_ptr<internal::StubWrapper<T>> const& s) {
return current + s->outstanding_rpcs();
});
auto const average_rpc_per_channel = total_rpcs / channels_.size();
if (average_rpc_per_channel <
sizing_policy_.minimum_average_outstanding_rpcs_per_channel) {
// TODO(sdhart): Is there a downside to always removing the most recently
// created channel?
decommissioned_channels_.push_back(std::move(channels_.back()));
channels_.pop_back();
ScheduleRemoveChannel();
}
if (average_rpc_per_channel >
sizing_policy_.maximum_average_outstanding_rpcs_per_channel) {
// Channel/stub creation is expensive, instead of making the current RPC
// wait on this, use an existing channel right now, and schedule a channel
// to be added.
ScheduleAddChannel();
}
}
google/cloud/internal/channel_pool.h
Outdated
std::shared_ptr<StubWrapper<T>> GetChannelRandomTwoLeastUsed() {
  std::cout << __PRETTY_FUNCTION__ << std::endl;
  std::unique_lock<std::mutex> lk(mu_);

  std::cout << __PRETTY_FUNCTION__ << ": channels_size()=" << channels_.size()
            << std::endl;
  // TODO: check if resize is needed.

  std::vector<std::size_t> indices(channels_.size());
  // TODO(sdhart): Maybe use iota on iterators instead of indices
  std::iota(indices.begin(), indices.end(), 0);
  std::shuffle(indices.begin(), indices.end(), rng_);

  std::shared_ptr<StubWrapper<T>> channel_1 = channels_[indices[0]];
  std::shared_ptr<StubWrapper<T>> channel_2 = channels_[indices[1]];

  return channel_1->outstanding_rpcs(lk) < channel_2->outstanding_rpcs(lk)
             ? channel_1
             : channel_2;
}
This function accesses channels_[indices[1]] without verifying that the pool contains at least two channels. If channels_.size() is less than 2, this will lead to an out-of-bounds access and a program crash. Please add checks to handle cases where the pool size is 0 or 1.
std::shared_ptr<StubWrapper<T>> GetChannelRandomTwoLeastUsed() {
std::cout << __PRETTY_FUNCTION__ << std::endl;
std::unique_lock<std::mutex> lk(mu_);
std::cout << __PRETTY_FUNCTION__ << ": channels_size()=" << channels_.size()
<< std::endl;
// TODO: check if resize is needed.
if (channels_.empty()) return nullptr;
if (channels_.size() == 1) return channels_[0];
std::vector<std::size_t> indices(channels_.size());
// TODO(sdhart): Maybe use iota on iterators instead of indices
std::iota(indices.begin(), indices.end(), 0);
std::shuffle(indices.begin(), indices.end(), rng_);
std::shared_ptr<StubWrapper<T>> channel_1 = channels_[indices[0]];
std::shared_ptr<StubWrapper<T>> channel_2 = channels_[indices[1]];
return channel_1->outstanding_rpcs(lk) < channel_2->outstanding_rpcs(lk)
? channel_1
: channel_2;
}
void TableIntegrationTest::SetUp() {
  std::cout << __PRETTY_FUNCTION__ << std::endl;
DefaultBigtableStub::ReadRows(
    std::shared_ptr<grpc::ClientContext> context, Options const&,
    google::bigtable::v2::ReadRowsRequest const& request) {
  std::cout << __PRETTY_FUNCTION__ << std::endl;
 private:
  std::shared_ptr<internal::StubWrapper<BigtableStub>> Child();

  // std::mutex mu_;
google/cloud/internal/channel_pool.h
Outdated
// std::shared_ptr<StubWrapper<T>> GetChannel(
//     std::unique_lock<std::mutex> const&) {
//   // TODO: check for empty
//   return channels_[0];
// }
//
// std::shared_ptr<StubWrapper<T>> GetChannel(
//     std::unique_lock<std::mutex> const&, std::size_t index) {
//   // TODO: bounds check
//   return channels_[index];
// }
// std::unique_lock<std::mutex> lk(mu_);
// std::vector<std::size_t> indices(pool_->size(lk) - 1);
// // TODO(sdhart): Maybe use iota on iterators instead of indices
// std::iota(indices.begin(), indices.end(), 0);
// std::shuffle(indices.begin(), indices.end(), rng_);
// auto channel_1 = pool_->GetChannel(lk, indices[0]);
// auto channel_2 = pool_->GetChannel(lk, indices[1]);
//
// return channel_1->outstanding_rpcs(lk) < channel_2->outstanding_rpcs(lk)
//            ? channel_1
//            : channel_2;
google/cloud/internal/channel_pool.h
Outdated
std::sort(decommissioned_channels_.begin(), decommissioned_channels_.end(),
          [](std::shared_ptr<StubWrapper<T>> const& a,
             std::shared_ptr<StubWrapper<T>> b) {
            return a->outstanding_rpcs() > b->outstanding_rpcs();
          });
The lambda for std::sort takes its second argument b by value, which causes an unnecessary copy of a std::shared_ptr. It should be taken by const& to avoid this overhead.
[](std::shared_ptr<StubWrapper<T>> const& a,
std::shared_ptr<StubWrapper<T>> const& b) {
return a->outstanding_rpcs() > b->outstanding_rpcs();
          });

std::unique_ptr<google::cloud::internal::StreamingReadRpc<
    google::bigtable::v2::ReadRowsResponse>>
BigtableRandomTwoLeastUsed::ReadRows(
    std::shared_ptr<grpc::ClientContext> context, Options const& options,
    google::bigtable::v2::ReadRowsRequest const& request) {
  std::cout << __PRETTY_FUNCTION__ << std::endl;
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->ReadRows(std::move(context), options, request);
  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };
  return std::make_unique<
      StreamingReadRpcTracking<google::bigtable::v2::ReadRowsResponse>>(
      std::move(result), std::move(release_fn));
}

std::unique_ptr<google::cloud::internal::StreamingReadRpc<
    google::bigtable::v2::SampleRowKeysResponse>>
BigtableRandomTwoLeastUsed::SampleRowKeys(
    std::shared_ptr<grpc::ClientContext> context, Options const& options,
    google::bigtable::v2::SampleRowKeysRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->SampleRowKeys(std::move(context), options, request);
  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };
  return std::make_unique<
      StreamingReadRpcTracking<google::bigtable::v2::SampleRowKeysResponse>>(
      std::move(result), std::move(release_fn));
}

StatusOr<google::bigtable::v2::MutateRowResponse>
BigtableRandomTwoLeastUsed::MutateRow(
    grpc::ClientContext& context, Options const& options,
    google::bigtable::v2::MutateRowRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->MutateRow(context, options, request);
  child->ReleaseStub();
  return result;
}

std::unique_ptr<google::cloud::internal::StreamingReadRpc<
    google::bigtable::v2::MutateRowsResponse>>
BigtableRandomTwoLeastUsed::MutateRows(
    std::shared_ptr<grpc::ClientContext> context, Options const& options,
    google::bigtable::v2::MutateRowsRequest const& request) {
  std::cout << __PRETTY_FUNCTION__ << std::endl;
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->MutateRows(std::move(context), options, request);
  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };
  return std::make_unique<
      StreamingReadRpcTracking<google::bigtable::v2::MutateRowsResponse>>(
      std::move(result), std::move(release_fn));
}

StatusOr<google::bigtable::v2::CheckAndMutateRowResponse>
BigtableRandomTwoLeastUsed::CheckAndMutateRow(
    grpc::ClientContext& context, Options const& options,
    google::bigtable::v2::CheckAndMutateRowRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->CheckAndMutateRow(context, options, request);
  child->ReleaseStub();
  return result;
}

StatusOr<google::bigtable::v2::PingAndWarmResponse>
BigtableRandomTwoLeastUsed::PingAndWarm(
    grpc::ClientContext& context, Options const& options,
    google::bigtable::v2::PingAndWarmRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->PingAndWarm(context, options, request);
  child->ReleaseStub();
  return result;
}

StatusOr<google::bigtable::v2::ReadModifyWriteRowResponse>
BigtableRandomTwoLeastUsed::ReadModifyWriteRow(
    grpc::ClientContext& context, Options const& options,
    google::bigtable::v2::ReadModifyWriteRowRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->ReadModifyWriteRow(context, options, request);
  child->ReleaseStub();
  return result;
}

StatusOr<google::bigtable::v2::PrepareQueryResponse>
BigtableRandomTwoLeastUsed::PrepareQuery(
    grpc::ClientContext& context, Options const& options,
    google::bigtable::v2::PrepareQueryRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->PrepareQuery(context, options, request);
  child->ReleaseStub();
  return result;
}

std::unique_ptr<google::cloud::internal::StreamingReadRpc<
    google::bigtable::v2::ExecuteQueryResponse>>
BigtableRandomTwoLeastUsed::ExecuteQuery(
    std::shared_ptr<grpc::ClientContext> context, Options const& options,
    google::bigtable::v2::ExecuteQueryRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->ExecuteQuery(std::move(context), options, request);
  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };
  return std::make_unique<
      StreamingReadRpcTracking<google::bigtable::v2::ExecuteQueryResponse>>(
      std::move(result), std::move(release_fn));
}

std::unique_ptr<google::cloud::internal::AsyncStreamingReadRpc<
    google::bigtable::v2::ReadRowsResponse>>
BigtableRandomTwoLeastUsed::AsyncReadRows(
    google::cloud::CompletionQueue const& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::ReadRowsRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result =
      stub->AsyncReadRows(cq, std::move(context), std::move(options), request);
  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };
  return std::make_unique<
      AsyncStreamingReadRpcTracking<google::bigtable::v2::ReadRowsResponse>>(
      std::move(result), std::move(release_fn));
}

std::unique_ptr<google::cloud::internal::AsyncStreamingReadRpc<
    google::bigtable::v2::SampleRowKeysResponse>>
BigtableRandomTwoLeastUsed::AsyncSampleRowKeys(
    google::cloud::CompletionQueue const& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::SampleRowKeysRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->AsyncSampleRowKeys(cq, std::move(context),
                                         std::move(options), request);
  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };
  return std::make_unique<AsyncStreamingReadRpcTracking<
      google::bigtable::v2::SampleRowKeysResponse>>(std::move(result),
                                                    std::move(release_fn));
}

future<StatusOr<google::bigtable::v2::MutateRowResponse>>
BigtableRandomTwoLeastUsed::AsyncMutateRow(
    google::cloud::CompletionQueue& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::MutateRowRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result =
      stub->AsyncMutateRow(cq, std::move(context), std::move(options), request);
  child->ReleaseStub();
  return result;
}

std::unique_ptr<google::cloud::internal::AsyncStreamingReadRpc<
    google::bigtable::v2::MutateRowsResponse>>
BigtableRandomTwoLeastUsed::AsyncMutateRows(
    google::cloud::CompletionQueue const& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::MutateRowsRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->AsyncMutateRows(cq, std::move(context),
                                      std::move(options), request);

  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };

  return std::make_unique<
      AsyncStreamingReadRpcTracking<google::bigtable::v2::MutateRowsResponse>>(
      std::move(result), std::move(release_fn));
}

future<StatusOr<google::bigtable::v2::CheckAndMutateRowResponse>>
BigtableRandomTwoLeastUsed::AsyncCheckAndMutateRow(
    google::cloud::CompletionQueue& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::CheckAndMutateRowRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->AsyncCheckAndMutateRow(cq, std::move(context),
                                             std::move(options), request);
  child->ReleaseStub();
  return result;
}

future<StatusOr<google::bigtable::v2::ReadModifyWriteRowResponse>>
BigtableRandomTwoLeastUsed::AsyncReadModifyWriteRow(
    google::cloud::CompletionQueue& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::ReadModifyWriteRowRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->AsyncReadModifyWriteRow(cq, std::move(context),
                                              std::move(options), request);
  child->ReleaseStub();
  return result;
}

future<StatusOr<google::bigtable::v2::PrepareQueryResponse>>
BigtableRandomTwoLeastUsed::AsyncPrepareQuery(
    google::cloud::CompletionQueue& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::PrepareQueryRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->AsyncPrepareQuery(cq, std::move(context),
                                        std::move(options), request);
  child->ReleaseStub();
  return result;
}
This file contains a significant amount of duplicated code for handling both unary and streaming RPCs. For instance, the logic to acquire a stub, execute a call, and release the stub is repeated for all unary calls, and a similar pattern exists for streaming calls. This violates the "Don't Repeat Yourself" (DRY) principle and makes the code harder to maintain. The repository style guide also discourages this level of duplication.
Consider refactoring this using helper functions or templates to centralize the logic. For example, you could have a helper for unary calls and another for streaming calls.
References
- The repository style guide prefers to factor out duplicated code if it appears 3 or more times in non-test files. (link)
Codecov Report

❌ Patch coverage is

Additional details and impacted files

@@ Coverage Diff @@
## main   #15819   +/- ##
==========================================
- Coverage   92.95%   92.91%   -0.05%
==========================================
  Files        2458     2460       +2
  Lines      227589   227977     +388
==========================================
+ Hits       211547   211814     +267
- Misses      16042    16163     +121

View full report in Codecov by Sentry.
Force-pushed from 56534d1 to 97d4f82.
No description provided.