Bigtable dynamic channel pool prototype #15819
base: main
Conversation
Summary of Changes

Hello @scotthart, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request prototypes a dynamic channel pooling system for Bigtable clients, designed to enhance load balancing and resource management. It introduces a "random two least used" strategy for selecting gRPC channels, aiming to distribute RPCs more efficiently across available connections. The changes involve new internal components for channel management, modifications to the stub factory to integrate this new strategy, and extensive debugging output to monitor its behavior.
Code Review
This pull request introduces a prototype for a dynamic channel pool for Bigtable, using a "power of two choices" load balancing strategy. This is a valuable addition. The overall structure is sound, but I've identified several critical issues related to thread safety and potential crashes that need to be addressed. Additionally, there are numerous debugging statements that should be removed, and I've suggested refactoring to reduce code duplication, in line with the repository's style guide.
google/cloud/internal/channel_pool.h Outdated
```cpp
class StubWrapper {
 public:
  explicit StubWrapper(std::shared_ptr<T> stub)
      : stub_(std::move(stub)), outstanding_rpcs_(0) {}

  int outstanding_rpcs(std::unique_lock<std::mutex> const&) const {
    return outstanding_rpcs_;
  }

  std::shared_ptr<T> AcquireStub() {
    std::unique_lock<std::mutex> lk(mu_);
    ++outstanding_rpcs_;
    return stub_;
  }

  void ReleaseStub() {
    std::unique_lock<std::mutex> lk(mu_);
    --outstanding_rpcs_;
  }

 private:
  mutable std::mutex mu_;
  std::shared_ptr<T> stub_;
  int outstanding_rpcs_;
};
```
The `outstanding_rpcs_` member is accessed in a non-thread-safe manner. It is read in `outstanding_rpcs()` without a lock, while being modified in `AcquireStub()` and `ReleaseStub()` under a lock. This creates a data race.

I recommend making `outstanding_rpcs_` a `std::atomic<int>` to ensure thread-safe operations without needing a mutex for this counter. This would also simplify the `StubWrapper` class by removing its mutex.
```cpp
class StubWrapper {
 public:
  explicit StubWrapper(std::shared_ptr<T> stub) : stub_(std::move(stub)) {}

  int outstanding_rpcs() const {
    return outstanding_rpcs_.load(std::memory_order_relaxed);
  }

  std::shared_ptr<T> AcquireStub() {
    ++outstanding_rpcs_;
    return stub_;
  }

  void ReleaseStub() { --outstanding_rpcs_; }

 private:
  std::shared_ptr<T> stub_;
  std::atomic<int> outstanding_rpcs_{0};
};
```
google/cloud/internal/channel_pool.h Outdated
```cpp
void CheckPoolChannelHealth(std::unique_lock<std::mutex> const&) {
  auto average_rpc_per_channel =
      std::accumulate(channels_.begin(), channels_.end(),
                      [](std::shared_ptr<internal::StubWrapper<T>> const& s) {
                        return s->outstanding_rpcs();
                      }) /
      channels_.size();
  if (average_rpc_per_channel <
      sizing_policy_.minimum_average_outstanding_rpcs_per_channel) {
    // TODO(sdhart): Is there a downside to always removing the most recently
    // created channel?
    decommissioned_channels_.push_back(std::move(channels_.back()));
    channels_.pop_back();
    ScheduleRemoveChannel();
  }
  if (average_rpc_per_channel >
      sizing_policy_.maximum_average_outstanding_rpcs_per_channel) {
    // Channel/stub creation is expensive, instead of making the current RPC
    // wait on this, use an existing channel right now, and schedule a channel
    // to be added.
    ScheduleAddChannel();
  }
}
```
This function has several critical issues:

- The call to `std::accumulate` is incorrect. It's missing the initial value and uses a unary operation instead of the required binary operation. This will not compile.
- There is a potential division-by-zero if `channels_` is empty. The code should guard against this.
- The call to `s->outstanding_rpcs()` is not thread-safe, as pointed out in another comment. This can lead to data races.
I've provided a suggestion that fixes the `std::accumulate` usage and the division-by-zero. This assumes the `outstanding_rpcs()` method is made thread-safe.
```cpp
void CheckPoolChannelHealth(std::unique_lock<std::mutex> const&) {
  if (channels_.empty()) return;
  auto const total_rpcs = std::accumulate(
      channels_.begin(), channels_.end(), std::size_t{0},
      [](std::size_t current,
         std::shared_ptr<internal::StubWrapper<T>> const& s) {
        return current + s->outstanding_rpcs();
      });
  auto const average_rpc_per_channel = total_rpcs / channels_.size();
  if (average_rpc_per_channel <
      sizing_policy_.minimum_average_outstanding_rpcs_per_channel) {
    // TODO(sdhart): Is there a downside to always removing the most recently
    // created channel?
    decommissioned_channels_.push_back(std::move(channels_.back()));
    channels_.pop_back();
    ScheduleRemoveChannel();
  }
  if (average_rpc_per_channel >
      sizing_policy_.maximum_average_outstanding_rpcs_per_channel) {
    // Channel/stub creation is expensive, instead of making the current RPC
    // wait on this, use an existing channel right now, and schedule a channel
    // to be added.
    ScheduleAddChannel();
  }
}
```
google/cloud/internal/channel_pool.h Outdated
```cpp
std::shared_ptr<StubWrapper<T>> GetChannelRandomTwoLeastUsed() {
  std::cout << __PRETTY_FUNCTION__ << std::endl;
  std::unique_lock<std::mutex> lk(mu_);
  std::cout << __PRETTY_FUNCTION__ << ": channels_size()=" << channels_.size()
            << std::endl;
  // TODO: check if resize is needed.
  std::vector<std::size_t> indices(channels_.size());
  // TODO(sdhart): Maybe use iota on iterators instead of indices
  std::iota(indices.begin(), indices.end(), 0);
  std::shuffle(indices.begin(), indices.end(), rng_);
  std::shared_ptr<StubWrapper<T>> channel_1 = channels_[indices[0]];
  std::shared_ptr<StubWrapper<T>> channel_2 = channels_[indices[1]];
  return channel_1->outstanding_rpcs(lk) < channel_2->outstanding_rpcs(lk)
             ? channel_1
             : channel_2;
}
```
This function accesses `channels_[indices[1]]` without verifying that the pool contains at least two channels. If `channels_.size()` is less than 2, this will lead to an out-of-bounds access and a program crash. Please add checks to handle cases where the pool size is 0 or 1.
```cpp
std::shared_ptr<StubWrapper<T>> GetChannelRandomTwoLeastUsed() {
  std::cout << __PRETTY_FUNCTION__ << std::endl;
  std::unique_lock<std::mutex> lk(mu_);
  std::cout << __PRETTY_FUNCTION__ << ": channels_size()=" << channels_.size()
            << std::endl;
  // TODO: check if resize is needed.
  if (channels_.empty()) return nullptr;
  if (channels_.size() == 1) return channels_[0];
  std::vector<std::size_t> indices(channels_.size());
  // TODO(sdhart): Maybe use iota on iterators instead of indices
  std::iota(indices.begin(), indices.end(), 0);
  std::shuffle(indices.begin(), indices.end(), rng_);
  std::shared_ptr<StubWrapper<T>> channel_1 = channels_[indices[0]];
  std::shared_ptr<StubWrapper<T>> channel_2 = channels_[indices[1]];
  return channel_1->outstanding_rpcs(lk) < channel_2->outstanding_rpcs(lk)
             ? channel_1
             : channel_2;
}
```
```cpp
}
void TableIntegrationTest::SetUp() {
  std::cout << __PRETTY_FUNCTION__ << std::endl;
```
```cpp
DefaultBigtableStub::ReadRows(
    std::shared_ptr<grpc::ClientContext> context, Options const&,
    google::bigtable::v2::ReadRowsRequest const& request) {
  std::cout << __PRETTY_FUNCTION__ << std::endl;
```
```cpp
 private:
  std::shared_ptr<internal::StubWrapper<BigtableStub>> Child();
  // std::mutex mu_;
```
google/cloud/internal/channel_pool.h Outdated
```cpp
// std::shared_ptr<StubWrapper<T>> GetChannel(
//     std::unique_lock<std::mutex> const&) {
//   // TODO: check for empty
//   return channels_[0];
// }
//
// std::shared_ptr<StubWrapper<T>> GetChannel(
//     std::unique_lock<std::mutex> const&, std::size_t index) {
//   // TODO: bounds check
//   return channels_[index];
// }
```
```cpp
// std::unique_lock<std::mutex> lk(mu_);
// std::vector<std::size_t> indices(pool_->size(lk) - 1);
// // TODO(sdhart): Maybe use iota on iterators instead of indices
// std::iota(indices.begin(), indices.end(), 0);
// std::shuffle(indices.begin(), indices.end(), rng_);
// auto channel_1 = pool_->GetChannel(lk, indices[0]);
// auto channel_2 = pool_->GetChannel(lk, indices[1]);
//
// return channel_1->outstanding_rpcs(lk) < channel_2->outstanding_rpcs(lk)
//            ? channel_1
//            : channel_2;
```
google/cloud/internal/channel_pool.h Outdated
```cpp
std::sort(decommissioned_channels_.begin(), decommissioned_channels_.end(),
          [](std::shared_ptr<StubWrapper<T>> const& a,
             std::shared_ptr<StubWrapper<T>> b) {
            return a->outstanding_rpcs() > b->outstanding_rpcs();
          });
```
The lambda for `std::sort` takes its second argument `b` by value, which causes an unnecessary copy of a `std::shared_ptr`. It should be taken by `const&` to avoid this overhead.
```cpp
[](std::shared_ptr<StubWrapper<T>> const& a,
   std::shared_ptr<StubWrapper<T>> const& b) {
  return a->outstanding_rpcs() > b->outstanding_rpcs();
});
```
```cpp
std::unique_ptr<google::cloud::internal::StreamingReadRpc<
    google::bigtable::v2::ReadRowsResponse>>
BigtableRandomTwoLeastUsed::ReadRows(
    std::shared_ptr<grpc::ClientContext> context, Options const& options,
    google::bigtable::v2::ReadRowsRequest const& request) {
  std::cout << __PRETTY_FUNCTION__ << std::endl;
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->ReadRows(std::move(context), options, request);
  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };
  return std::make_unique<
      StreamingReadRpcTracking<google::bigtable::v2::ReadRowsResponse>>(
      std::move(result), std::move(release_fn));
}

std::unique_ptr<google::cloud::internal::StreamingReadRpc<
    google::bigtable::v2::SampleRowKeysResponse>>
BigtableRandomTwoLeastUsed::SampleRowKeys(
    std::shared_ptr<grpc::ClientContext> context, Options const& options,
    google::bigtable::v2::SampleRowKeysRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->SampleRowKeys(std::move(context), options, request);
  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };
  return std::make_unique<
      StreamingReadRpcTracking<google::bigtable::v2::SampleRowKeysResponse>>(
      std::move(result), std::move(release_fn));
}

StatusOr<google::bigtable::v2::MutateRowResponse>
BigtableRandomTwoLeastUsed::MutateRow(
    grpc::ClientContext& context, Options const& options,
    google::bigtable::v2::MutateRowRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->MutateRow(context, options, request);
  child->ReleaseStub();
  return result;
}

std::unique_ptr<google::cloud::internal::StreamingReadRpc<
    google::bigtable::v2::MutateRowsResponse>>
BigtableRandomTwoLeastUsed::MutateRows(
    std::shared_ptr<grpc::ClientContext> context, Options const& options,
    google::bigtable::v2::MutateRowsRequest const& request) {
  std::cout << __PRETTY_FUNCTION__ << std::endl;
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->MutateRows(std::move(context), options, request);
  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };
  return std::make_unique<
      StreamingReadRpcTracking<google::bigtable::v2::MutateRowsResponse>>(
      std::move(result), std::move(release_fn));
}

StatusOr<google::bigtable::v2::CheckAndMutateRowResponse>
BigtableRandomTwoLeastUsed::CheckAndMutateRow(
    grpc::ClientContext& context, Options const& options,
    google::bigtable::v2::CheckAndMutateRowRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->CheckAndMutateRow(context, options, request);
  child->ReleaseStub();
  return result;
}

StatusOr<google::bigtable::v2::PingAndWarmResponse>
BigtableRandomTwoLeastUsed::PingAndWarm(
    grpc::ClientContext& context, Options const& options,
    google::bigtable::v2::PingAndWarmRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->PingAndWarm(context, options, request);
  child->ReleaseStub();
  return result;
}

StatusOr<google::bigtable::v2::ReadModifyWriteRowResponse>
BigtableRandomTwoLeastUsed::ReadModifyWriteRow(
    grpc::ClientContext& context, Options const& options,
    google::bigtable::v2::ReadModifyWriteRowRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->ReadModifyWriteRow(context, options, request);
  child->ReleaseStub();
  return result;
}

StatusOr<google::bigtable::v2::PrepareQueryResponse>
BigtableRandomTwoLeastUsed::PrepareQuery(
    grpc::ClientContext& context, Options const& options,
    google::bigtable::v2::PrepareQueryRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->PrepareQuery(context, options, request);
  child->ReleaseStub();
  return result;
}

std::unique_ptr<google::cloud::internal::StreamingReadRpc<
    google::bigtable::v2::ExecuteQueryResponse>>
BigtableRandomTwoLeastUsed::ExecuteQuery(
    std::shared_ptr<grpc::ClientContext> context, Options const& options,
    google::bigtable::v2::ExecuteQueryRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->ExecuteQuery(std::move(context), options, request);
  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };
  return std::make_unique<
      StreamingReadRpcTracking<google::bigtable::v2::ExecuteQueryResponse>>(
      std::move(result), std::move(release_fn));
}

std::unique_ptr<google::cloud::internal::AsyncStreamingReadRpc<
    google::bigtable::v2::ReadRowsResponse>>
BigtableRandomTwoLeastUsed::AsyncReadRows(
    google::cloud::CompletionQueue const& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::ReadRowsRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result =
      stub->AsyncReadRows(cq, std::move(context), std::move(options), request);
  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };
  return std::make_unique<
      AsyncStreamingReadRpcTracking<google::bigtable::v2::ReadRowsResponse>>(
      std::move(result), std::move(release_fn));
}

std::unique_ptr<google::cloud::internal::AsyncStreamingReadRpc<
    google::bigtable::v2::SampleRowKeysResponse>>
BigtableRandomTwoLeastUsed::AsyncSampleRowKeys(
    google::cloud::CompletionQueue const& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::SampleRowKeysRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->AsyncSampleRowKeys(cq, std::move(context),
                                         std::move(options), request);
  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };
  return std::make_unique<AsyncStreamingReadRpcTracking<
      google::bigtable::v2::SampleRowKeysResponse>>(std::move(result),
                                                    std::move(release_fn));
}

future<StatusOr<google::bigtable::v2::MutateRowResponse>>
BigtableRandomTwoLeastUsed::AsyncMutateRow(
    google::cloud::CompletionQueue& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::MutateRowRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result =
      stub->AsyncMutateRow(cq, std::move(context), std::move(options), request);
  child->ReleaseStub();
  return result;
}

std::unique_ptr<google::cloud::internal::AsyncStreamingReadRpc<
    google::bigtable::v2::MutateRowsResponse>>
BigtableRandomTwoLeastUsed::AsyncMutateRows(
    google::cloud::CompletionQueue const& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::MutateRowsRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->AsyncMutateRows(cq, std::move(context),
                                      std::move(options), request);
  std::weak_ptr<internal::StubWrapper<BigtableStub>> weak = child;
  auto release_fn = [weak = std::move(weak)]() {
    auto child = weak.lock();
    if (child) child->ReleaseStub();
  };
  return std::make_unique<
      AsyncStreamingReadRpcTracking<google::bigtable::v2::MutateRowsResponse>>(
      std::move(result), std::move(release_fn));
}

future<StatusOr<google::bigtable::v2::CheckAndMutateRowResponse>>
BigtableRandomTwoLeastUsed::AsyncCheckAndMutateRow(
    google::cloud::CompletionQueue& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::CheckAndMutateRowRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->AsyncCheckAndMutateRow(cq, std::move(context),
                                             std::move(options), request);
  child->ReleaseStub();
  return result;
}

future<StatusOr<google::bigtable::v2::ReadModifyWriteRowResponse>>
BigtableRandomTwoLeastUsed::AsyncReadModifyWriteRow(
    google::cloud::CompletionQueue& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::ReadModifyWriteRowRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->AsyncReadModifyWriteRow(cq, std::move(context),
                                              std::move(options), request);
  child->ReleaseStub();
  return result;
}

future<StatusOr<google::bigtable::v2::PrepareQueryResponse>>
BigtableRandomTwoLeastUsed::AsyncPrepareQuery(
    google::cloud::CompletionQueue& cq,
    std::shared_ptr<grpc::ClientContext> context,
    google::cloud::internal::ImmutableOptions options,
    google::bigtable::v2::PrepareQueryRequest const& request) {
  auto child = Child();
  auto stub = child->AcquireStub();
  auto result = stub->AsyncPrepareQuery(cq, std::move(context),
                                        std::move(options), request);
  child->ReleaseStub();
  return result;
}
```
This file contains a significant amount of duplicated code for handling both unary and streaming RPCs. For instance, the logic to acquire a stub, execute a call, and release the stub is repeated for all unary calls, and a similar pattern exists for streaming calls. This violates the "Don't Repeat Yourself" (DRY) principle and makes the code harder to maintain. The repository style guide also discourages this level of duplication.
Consider refactoring this using helper functions or templates to centralize the logic. For example, you could have a helper for unary calls and another for streaming calls.
References

- The repository style guide prefers to factor out duplicated code if it appears 3 or more times in non-test files. (link)
codecov bot commented Dec 7, 2025 (edited)
Codecov Report

❌ Patch coverage is

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main   #15819      +/-   ##
==========================================
- Coverage   92.95%   92.91%   -0.05%
==========================================
  Files        2458     2460       +2
  Lines      227589   227977     +388
==========================================
+ Hits       211547   211814     +267
- Misses      16042    16163     +121
```

☔ View full report in Codecov by Sentry.
Force-pushed from 56534d1 to 97d4f82 (compare)
No description provided.