pub struct BigtableClient<T> { /* private fields */ }
Service for reading from and writing to existing Bigtable tables.
Implementations
impl<T> BigtableClient<T>
where
    T: GrpcService<BoxBody>,
    T::Error: Into<StdError>,
    T::ResponseBody: Body<Data = Bytes> + Send + 'static,
    <T::ResponseBody as Body>::Error: Into<StdError> + Send,
pub fn new(inner: T) -> Self
pub fn with_origin(inner: T, origin: Uri) -> Self
pub fn with_interceptor<F>(
    inner: T,
    interceptor: F,
) -> BigtableClient<InterceptedService<T, F>>
where
    F: Interceptor,
    T::ResponseBody: Default,
    T: Service<Request<BoxBody>, Response = Response<<T as GrpcService<BoxBody>>::ResponseBody>>,
    <T as Service<Request<BoxBody>>>::Error: Into<StdError> + Send + Sync,
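A minimal construction sketch, assuming the generated google.bigtable.v2 module is in scope and that TLS and OAuth2 credentials (required by the real bigtable.googleapis.com endpoint) are handled elsewhere; the interceptor only marks where such a token would be attached.

use tonic::metadata::{Ascii, MetadataValue};
use tonic::transport::Channel;
use tonic::{Request, Status};

// Hypothetical interceptor: attaches an Authorization header to every RPC.
fn attach_auth(mut req: Request<()>) -> Result<Request<()>, Status> {
    // Placeholder token; a real one comes from an OAuth2 flow.
    let token: MetadataValue<Ascii> = "Bearer <access-token>".parse().unwrap();
    req.metadata_mut().insert("authorization", token);
    Ok(req)
}

async fn connect() -> Result<(), Box<dyn std::error::Error>> {
    let channel = Channel::from_static("https://bigtable.googleapis.com")
        .connect()
        .await?;

    // A plain client over the channel...
    let _plain = BigtableClient::new(channel.clone());
    // ...or one that routes every request through the interceptor.
    let _with_auth = BigtableClient::with_interceptor(channel, attach_auth);
    Ok(())
}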
pub fn send_compressed(self, encoding: CompressionEncoding) -> Self
Compress requests with the given encoding.
This requires the server to support the encoding; otherwise it may respond with an error.
pub fn accept_compressed(self, encoding: CompressionEncoding) -> Self
Enable decompressing responses.
pub fn max_decoding_message_size(self, limit: usize) -> Self
Limits the maximum size of a decoded message.
Default: 4MB
pub fn max_encoding_message_size(self, limit: usize) -> Self
Limits the maximum size of an encoded message.
Default: usize::MAX
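These configuration methods are builder-style and can be chained after construction. A brief sketch; the values are illustrative, and CompressionEncoding::Gzip assumes tonic's gzip feature is enabled.

use tonic::codec::CompressionEncoding;
use tonic::transport::Channel;

fn configured_client(channel: Channel) -> BigtableClient<Channel> {
    BigtableClient::new(channel)
        .send_compressed(CompressionEncoding::Gzip)
        .accept_compressed(CompressionEncoding::Gzip)
        .max_decoding_message_size(16 * 1024 * 1024) // raise the 4MB default
        .max_encoding_message_size(16 * 1024 * 1024)
}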
pub async fn read_rows(
    &mut self,
    request: impl IntoRequest<ReadRowsRequest>,
) -> Result<Response<Streaming<ReadRowsResponse>>, Status>
Streams back the contents of all requested rows in key order, optionally applying the same Reader filter to each. Depending on their size, rows and cells may be broken up across multiple responses, but atomicity of each row will still be preserved. See the ReadRowsResponse documentation for details.
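A minimal read sketch, assuming the generated google.bigtable.v2 request/response types are imported (their module path depends on how the protos were compiled); field names follow that proto, and bytes fields are shown as Vec<u8>, which may differ under other codegen settings.

async fn read_some_rows(
    client: &mut BigtableClient<tonic::transport::Channel>,
) -> Result<(), tonic::Status> {
    let request = ReadRowsRequest {
        table_name: "projects/my-project/instances/my-instance/tables/my-table".to_string(),
        rows_limit: 100,
        ..Default::default()
    };

    let mut stream = client.read_rows(request).await?.into_inner();
    while let Some(response) = stream.message().await? {
        // Each ReadRowsResponse carries cell chunks that the caller reassembles
        // into rows; here we only print the row keys that appear in the chunks.
        for chunk in response.chunks {
            println!("row key: {:?}", chunk.row_key);
        }
    }
    Ok(())
}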
pub async fn sample_row_keys(
    &mut self,
    request: impl IntoRequest<SampleRowKeysRequest>,
) -> Result<Response<Streaming<SampleRowKeysResponse>>, Status>
Returns a sample of row keys in the table. The returned row keys will delimit contiguous sections of the table of approximately equal size, which can be used to break up the data for distributed tasks like mapreduces.
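A sketch of using the samples to plan balanced scan ranges, under the same assumptions about the generated types as above.

async fn plan_splits(
    client: &mut BigtableClient<tonic::transport::Channel>,
) -> Result<(), tonic::Status> {
    let request = SampleRowKeysRequest {
        table_name: "projects/my-project/instances/my-instance/tables/my-table".to_string(),
        ..Default::default()
    };

    let mut stream = client.sample_row_keys(request).await?.into_inner();
    while let Some(sample) = stream.message().await? {
        // offset_bytes approximates how much data precedes this key, so
        // consecutive samples delimit roughly equal-sized shards.
        println!("split at {:?}, offset {}", sample.row_key, sample.offset_bytes);
    }
    Ok(())
}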
pub async fn mutate_row(
    &mut self,
    request: impl IntoRequest<MutateRowRequest>,
) -> Result<Response<MutateRowResponse>, Status>
Mutates a row atomically. Cells already present in the row are left unchanged unless explicitly changed by mutation.
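A single-cell write sketch. The Mutation oneof and SetCell fields follow the google.bigtable.v2 proto (prost places them in the generated mutation module); the names and Vec<u8> bytes types are assumptions about that codegen.

async fn write_greeting(
    client: &mut BigtableClient<tonic::transport::Channel>,
) -> Result<(), tonic::Status> {
    let set_cell = mutation::Mutation::SetCell(mutation::SetCell {
        family_name: "cf1".to_string(),
        column_qualifier: b"greeting".to_vec(),
        timestamp_micros: -1, // -1 lets the server pick its current time
        value: b"hello".to_vec(),
    });

    let request = MutateRowRequest {
        table_name: "projects/my-project/instances/my-instance/tables/my-table".to_string(),
        row_key: b"row-1".to_vec(),
        mutations: vec![Mutation { mutation: Some(set_cell) }],
        ..Default::default()
    };

    client.mutate_row(request).await?;
    Ok(())
}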
pub async fn mutate_rows(
    &mut self,
    request: impl IntoRequest<MutateRowsRequest>,
) -> Result<Response<Streaming<MutateRowsResponse>>, Status>
Mutates multiple rows in a batch. Each individual row is mutated atomically as in MutateRow, but the entire batch is not executed atomically.
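A batch sketch: Entry lives in the generated mutate_rows_request module, and each streamed response reports per-entry statuses (field names assumed from the proto). Passing the entries in lets callers reuse the mutation construction shown above.

async fn write_batch(
    client: &mut BigtableClient<tonic::transport::Channel>,
    entries: Vec<mutate_rows_request::Entry>, // row_key plus mutations per row
) -> Result<(), tonic::Status> {
    let request = MutateRowsRequest {
        table_name: "projects/my-project/instances/my-instance/tables/my-table".to_string(),
        entries,
        ..Default::default()
    };

    let mut stream = client.mutate_rows(request).await?.into_inner();
    while let Some(batch) = stream.message().await? {
        for entry in batch.entries {
            // `index` points back into the request; a non-OK status means that
            // one row's mutations failed even though others may have succeeded.
            println!("entry {}: {:?}", entry.index, entry.status);
        }
    }
    Ok(())
}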
pub async fn check_and_mutate_row(
    &mut self,
    request: impl IntoRequest<CheckAndMutateRowRequest>,
) -> Result<Response<CheckAndMutateRowResponse>, Status>
Mutates a row atomically based on the output of a predicate Reader filter.
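A conditional-write sketch. The predicate here is a pass-all RowFilter purely to keep the example short, and the row_filter module and variant names are assumptions about the prost codegen.

async fn conditional_write(
    client: &mut BigtableClient<tonic::transport::Channel>,
    true_mutations: Vec<Mutation>,
    false_mutations: Vec<Mutation>,
) -> Result<bool, tonic::Status> {
    let request = CheckAndMutateRowRequest {
        table_name: "projects/my-project/instances/my-instance/tables/my-table".to_string(),
        row_key: b"row-1".to_vec(),
        // Matches every cell, so true_mutations run whenever the row is non-empty.
        predicate_filter: Some(RowFilter {
            filter: Some(row_filter::Filter::PassAllFilter(true)),
            ..Default::default()
        }),
        true_mutations,
        false_mutations,
        ..Default::default()
    };

    let response = client.check_and_mutate_row(request).await?.into_inner();
    Ok(response.predicate_matched)
}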
pub async fn ping_and_warm(
    &mut self,
    request: impl IntoRequest<PingAndWarmRequest>,
) -> Result<Response<PingAndWarmResponse>, Status>
Warm up associated instance metadata for this connection. This call is not required but may be useful for connection keep-alive.
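A keep-alive sketch; per the proto, PingAndWarmRequest.name is the full instance name rather than a table name.

async fn warm_connection(
    client: &mut BigtableClient<tonic::transport::Channel>,
) -> Result<(), tonic::Status> {
    let request = PingAndWarmRequest {
        name: "projects/my-project/instances/my-instance".to_string(),
        ..Default::default()
    };
    client.ping_and_warm(request).await?;
    Ok(())
}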
pub async fn read_modify_write_row(
    &mut self,
    request: impl IntoRequest<ReadModifyWriteRowRequest>,
) -> Result<Response<ReadModifyWriteRowResponse>, Status>
Modifies a row atomically on the server. The method reads the latest existing timestamp and value from the specified columns and writes a new entry based on pre-defined read/modify/write rules. The new value for the timestamp is the greater of the existing timestamp or the current server time. The method returns the new contents of all modified cells.
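An append-and-increment sketch; ReadModifyWriteRule's oneof lives in the generated read_modify_write_rule module, and the field names are assumptions from the proto.

async fn append_and_increment(
    client: &mut BigtableClient<tonic::transport::Channel>,
) -> Result<(), tonic::Status> {
    let rules = vec![
        ReadModifyWriteRule {
            family_name: "cf1".to_string(),
            column_qualifier: b"log".to_vec(),
            rule: Some(read_modify_write_rule::Rule::AppendValue(b";ping".to_vec())),
        },
        ReadModifyWriteRule {
            family_name: "cf1".to_string(),
            column_qualifier: b"counter".to_vec(),
            // The target cell must already hold a 64-bit big-endian integer.
            rule: Some(read_modify_write_rule::Rule::IncrementAmount(1)),
        },
    ];

    let request = ReadModifyWriteRowRequest {
        table_name: "projects/my-project/instances/my-instance/tables/my-table".to_string(),
        row_key: b"row-1".to_vec(),
        rules,
        ..Default::default()
    };

    // The response echoes the new contents of every modified cell.
    let response = client.read_modify_write_row(request).await?.into_inner();
    println!("{:?}", response.row);
    Ok(())
}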
pub async fn generate_initial_change_stream_partitions(
    &mut self,
    request: impl IntoRequest<GenerateInitialChangeStreamPartitionsRequest>,
) -> Result<Response<Streaming<GenerateInitialChangeStreamPartitionsResponse>>, Status>
NOTE: This API is intended to be used by Apache Beam BigtableIO.
Returns the current list of partitions that make up the table's change stream. The union of partitions will cover the entire keyspace. Partitions can be read with ReadChangeStream.
pub async fn read_change_stream(
    &mut self,
    request: impl IntoRequest<ReadChangeStreamRequest>,
) -> Result<Response<Streaming<ReadChangeStreamResponse>>, Status>
NOTE: This API is intended to be used by Apache Beam BigtableIO. Reads changes from a table’s change stream. Changes will reflect both user-initiated mutations and mutations that are caused by garbage collection.
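A combined sketch for the two change-stream calls above: list the initial partitions, then open a ReadChangeStream per partition. Real readers also track continuation tokens, heartbeats, and close-stream messages, all skipped here; field names are assumptions from the proto.

async fn tail_change_stream(
    client: &mut BigtableClient<tonic::transport::Channel>,
) -> Result<(), tonic::Status> {
    let table = "projects/my-project/instances/my-instance/tables/my-table".to_string();

    let mut partitions = client
        .generate_initial_change_stream_partitions(GenerateInitialChangeStreamPartitionsRequest {
            table_name: table.clone(),
            ..Default::default()
        })
        .await?
        .into_inner();

    while let Some(part) = partitions.message().await? {
        let mut changes = client
            .read_change_stream(ReadChangeStreamRequest {
                table_name: table.clone(),
                partition: part.partition,
                ..Default::default()
            })
            .await?
            .into_inner();

        while let Some(change) = changes.message().await? {
            // Each response is a data change, a heartbeat, or a close-stream notice.
            println!("{:?}", change);
        }
    }
    Ok(())
}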
pub async fn execute_query(
    &mut self,
    request: impl IntoRequest<ExecuteQueryRequest>,
) -> Result<Response<Streaming<ExecuteQueryResponse>>, Status>
Executes a BTQL query against a particular Cloud Bigtable instance.
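A query sketch, to be read as shape only: the fields set here (instance_name, query) are assumptions from the google.bigtable.v2 proto, and a production call typically also specifies a result format and query parameters, which are left at their defaults.

async fn run_query(
    client: &mut BigtableClient<tonic::transport::Channel>,
) -> Result<(), tonic::Status> {
    let request = ExecuteQueryRequest {
        instance_name: "projects/my-project/instances/my-instance".to_string(),
        query: "SELECT _key FROM `my-table` LIMIT 10".to_string(),
        ..Default::default()
    };

    let mut results = client.execute_query(request).await?.into_inner();
    while let Some(part) = results.message().await? {
        // Responses interleave result-set metadata with batches of row data.
        println!("{:?}", part);
    }
    Ok(())
}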
Trait Implementations
impl<T: Clone> Clone for BigtableClient<T>
fn clone(&self) -> BigtableClient<T>
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
Auto Trait Implementations
impl<T> !Freeze for BigtableClient<T>
impl<T> RefUnwindSafe for BigtableClient<T> where T: RefUnwindSafe
impl<T> Send for BigtableClient<T> where T: Send
impl<T> Sync for BigtableClient<T> where T: Sync
impl<T> Unpin for BigtableClient<T> where T: Unpin
impl<T> UnwindSafe for BigtableClient<T> where T: UnwindSafe
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T where T: Clone
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wrap the input message T in a tonic::Request.