pub struct StreamConsumer<C = DefaultConsumerContext, R = DefaultRuntime>
where
    C: ConsumerContext,
{ /* private fields */ }
A high-level consumer with a Stream interface.

This consumer doesn't need to be polled explicitly. Extracting an item from the stream returned by stream() will implicitly poll the underlying Kafka consumer.
If you activate the consumer group protocol by calling subscribe(), the stream consumer will integrate with librdkafka's liveness detection as described in KIP-62. You must be sure to extract a message from the stream consumer at least every max.poll.interval.ms milliseconds, or librdkafka will assume that the processing thread is wedged and leave the consumer group.
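For example, a minimal consume loop that keeps this liveness deadline satisfied might look like the following sketch; the topic name "events" and the use of a tokio runtime are assumptions, not part of the crate:

    use rdkafka::consumer::{Consumer, StreamConsumer};
    use rdkafka::error::KafkaError;
    use rdkafka::Message;

    // Awaiting recv() in a tight loop keeps the underlying consumer polled,
    // so max.poll.interval.ms is continually met while handling is prompt.
    async fn consume_loop(consumer: &StreamConsumer) -> Result<(), KafkaError> {
        consumer.subscribe(&["events"])?; // "events" is a placeholder topic
        loop {
            let msg = consumer.recv().await?;
            println!("partition {} offset {}", msg.partition(), msg.offset());
        }
    }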
Implementations
impl<C, R> StreamConsumer<C, R>
where
    C: ConsumerContext + 'static,

pub fn stream(&self) -> MessageStream<'_>
Constructs a stream that yields messages from this consumer.
It is legal to have multiple live message streams for the same consumer, and to move those message streams across threads. Note, however, that the message streams share the same underlying state. A message received by the consumer will be delivered to only one of the live message streams. If you seek the underlying consumer, all message streams created from the consumer will begin to draw messages from the new position of the consumer.
If you want multiple independent views of a Kafka topic, create multiple consumers, not multiple message streams.
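For example, a sketch of consuming through the stream interface with futures' StreamExt; the error handling shown is illustrative only:

    use futures::StreamExt;
    use rdkafka::consumer::StreamConsumer;
    use rdkafka::Message;

    async fn drain(consumer: &StreamConsumer) {
        let mut stream = consumer.stream();
        // Each next() call implicitly polls the underlying Kafka consumer.
        while let Some(result) = stream.next().await {
            match result {
                Ok(msg) => println!("received message at offset {}", msg.offset()),
                Err(e) => eprintln!("kafka error: {}", e),
            }
        }
    }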
pub async fn recv(&self) -> Result<BorrowedMessage<'_>, KafkaError>
Receives the next message from the stream.
This method will block until the next message is available or an error occurs. It is legal to call recv from multiple threads simultaneously.

This method is cancellation safe.

Note that this method is exactly as efficient as constructing a single-use message stream and extracting one message from it:

    use futures::stream::StreamExt;

    consumer.stream().next().await.expect("MessageStream never returns None");
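Because recv is cancellation safe, it can be raced against other futures without losing messages. A sketch using tokio::select! with an assumed shutdown channel:

    use rdkafka::consumer::StreamConsumer;
    use rdkafka::Message;
    use tokio::sync::oneshot;

    async fn run(consumer: &StreamConsumer, mut shutdown: oneshot::Receiver<()>) {
        loop {
            tokio::select! {
                // If the shutdown branch wins the race, the partially polled
                // recv() is dropped without losing any message.
                result = consumer.recv() => match result {
                    Ok(msg) => println!("offset {}", msg.offset()),
                    Err(e) => eprintln!("kafka error: {}", e),
                },
                _ = &mut shutdown => break,
            }
        }
    }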
pub fn split_partition_queue(
    self: &Arc<Self>,
    topic: &str,
    partition: i32,
) -> Option<StreamPartitionQueue<C, R>>
Splits messages for the specified partition into their own stream.
If the topic or partition is invalid, returns None.

After calling this method, newly-fetched messages for the specified partition will be returned via StreamPartitionQueue::recv rather than StreamConsumer::recv. Note that there may be buffered messages for the specified partition that will continue to be returned by StreamConsumer::recv. For best results, call split_partition_queue before the first call to StreamConsumer::recv.

You must periodically await StreamConsumer::recv, even if no messages are expected, to serve callbacks. Consider using a background task like:

    tokio::spawn(async move {
        let message = stream_consumer.recv().await;
        panic!("main stream consumer queue unexpectedly received message: {:?}", message);
    });
Note that calling Consumer::assign will deactivate any existing partition queues. You will need to call this method for every partition that should be split after every call to assign.

Beware that this method is implemented for &Arc<Self>, not &self. You will need to wrap your consumer in an Arc in order to call this method. This design permits moving the partition queue to another thread while ensuring the partition queue does not outlive the consumer.
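Putting this together, a sketch that splits one partition into its own stream; the topic name, partition number, and tokio runtime are assumptions:

    use std::sync::Arc;
    use rdkafka::consumer::StreamConsumer;
    use rdkafka::Message;

    async fn split(consumer: Arc<StreamConsumer>) {
        // Split partition 0 of "events" before the first recv() so that no
        // messages for it end up buffered on the main queue.
        let partition_queue = consumer
            .split_partition_queue("events", 0)
            .expect("topic and partition are valid");

        // The main stream must still be driven to serve callbacks.
        let main = Arc::clone(&consumer);
        tokio::spawn(async move {
            let message = main.recv().await;
            panic!("main stream consumer queue unexpectedly received message: {:?}", message);
        });

        while let Ok(msg) = partition_queue.recv().await {
            println!("partition 0, offset {}", msg.offset());
        }
    }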
Trait Implementations
impl<C, R> Consumer<C> for StreamConsumer<C, R>
where
    C: ConsumerContext,
    R: AsyncRuntime,
fn group_metadata(&self) -> Option<ConsumerGroupMetadata>
fn subscribe(&self, topics: &[&str]) -> KafkaResult<()>
fn unsubscribe(&self)
fn assign(&self, assignment: &TopicPartitionList) -> KafkaResult<()>
fn unassign(&self) -> KafkaResult<()>
fn incremental_assign(&self, assignment: &TopicPartitionList) -> KafkaResult<()>
fn incremental_unassign(&self, assignment: &TopicPartitionList) -> KafkaResult<()>
fn assignment_lost(&self) -> bool
fn seek<'life0, 'life1, 'async_trait, T>(
    &'life0 self,
    topic: &'life1 str,
    partition: i32,
    offset: Offset,
    timeout: T,
) -> Pin<Box<dyn Future<Output = KafkaResult<()>> + Send + 'async_trait>>

Seeks to offset for the specified topic and partition. After a successful call to seek, the next poll of the consumer will return the message with offset.

fn seek_partitions<'life0, 'async_trait, T>(
    &'life0 self,
    topic_partition_list: TopicPartitionList,
    timeout: T,
) -> Pin<Box<dyn Future<Output = KafkaResult<TopicPartitionList>> + Send + 'async_trait>>

Seeks consumer for partitions in topic_partition_list to the per-partition offset in the offset field of TopicPartitionListElem. The offset can be either absolute (>= 0) or a logical offset. Seek should only be performed on already assigned/consumed partitions. Individual partition errors are reported in the per-partition error field of TopicPartitionListElem.
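For illustration, a sketch that rewinds one assigned partition to a fixed offset; the topic, partition, and offset values are placeholders, and since seek is async in this build it is awaited:

    use std::time::Duration;
    use rdkafka::consumer::{Consumer, StreamConsumer};
    use rdkafka::error::KafkaResult;
    use rdkafka::Offset;

    async fn rewind(consumer: &StreamConsumer) -> KafkaResult<()> {
        // After this succeeds, the next poll of partition 0 of "events"
        // returns the message at offset 42.
        consumer
            .seek("events", 0, Offset::Offset(42), Duration::from_secs(5))
            .await
    }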
fn commit<'life0, 'life1, 'async_trait>(
    &'life0 self,
    topic_partition_list: &'life1 TopicPartitionList,
    mode: CommitMode,
) -> Pin<Box<dyn Future<Output = KafkaResult<()>> + Send + 'async_trait>>
where
    Self: 'async_trait,
    'life0: 'async_trait,
    'life1: 'async_trait,
fn commit_consumer_state<'life0, 'async_trait>(
    &'life0 self,
    mode: CommitMode,
) -> Pin<Box<dyn Future<Output = KafkaResult<()>> + Send + 'async_trait>>
where
    Self: 'async_trait,
    'life0: 'async_trait,
fn commit_message<'life0, 'life1, 'life2, 'async_trait>(
    &'life0 self,
    message: &'life1 BorrowedMessage<'life2>,
    mode: CommitMode,
) -> Pin<Box<dyn Future<Output = KafkaResult<()>> + Send + 'async_trait>>
where
    Self: 'async_trait,
    'life0: 'async_trait,
    'life1: 'async_trait,
    'life2: 'async_trait,
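As a usage sketch, committing each message after it has been handled; CommitMode::Async avoids blocking the loop, and the processing step is elided:

    use rdkafka::consumer::{CommitMode, Consumer, StreamConsumer};
    use rdkafka::error::KafkaResult;

    async fn process_and_commit(consumer: &StreamConsumer) -> KafkaResult<()> {
        loop {
            let msg = consumer.recv().await?;
            // ... handle msg here ...
            // Record the message's offset so a restart resumes after it.
            consumer.commit_message(&msg, CommitMode::Async).await?;
        }
    }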
fn store_offset(&self, topic: &str, partition: i32, offset: i64) -> KafkaResult<()>

Stores offset to be used on the next (auto)commit. When using this, enable.auto.offset.store should be set to false in the config.

fn store_offset_from_message(&self, message: &BorrowedMessage<'_>) -> KafkaResult<()>

Like Consumer::store_offset, but the offset to store is derived from the provided message.

fn store_offsets(&self, tpl: &TopicPartitionList) -> KafkaResult<()>

Stores offsets to be used on the next (auto)commit. When using this, enable.auto.offset.store should be set to false in the config.
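A sketch of the store-then-auto-commit pattern, assuming the config set enable.auto.offset.store to false and left enable.auto.commit enabled:

    use rdkafka::consumer::{Consumer, StreamConsumer};
    use rdkafka::error::KafkaResult;

    async fn run(consumer: &StreamConsumer) -> KafkaResult<()> {
        loop {
            let msg = consumer.recv().await?;
            // ... process msg ...
            // Mark the message as done; the auto-commit interval will then
            // commit the stored offset in the background.
            consumer.store_offset_from_message(&msg)?;
        }
    }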
fn subscription(&self) -> KafkaResult<TopicPartitionList>
fn assignment(&self) -> KafkaResult<TopicPartitionList>
fn committed<'life0, 'async_trait, T>(
    &'life0 self,
    timeout: T,
) -> Pin<Box<dyn Future<Output = KafkaResult<TopicPartitionList>> + Send + 'async_trait>>
fn committed_offsets<'life0, 'async_trait, T>(
    &'life0 self,
    tpl: TopicPartitionList,
    timeout: T,
) -> Pin<Box<dyn Future<Output = KafkaResult<TopicPartitionList>> + Send + 'async_trait>>
fn offsets_for_timestamp<'life0, 'async_trait, T>(
    &'life0 self,
    timestamp: i64,
    timeout: T,
) -> Pin<Box<dyn Future<Output = KafkaResult<TopicPartitionList>> + Send + 'async_trait>>
fn offsets_for_times<'life0, 'async_trait, T>(
    &'life0 self,
    timestamps: TopicPartitionList,
    timeout: T,
) -> Pin<Box<dyn Future<Output = KafkaResult<TopicPartitionList>> + Send + 'async_trait>>
fn position(&self) -> KafkaResult<TopicPartitionList>
fn fetch_metadata<'life0, 'life1, 'async_trait, T>(
    &'life0 self,
    topic: Option<&'life1 str>,
    timeout: T,
) -> Pin<Box<dyn Future<Output = KafkaResult<Metadata>> + Send + 'async_trait>>
fn fetch_watermarks<'life0, 'life1, 'async_trait, T>(
    &'life0 self,
    topic: &'life1 str,
    partition: i32,
    timeout: T,
) -> Pin<Box<dyn Future<Output = KafkaResult<(i64, i64)>> + Send + 'async_trait>>
fn fetch_group_list<'life0, 'life1, 'async_trait, T>(
    &'life0 self,
    group: Option<&'life1 str>,
    timeout: T,
) -> Pin<Box<dyn Future<Output = KafkaResult<GroupList>> + Send + 'async_trait>>
fn pause(&self, partitions: &TopicPartitionList) -> KafkaResult<()>
fn resume(&self, partitions: &TopicPartitionList) -> KafkaResult<()>
fn rebalance_protocol(&self) -> RebalanceProtocol
fn context(&self) -> &Arc<C>

Returns a reference to the ConsumerContext used to create this consumer.

impl<R> FromClientConfig for StreamConsumer<DefaultConsumerContext, R>
where
    R: AsyncRuntime,
fn from_config<'life0, 'async_trait>(
    config: &'life0 ClientConfig,
) -> Pin<Box<dyn Future<Output = KafkaResult<Self>> + Send + 'async_trait>>
where
    Self: 'async_trait,
    'life0: 'async_trait,
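For illustration, constructing a StreamConsumer from a ClientConfig; the broker address and group id are placeholders, and since from_config is async in this build the construction is awaited:

    use rdkafka::config::{ClientConfig, FromClientConfig};
    use rdkafka::consumer::StreamConsumer;
    use rdkafka::error::KafkaResult;

    async fn build() -> KafkaResult<StreamConsumer> {
        let mut config = ClientConfig::new();
        config
            .set("bootstrap.servers", "localhost:9092") // placeholder broker
            .set("group.id", "example-group");          // placeholder group id
        StreamConsumer::from_config(&config).await
    }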
impl<C, R> FromClientConfigAndContext<C> for StreamConsumer<C, R>
where
    C: ConsumerContext + 'static,
    R: AsyncRuntime,

Creates a new StreamConsumer starting from a ClientConfig.