public abstract class Consumer<P>
extends java.lang.Object

Type Parameters:
P - the payload's type
| Constructor and Description |
|---|
| Consumer() |
| Modifier and Type | Method and Description |
|---|---|
| void | assign(java.util.List<TopicPartition> partitions) - Manually assign a list of partitions to this consumer. |
| java.util.Set<TopicPartition> | assignment() - Get the set of topicPartitions currently assigned to this consumer. |
| java.util.Map<TopicPartition,java.lang.Long> | beginningOffsets(java.util.List<TopicPartition> partitions) - Get the first offset for the given topicPartitions. |
| ConsumerMetricPerClientIdAndTopics | bytesFetchRequestSizeAvgMetric() - Average bytes per fetch request for each consumer and its topics. |
| ConsumerMetricPerClientIdAndTopics | bytesPerSecondAvgMetric() - Average bytes consumed per second for each consumer and its topics. |
| ConsumerMetricPerClientIdAndTopics | bytesTotalMetric() - Total bytes consumed per consumer and its topics. |
| void | close() - Close the consumer, waiting for up to the default timeout of 30 seconds for any needed cleanup. |
| void | close(java.time.Duration timeout) - Tries to close the consumer cleanly within the specified timeout. |
| void | commitAsync() - Commit offsets returned on the last poll() for all the subscribed topics and topicPartitions. |
| void | commitAsync(java.util.Map<TopicPartition,OffsetAndMetadata> offsets, OffsetCommitCallback offsetCommitCallback) - Commit the specified offsets for the specified topics and topicPartitions to Kafka. |
| void | commitAsync(OffsetCommitCallback offsetCommitCallback) - Commit offsets returned on the last poll() for the subscribed topics and topicPartitions. |
| void | commitSync() - Commit offsets returned on the last poll() for all the subscribed topics and topicPartitions. |
| void | commitSync(java.util.Map<TopicPartition,OffsetAndMetadata> offsets) - Commit the specified offsets for the specified topics and topicPartitions. |
| OffsetAndMetadata | committed(TopicPartition partition) - Get the last committed offset for the given partition (whether the commit happened by this process or another). |
| java.util.Map<TopicPartition,java.lang.Long> | endOffsets(java.util.List<TopicPartition> partitions) - Get the end offsets for the given topicPartitions. |
| ConsumerMetricPerClientId | fetchRequestAvgMetric() - The number of fetch requests per second. |
| java.util.Map<java.lang.String,java.util.List<PartitionInfo>> | listTopics() - Get metadata about topicPartitions for all topics that the user is authorized to view. |
| java.util.Map<MetricName,? extends org.apache.kafka.common.Metric> | metrics() - Get the metrics kept by the consumer. |
| java.util.Map<TopicPartition,OffsetAndTimestamp> | offsetsForTimes(java.util.Map<TopicPartition,java.lang.Long> timestampsToSearch) - Look up the offsets for the given topicPartitions by timestamp. |
| java.util.List<PartitionInfo> | partitionsFor(java.lang.String topic) - Get metadata about the topicPartitions for a given topic. |
| void | pause(java.util.Collection<TopicPartition> partitions) - Suspend fetching from the requested topicPartitions. |
| java.util.Set<TopicPartition> | paused() - Get the set of topicPartitions that were previously paused by a call to pause(Collection). |
| ConsumerRecords | poll(java.time.Duration timeout) - Fetch data for the topics or topicPartitions specified using one of the subscribe/assign APIs. |
| ConsumerRecords | poll(long timeout) - Fetch data for the topics or topicPartitions specified using one of the subscribe/assign APIs. |
| long | position(TopicPartition partition) - Get the offset of the next record that will be fetched (if a record with that offset exists). |
| ConsumerMetricPerClientIdAndTopicPartitions | recordsLagAvgPerTopicPartition() - The average lag of the partition. |
| ConsumerMetricPerClientId | recordsLagMaxMetric() - The maximum lag in terms of number of records for any partition in this window. |
| ConsumerMetricPerClientIdAndTopicPartitions | recordsLagMaxPerTopicPartition() - The max lag of the partition. |
| ConsumerMetricPerClientIdAndTopicPartitions | recordsLagPerTopicPartition() - The latest lag of the partition. |
| ConsumerMetricPerClientIdAndTopics | recordsPerRequestAvgMetric() - Average number of records per fetch request for each consumer and its topics. |
| ConsumerMetricPerClientIdAndTopics | recordsPerSecondAvgMetric() - Average number of records consumed per second for each consumer and its topics. |
| ConsumerMetricPerClientIdAndTopics | recordsTotalMetric() - Total number of records consumed per consumer and its topics. |
| void | resume(java.util.Collection<TopicPartition> partitions) - Resume the specified topicPartitions which have been paused with pause(Collection). |
| void | seek(TopicPartition partition, long offset) - Overrides the fetch offset that the consumer will use on the next poll(timeout). |
| void | seekToBeginning(TopicPartition... topicPartitions) - Seek to the first offset for each of the given topicPartitions. |
| void | seekToEnd(TopicPartition... topicPartitions) - Seek to the last offset for each of the given topicPartitions. |
| void | subscribe(java.util.List<java.lang.String> topics) - Subscribe to the given list of topics to get dynamically assigned topicPartitions. |
| void | subscribe(java.util.List<java.lang.String> topics, ConsumerRebalanceListener consumerRebalanceListener) - Subscribe to the given list of topics to get dynamically assigned topicPartitions. |
| void | subscribe(java.util.Map<java.lang.String,java.util.List<java.lang.String>> groupTopics) - Subscribe to the given tenantGroups and topics to get dynamically assigned topicPartitions. |
| void | subscribe(java.util.Map<java.lang.String,java.util.List<java.lang.String>> groupTopics, ConsumerRebalanceListener consumerRebalanceListener) - Subscribe to the given tenantGroups and topics to get dynamically assigned topicPartitions. |
| void | subscribe(java.util.regex.Pattern pattern, ConsumerRebalanceListener consumerRebalanceListener) - Subscribe to all topics matching the specified pattern to get dynamically assigned topicPartitions. |
| java.util.Set<java.lang.String> | subscription() - Get the current subscription. |
| ConsumerMetricPerClientId | totalFetchRequestMetric() - Total number of fetch requests for each consumer. |
| void | unsubscribe() - Unsubscribe from topics currently subscribed with subscribe(List). |
| void | wakeup() - Wakeup the consumer. |
public void subscribe(java.util.Map<java.lang.String,java.util.List<java.lang.String>> groupTopics)
Subscribe to the given tenantGroups and topics to get dynamically assigned topicPartitions. Note that it is not possible to combine this with manual partition assignment through assign(List).
If the given list of topics is empty, it is treated the same as unsubscribe().
As part of group management, the consumer will keep track of the list of consumers that belong to a particular group and will trigger a rebalance operation if group membership or the cluster and topic metadata change. When any of these events are triggered, the provided listener will be invoked first to indicate that the consumer's assignment has been revoked, and then again when the new assignment has been received. Note that this listener will immediately override any listener set in a previous call to subscribe. It is guaranteed, however, that the topicPartitions revoked/assigned through this interface are from topics subscribed in this call. See ConsumerRebalanceListener for more details.
Parameters:
groupTopics - The tenantGroups and topics to subscribe to
Throws:
org.apache.commons.lang.NullArgumentException - if any argument is null
DatabusClientRuntimeException - if subscription fails.

public void subscribe(java.util.Map<java.lang.String,java.util.List<java.lang.String>> groupTopics, ConsumerRebalanceListener consumerRebalanceListener)
Subscribe to the given tenantGroups and topics to get dynamically assigned topicPartitions. Note that it is not possible to combine this with manual partition assignment through assign(List).
If the given list of topics is empty, it is treated the same as unsubscribe().
As part of group management, the consumer will keep track of the list of consumers that belong to a particular group and will trigger a rebalance operation if group membership or the cluster and topic metadata change. When any of these events are triggered, the provided listener will be invoked first to indicate that the consumer's assignment has been revoked, and then again when the new assignment has been received. Note that this listener will immediately override any listener set in a previous call to subscribe. It is guaranteed, however, that the topicPartitions revoked/assigned through this interface are from topics subscribed in this call. See ConsumerRebalanceListener for more details.
Parameters:
groupTopics - The tenantGroups and topics to subscribe to
consumerRebalanceListener - Non-null listener instance to get notifications on partition assignment/revocation for the subscribed topics
Throws:
org.apache.commons.lang.NullArgumentException - if any argument is null
DatabusClientRuntimeException - if subscription fails.

public void subscribe(java.util.List<java.lang.String> topics, ConsumerRebalanceListener consumerRebalanceListener)
Subscribe to the given list of topics to get dynamically assigned topicPartitions. Note that it is not possible to combine this with manual partition assignment through assign(List).
If the given list of topics is empty, it is treated the same as unsubscribe().
As part of group management, the consumer will keep track of the list of consumers that belong to a particular group and will trigger a rebalance operation if group membership or the cluster and topic metadata change. When any of these events are triggered, the provided listener will be invoked first to indicate that the consumer's assignment has been revoked, and then again when the new assignment has been received. Note that this listener will immediately override any listener set in a previous call to subscribe. It is guaranteed, however, that the topicPartitions revoked/assigned through this interface are from topics subscribed in this call. See ConsumerRebalanceListener for more details.
Parameters:
topics - The non-null list of topics to subscribe to
consumerRebalanceListener - Non-null listener instance to get notifications on partition assignment/revocation for the subscribed topics
Throws:
org.apache.commons.lang.NullArgumentException - if any argument is null
DatabusClientRuntimeException - if subscription fails.

public void subscribe(java.util.List<java.lang.String> topics)
Subscribe to the given list of topics to get dynamically assigned topicPartitions. Note that it is not possible to combine this with manual partition assignment through assign(List).
If the given list of topics is empty, it is treated the same as unsubscribe().
As part of group management, the consumer will keep track of the list of consumers that belong to a particular group and will trigger a rebalance operation if group membership or the cluster and topic metadata change. When any of these events are triggered, the provided listener will be invoked first to indicate that the consumer's assignment has been revoked, and then again when the new assignment has been received. Note that this listener will immediately override any listener set in a previous call to subscribe. It is guaranteed, however, that the topicPartitions revoked/assigned through this interface are from topics subscribed in this call. See ConsumerRebalanceListener for more details.
Parameters:
topics - The list of topics to subscribe to
Throws:
org.apache.commons.lang.NullArgumentException - if any argument is null
DatabusClientRuntimeException - if subscription fails.

public java.util.Set<TopicPartition> assignment()
Get the set of topicPartitions currently assigned to this consumer. If topicPartitions were directly assigned using assign(List), this will simply return the same topicPartitions that were assigned. If topic subscription was used, this will give the set of topicPartitions currently assigned to the consumer (which may be none if the assignment hasn't happened yet, or the topicPartitions are in the process of getting reassigned).
Throws:
DatabusClientRuntimeException - if it fails.

public java.util.Set<java.lang.String> subscription()
Get the current subscription. Will return the same topics used in the most recent call to subscribe(List, ConsumerRebalanceListener), or an empty set if no such call has been made.
Throws:
DatabusClientRuntimeException - if it fails.

public void assign(java.util.List<TopicPartition> partitions)
Manually assign a list of topicPartitions to this consumer. Manual topic assignment through this method does not use the consumer's group management functionality. As such, there will be no rebalance operation triggered when group membership or cluster and topic metadata change. Note that it is not possible to use both manual partition assignment with assign(List) and group assignment with subscribe(List, ConsumerRebalanceListener).
Parameters:
partitions - The list of topicPartitions to assign to this consumer
Throws:
DatabusClientRuntimeException - if it fails.

public void subscribe(java.util.regex.Pattern pattern, ConsumerRebalanceListener consumerRebalanceListener)
Subscribe to all topics matching the specified pattern to get dynamically assigned topicPartitions. As part of group management, the consumer will keep track of the list of consumers that belong to a particular group and will trigger a rebalance operation if group membership or the cluster and topic metadata change.
Parameters:
pattern - Pattern to subscribe to
consumerRebalanceListener - consumer listener for rebalancing databus operations
Throws:
DatabusClientRuntimeException - if it fails.

public void unsubscribe()
Unsubscribe from topics currently subscribed with subscribe(List). This also clears any topicPartitions directly assigned through assign(List).
Throws:
DatabusClientRuntimeException - if it fails.

public ConsumerRecords poll(long timeout)
Fetch data for the topics or topicPartitions specified using one of the subscribe/assign APIs. On each poll, the consumer will try to use the last consumed offset as the starting offset and fetch sequentially. The last consumed offset can be manually set through seek(TopicPartition, long) or automatically set as the last committed offset for the subscribed list of topicPartitions.
Parameters:
timeout - The time, in milliseconds, spent waiting in poll if data is not available. If 0, returns immediately with any records that are available now. Must not be negative.
Throws:
DatabusClientRuntimeException - if poll fails. The original cause could be a WakeupException if wakeup() is called before or while this function is called.
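The basic consume loop implied by the subscribe/poll contract can be sketched as follows. This is an illustrative sketch only: the way a Consumer instance is obtained, the payload type, and the helper methods (createConsumer, process) are assumptions not specified in this reference.

```java
// Hypothetical setup; obtaining a Consumer is outside this reference.
Consumer<byte[]> consumer = createConsumer();
consumer.subscribe(java.util.Arrays.asList("topic1", "topic2"));
try {
    while (running) {
        // Wait up to 1000 ms for records; returns immediately if any
        // records are already available.
        ConsumerRecords records = consumer.poll(1000L);
        process(records); // hypothetical application logic
    }
} finally {
    // Waits up to the default 30-second timeout for cleanup;
    // wakeup() cannot interrupt close().
    consumer.close();
}
```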
public ConsumerRecords poll(java.time.Duration timeout)
Fetch data for the topics or topicPartitions specified using one of the subscribe/assign APIs. On each poll, the consumer will try to use the last consumed offset as the starting offset and fetch sequentially. The last consumed offset can be manually set through seek(TopicPartition, long) or automatically set as the last committed offset for the subscribed list of topicPartitions.
This method returns immediately if there are records available. Otherwise, it will await the passed timeout. If the timeout expires, an empty record set will be returned. Note that this method may block beyond the timeout in order to execute custom ConsumerRebalanceListener callbacks.
Parameters:
timeout - The maximum time to wait if data is not available. If zero, returns immediately with any records that are available now. Must not be negative.
Throws:
DatabusClientRuntimeException - if poll fails. The original cause could be any of these exceptions:
- a WakeupException if wakeup() is called before or while this function is called
- an ArithmeticException if the timeout is greater than Long.MAX_VALUE milliseconds
- an InvalidTopicException if any subscribed topic name is invalid (see org.apache.kafka.common.internals.Topic#validate(String))
public void commitSync()
Commit offsets returned on the last poll() for all the subscribed topics and topicPartitions.
This commits offsets only to Kafka. The offsets committed using this API will be used on the first fetch after every rebalance and also on startup. As such, if you need to store offsets in anything other than Kafka, this API should not be used.
This is a synchronous commit and will block until either the commit succeeds or an unrecoverable error is encountered (in which case it is thrown to the caller).
Throws:
DatabusClientRuntimeException - if commitSync fails. The original cause could be any of these exceptions:
- a commit failure if the consumer tried to commit for partitions that are no longer assigned via subscribe(List), or if there is an active group with the same groupId which is using group management
- a WakeupException if wakeup() is called before or while this function is called
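For at-least-once delivery, a common pattern is to commit synchronously only after a batch has been fully processed, so that a crash replays the uncommitted batch rather than losing it. A hedged sketch, assuming a consumer that is already subscribed and hypothetical process/running helpers:

```java
// At-least-once sketch: commit only after processing succeeds.
while (running) {
    ConsumerRecords records = consumer.poll(1000L);
    process(records); // hypothetical application logic
    // Blocks until the commit succeeds; unrecoverable errors are
    // thrown to the caller as DatabusClientRuntimeException.
    consumer.commitSync();
}
```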
public void commitSync(java.util.Map<TopicPartition,OffsetAndMetadata> offsets)
Commit the specified offsets for the specified topics and topicPartitions.
This commits offsets to Kafka. The offsets committed using this API will be used on the first fetch after every rebalance and also on startup. As such, if you need to store offsets in anything other than Kafka, this API should not be used.
This is a synchronous commit and will block until either the commit succeeds or an unrecoverable error is encountered (in which case it is thrown to the caller).
Parameters:
offsets - A map of offsets by partition with associated metadata
Throws:
DatabusClientRuntimeException - if commitSync fails. The original cause could be any of these exceptions:
- a commit failure if the consumer tried to commit for partitions that are no longer assigned via subscribe(List), or if there is an active group with the same groupId which is using group management
- a WakeupException if wakeup() is called before or while this function is called
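Committing an explicit offset map allows finer-grained control, for example committing after each partition's batch. The sketch below assumes the Kafka convention that the committed offset is the offset of the next message to read, and that TopicPartition and OffsetAndMetadata expose the usual Kafka-style constructors; both are assumptions, not statements from this reference.

```java
// Sketch: commit a specific position for a single partition.
java.util.Map<TopicPartition, OffsetAndMetadata> offsets = new java.util.HashMap<>();
long lastProcessedOffset = 41L; // offset of the last record handled
offsets.put(new TopicPartition("topic1", 0),
            new OffsetAndMetadata(lastProcessedOffset + 1)); // next offset to read
consumer.commitSync(offsets);
```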
public void commitAsync()
Commit offsets returned on the last poll() for all the subscribed topics and topicPartitions. Same as commitAsync(null).
Throws:
DatabusClientRuntimeException - if commitAsync fails.

public void commitAsync(OffsetCommitCallback offsetCommitCallback)
Commit offsets returned on the last poll() for the subscribed topics and topicPartitions.
This commits offsets only to Kafka. The offsets committed using this API will be used on the first fetch after every rebalance and also on startup. As such, if you need to store offsets in anything other than Kafka, this API should not be used.
This is an asynchronous call and will not block. Any errors encountered are either passed to the callback (if provided) or discarded.
Parameters:
offsetCommitCallback - Callback to invoke when the commit completes
Throws:
DatabusClientRuntimeException - if commitAsync fails.

public void seek(TopicPartition partition, long offset)
Overrides the fetch offset that the consumer will use on the next poll(timeout). If this API is invoked for the same partition more than once, the latest offset will be used on the next poll(). Note that you may lose data if this API is arbitrarily used in the middle of consumption to reset the fetch offsets.
Parameters:
partition - partition to seek
offset - offset to seek to
Throws:
DatabusClientRuntimeException - if seek fails.

public void seekToBeginning(TopicPartition... topicPartitions)
Seek to the first offset for each of the given topicPartitions. The seek takes effect only when poll(long) or position(TopicPartition) are called.
Parameters:
topicPartitions - topicPartitions to seek
Throws:
DatabusClientRuntimeException - if it fails.

public void seekToEnd(TopicPartition... topicPartitions)
Seek to the last offset for each of the given topicPartitions. The seek takes effect only when poll(long) or position(TopicPartition) are called.
Parameters:
topicPartitions - topicPartitions to seek
Throws:
DatabusClientRuntimeException - if it fails.

public long position(TopicPartition partition)
Get the offset of the next record that will be fetched (if a record with that offset exists).
Parameters:
partition - The partition to get the position for
Throws:
DatabusClientRuntimeException - if position fails. The original cause could be a WakeupException if wakeup() is called before or while this function is called.
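seek and position work together when replaying a partition. A hedged sketch (the topic name, partition number, and offset are illustrative, and the TopicPartition constructor is assumed to follow the Kafka convention):

```java
// Sketch: rewind a partition and verify the next fetch position.
TopicPartition tp = new TopicPartition("topic1", 0);
consumer.seek(tp, 42L);            // takes effect on the next poll()
long next = consumer.position(tp); // offset of the next record to be fetched
```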
public OffsetAndMetadata committed(TopicPartition partition)
Get the last committed offset for the given partition (whether the commit happened by this process or another).
This call may block to do a remote call if the partition in question isn't assigned to this consumer or if the consumer hasn't yet initialized its cache of committed offsets.
Parameters:
partition - The partition to check
Throws:
DatabusClientRuntimeException - if committed fails. The original cause could be a WakeupException if wakeup() is called before or while this function is called.
public java.util.Map<MetricName,? extends org.apache.kafka.common.Metric> metrics()
Get the metrics kept by the consumer.
Throws:
DatabusClientRuntimeException - if metrics fails.

public ConsumerMetricPerClientIdAndTopics recordsPerSecondAvgMetric()
public ConsumerMetricPerClientIdAndTopics recordsTotalMetric()
public ConsumerMetricPerClientIdAndTopics bytesPerSecondAvgMetric()
public ConsumerMetricPerClientIdAndTopics bytesTotalMetric()
public ConsumerMetricPerClientIdAndTopics recordsPerRequestAvgMetric()
public ConsumerMetricPerClientId totalFetchRequestMetric()
public ConsumerMetricPerClientId fetchRequestAvgMetric()
public ConsumerMetricPerClientId recordsLagMaxMetric()
public ConsumerMetricPerClientIdAndTopics bytesFetchRequestSizeAvgMetric()
public ConsumerMetricPerClientIdAndTopicPartitions recordsLagPerTopicPartition()
public ConsumerMetricPerClientIdAndTopicPartitions recordsLagAvgPerTopicPartition()
public ConsumerMetricPerClientIdAndTopicPartitions recordsLagMaxPerTopicPartition()
public java.util.List<PartitionInfo> partitionsFor(java.lang.String topic)
Get metadata about the topicPartitions for a given topic.
Parameters:
topic - The topic to get partition metadata for
Throws:
DatabusClientRuntimeException - if partitionsFor fails. The original cause could be a WakeupException if wakeup() is called before or while this function is called.
public java.util.Map<java.lang.String,java.util.List<PartitionInfo>> listTopics()
Get metadata about topicPartitions for all topics that the user is authorized to view.
Throws:
DatabusClientRuntimeException - if listTopics fails. The original cause could be a WakeupException if wakeup() is called before or while this function is called.
public void pause(java.util.Collection<TopicPartition> partitions)
Suspend fetching from the requested topicPartitions. Future calls to poll(long) will not return any records from these topicPartitions until they have been resumed using resume(Collection).
Note that this method does not affect partition subscription. In particular, it does not cause a group rebalance when automatic assignment is used.
Parameters:
partitions - The topicPartitions which should be paused
Throws:
DatabusClientRuntimeException - if it fails.

public void resume(java.util.Collection<TopicPartition> partitions)
Resume the specified topicPartitions which have been paused with pause(Collection). New calls to poll(long) will return records from these topicPartitions if there are any to be fetched. If the topicPartitions were not previously paused, this method is a no-op.
Parameters:
partitions - The topicPartitions which should be resumed
Throws:
DatabusClientRuntimeException - if it fails.

public void close()
Close the consumer, waiting for up to the default timeout of 30 seconds for any needed cleanup. See close(Duration) for details. Note that wakeup() cannot be used to interrupt close.
Throws:
DatabusClientRuntimeException - if it fails.

public void close(java.time.Duration timeout)
Tries to close the consumer cleanly, waiting up to the specified timeout for the consumer to complete pending commits and leave the group. If auto-commit is enabled, this will commit the current offsets if possible within the timeout. If the consumer is unable to complete offset commits and gracefully leave the group before the timeout expires, the consumer is force closed. Note that wakeup() cannot be used to interrupt close.
Parameters:
timeout - The maximum time to wait for the consumer to close gracefully. The value must be non-negative. Specifying a timeout of zero means do not wait for pending requests to complete.
Throws:
DatabusClientRuntimeException - if there is an error.

public void wakeup()
Wakeup the consumer. The thread which is blocking in an operation will throw WakeupException.
Throws:
DatabusClientRuntimeException - if it fails.

public void commitAsync(java.util.Map<TopicPartition,OffsetAndMetadata> offsets, OffsetCommitCallback offsetCommitCallback)
Commit the specified offsets for the specified topics and topicPartitions to Kafka.
This commits offsets to Kafka. The offsets committed using this API will be used on the first fetch after every rebalance and also on startup. As such, if you need to store offsets in anything other than Kafka, this API should not be used.
This is an asynchronous call and will not block. Any errors encountered are either passed to the callback (if provided) or discarded.
Parameters:
offsets - A map of offsets by partition with associated metadata. This map will be copied internally, so it is safe to mutate the map after returning.
offsetCommitCallback - Callback to invoke when the commit completes
Throws:
DatabusClientRuntimeException - if it fails.

public java.util.Map<TopicPartition,OffsetAndTimestamp> offsetsForTimes(java.util.Map<TopicPartition,java.lang.Long> timestampsToSearch)
Look up the offsets for the given topicPartitions by timestamp.
This is a blocking call. The consumer does not have to be assigned the topicPartitions. If the message format version in a partition is before 0.10.0, i.e. the messages do not have timestamps, null will be returned for that partition.
Parameters:
timestampsToSearch - the mapping from partition to the timestamp to look up. null will be returned for a partition if there is no such message.
Throws:
DatabusClientRuntimeException - if it fails. The original cause could be a timeout if the offsets cannot be fetched before the timeout specified by default.api.timeout.ms expires.
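A typical use of offsetsForTimes is to start consuming from a wall-clock point in time. A hedged sketch, assuming OffsetAndTimestamp exposes an offset() accessor and TopicPartition a (topic, partition) constructor as in the Kafka API:

```java
// Sketch: seek a partition to the first message at or after a timestamp.
TopicPartition tp = new TopicPartition("topic1", 0);
long oneHourAgo = System.currentTimeMillis() - 3_600_000L;

java.util.Map<TopicPartition, java.lang.Long> query = new java.util.HashMap<>();
query.put(tp, oneHourAgo);

java.util.Map<TopicPartition, OffsetAndTimestamp> result = consumer.offsetsForTimes(query);
OffsetAndTimestamp hit = result.get(tp);
if (hit != null) {                   // null: no message at/after the timestamp
    consumer.seek(tp, hit.offset()); // takes effect on the next poll()
}
```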
public java.util.Map<TopicPartition,java.lang.Long> beginningOffsets(java.util.List<TopicPartition> partitions)
Get the first offset for the given topicPartitions.
This method does not change the current consumer position of the topicPartitions.
Parameters:
partitions - the topicPartitions to get the earliest offsets for
Throws:
DatabusClientRuntimeException - if it fails. The original cause could be a timeout if the offsets cannot be fetched before the timeout specified by default.api.timeout.ms expires.
public java.util.Map<TopicPartition,java.lang.Long> endOffsets(java.util.List<TopicPartition> partitions)
Get the end offsets for the given topicPartitions. In the default read_uncommitted isolation level, the end offset is the high watermark (that is, the offset of the last successfully replicated message plus one). For read_committed consumers, the end offset is the last stable offset (LSO), which is the minimum of the high watermark and the smallest offset of any open transaction. Finally, if the partition has never been written to, the end offset is 0.
This method does not change the current consumer position of the topicPartitions.
Parameters:
partitions - the topicPartitions to get the end offsets for
Throws:
DatabusClientRuntimeException - if it fails. The original cause could be a timeout if the offsets cannot be fetched before the timeout specified by request.timeout.ms expires.
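Because endOffsets does not move the consumer's position, it can be combined with position to estimate how far behind the consumer is. A hedged sketch (topic name and partition are illustrative):

```java
// Sketch: approximate per-partition lag as endOffset - position.
TopicPartition tp = new TopicPartition("topic1", 0);
java.util.Map<TopicPartition, java.lang.Long> ends =
        consumer.endOffsets(java.util.Collections.singletonList(tp));
long lag = ends.get(tp) - consumer.position(tp); // records not yet fetched
```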
public java.util.Set<TopicPartition> paused()
Get the set of topicPartitions that were previously paused by a call to pause(Collection).
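pause, paused, and resume support flow control without giving up partition ownership, since pausing does not trigger a rebalance. A hedged sketch:

```java
// Sketch: temporarily stop fetching from one partition, then resume it.
TopicPartition tp = new TopicPartition("topic1", 0);
consumer.pause(java.util.Collections.singletonList(tp));
boolean isPaused = consumer.paused().contains(tp); // poll() skips tp while true
// ... later, when ready to continue ...
consumer.resume(java.util.Collections.singletonList(tp));
```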