```
tessl install tessl/maven-org-apache-kafka--kafka-2-13@4.1.0
```

Apache Kafka is a distributed event streaming platform that provides three key capabilities: publish-subscribe messaging, durable storage, and real-time stream processing.
Maven Coordinates:

```xml
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.13</artifactId>
  <version>4.1.0</version>
</dependency>
```

For client-only applications (Producer/Consumer):

```xml
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>4.1.0</version>
</dependency>
```

Get started quickly with Kafka:
| API | Use When | Key Features |
|---|---|---|
| Producer | Publishing events to topics | High throughput, exactly-once, transactions |
| Consumer | Reading events from topics | Consumer groups, offset management, ShareConsumer |
| Admin | Managing cluster resources | Topic/config management, monitoring |
| Streams | Real-time stream processing | Stateful transformations, exactly-once, DSL |
| Connect | Integrating external systems | Pre-built connectors, schema management |
Use Producer API when: publishing events to Kafka topics, including high-throughput, idempotent, or transactional writes.
Use Consumer API when: reading events from topics with consumer groups and offset management.
Use ShareConsumer API (new in 4.1) when: multiple consumers should cooperatively process a topic queue-style through share groups.
Use Admin API when: managing cluster resources such as topics and configurations programmatically.
Use Streams API when: building real-time, stateful stream processing applications with exactly-once guarantees.
Use Connect API when: integrating external systems through source and sink connectors.

A minimal producer quickstart is sketched below.
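As a quickstart, here is a minimal sketch that sends a single record; the broker address `localhost:9092` and the topic name `my-topic` are assumptions for illustration:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class QuickstartProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // try-with-resources closes the producer and flushes buffered records
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "hello, kafka"));
        }
    }
}
```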
Producer API - Publish streams of records to Kafka topics
Consumer API - Subscribe to topics and process streams of records
Admin API - Manage Kafka resources programmatically
Streams API - Build real-time stream processing applications
Connect API - Integrate external systems with Kafka
Producer:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.ProducerConfig;
```

Consumer:

```java
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.ConsumerConfig;
```

Admin:

```java
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.admin.CreateTopicsResult;
```

Streams:

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
```

Connect:

```java
import org.apache.kafka.connect.source.SourceConnector;
import org.apache.kafka.connect.source.SourceTask;
import org.apache.kafka.connect.sink.SinkConnector;
import org.apache.kafka.connect.sink.SinkTask;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.Struct;
```

Producer configuration presets:

| Preset | Use Case | Key Settings |
|---|---|---|
| High Throughput | Batch processing | batch.size=65536, linger.ms=20, compression.type=lz4 |
| Low Latency | Real-time | batch.size=1, linger.ms=0, compression.type=none |
| High Durability | Critical data | acks=all, enable.idempotence=true, transactional.id=<id> |
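As one example, the High Throughput preset from the table above could be applied like this; the broker address and serializers are assumptions for illustration:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.BATCH_SIZE_CONFIG, 65536);       // batch.size: larger batches per partition
props.put(ProducerConfig.LINGER_MS_CONFIG, 20);           // linger.ms: wait up to 20 ms to fill batches
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4"); // compression.type: compress whole batches
Producer<String, String> producer = new KafkaProducer<>(props);
```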
Consumer configuration presets:

| Preset | Use Case | Key Settings |
|---|---|---|
| High Throughput | Batch processing | fetch.min.bytes=10240, max.poll.records=1000 |
| Low Latency | Real-time | fetch.min.bytes=1, fetch.max.wait.ms=10 |
| Exactly-Once | Critical data | isolation.level=read_committed, enable.auto.commit=false |
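Similarly, the Exactly-Once consumer preset maps onto configuration like the following sketch; the broker, group id, and deserializers are assumptions:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // assumed group id
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");   // skip aborted transactional records
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");         // commit offsets manually
Consumer<String, String> consumer = new KafkaConsumer<>(props);
```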
Streams configuration presets:

| Preset | Use Case | Key Settings |
|---|---|---|
| High Throughput | Large-scale processing | num.stream.threads=4, cache.max.bytes.buffering=10485760 |
| Exactly-Once | Critical data | processing.guarantee=exactly_once_v2 |
| Low Latency | Real-time | commit.interval.ms=100, cache.max.bytes.buffering=0 |
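For Streams, the Exactly-Once preset could be wired up as in this sketch; the application id, broker address, and the trivial pass-through topology are assumptions for illustration:

```java
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // assumed application id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);

StreamsBuilder builder = new StreamsBuilder();
builder.stream("input").to("output"); // trivial pass-through topology for illustration
KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.start();
```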
Producer with transactions:

```java
// Assumes props already contains bootstrap servers and serializers,
// and record is a ProducerRecord built earlier
props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-transactional-id");
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // required for transactions
Producer<String, String> producer = new KafkaProducer<>(props);
producer.initTransactions();
try {
    producer.beginTransaction();
    producer.send(record);
    producer.commitTransaction();
} catch (KafkaException e) {
    producer.abortTransaction(); // discard the partial transaction
    throw e;
}
```

Consumer groups:

```java
// Multiple consumers coordinate through consumer groups
props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
Consumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("my-topic")); // partitions are balanced across group members
```

Kafka Streams with state stores:

```java
StreamsBuilder builder = new StreamsBuilder();
KTable<String, Long> counts = builder.<String, String>stream("input")
    .groupByKey()
    .count(Materialized.as("counts-store")); // materialized as a queryable local state store
```
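The Admin API (see the imports above) manages resources programmatically. Here is a minimal topic-creation sketch; the broker address, topic name, partition count, and replication factor are assumptions for illustration:

```java
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicExample {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        try (Admin admin = Admin.create(props)) {
            NewTopic topic = new NewTopic("my-topic", 3, (short) 1); // 3 partitions, replication factor 1
            admin.createTopics(List.of(topic)).all().get();          // block until creation completes
        }
    }
}
```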
The full documentation also covers real-world scenarios and use cases, step-by-step instructions for common workflows, detailed API specifications and configuration, comprehensive API references, shared functionality across the APIs, common issues and solutions, and quick optimization guides.
Kafka provides comprehensive security features; see common/security.md for complete security documentation.
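As a pointer, a secured client typically sets properties like the following sketch; the mechanism, credentials, and truststore path are placeholders, not recommendations:

```java
import java.util.Properties;

Properties props = new Properties();
props.put("security.protocol", "SASL_SSL"); // encrypt and authenticate the connection
props.put("sasl.mechanism", "PLAIN");
props.put("sasl.jaas.config",
    "org.apache.kafka.common.security.plain.PlainLoginModule required "
    + "username=\"alice\" password=\"alice-secret\";");          // placeholder credentials
props.put("ssl.truststore.location", "/path/to/truststore.jks"); // placeholder path
props.put("ssl.truststore.password", "changeit");                // placeholder password
```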
Migration Notes: