Kafka Configuration
The tdp-kafka chart deploys an Apache Kafka cluster managed by the Strimzi operator, along with a web interface (Kafka UI) and support for Debezium connectors via Kafka Connect.
Overview
| Property | Value |
|---|---|
| Chart | tdp-kafka |
| Kafka | 4.1.0 |
| Chart Version | 3.0.0 |
What is Strimzi?
Strimzi is a Kubernetes operator specialized in managing the lifecycle of Kafka clusters. Instead of managing Kafka processes manually, Strimzi translates native Kubernetes resources (such as KafkaNodePool and Kafka) into brokers that are configured, monitored, and updated declaratively.
The tdp-kafka chart deploys Kafka 4.1.0 in KRaft mode (Kafka without ZooKeeper), available since Kafka 3.3, the default since Kafka 3.7, and the only mode since Kafka 4.0. KRaft eliminates the dependency on ZooKeeper, simplifying the topology and reducing metadata operation latency.
See Apache Kafka — Concepts for a complete overview of the tool, its architecture, and how it works.
Deployed components
| Component | Description |
|---|---|
| Kafka Cluster | Brokers and controllers managed by Strimzi via KafkaNodePool |
| Entity Operator | Manages KafkaTopic and KafkaUser as Kubernetes resources |
| Kafka UI | Web interface for monitoring, browsing topics, and messages |
| Kafka Connect | Infrastructure for Debezium connectors (CDC) — optional |
Prerequisites
- Kubernetes 1.27+
- Helm 3.2.0+
- Available StorageClass with sufficient capacity for broker volumes
Installation (OCI)
helm install <release> \
oci://registry.tecnisys.com.br/tdp/charts/tdp-kafka \
-n <namespace> --create-namespace
Kafka cluster configuration
Node Pool
In Strimzi with KRaft, cluster nodes are declared in Node Pools (KafkaNodePool). Each pool defines the number of replicas, the roles the nodes play, and storage.
nodePool:
name: brokers
replicas: 4
roles: ["controller", "broker"]
storage:
type: jbod
volumes:
- id: 0
type: persistent-claim
size: 100Gi
deleteClaim: true
kraftMetadata: shared
| Field | Description |
|---|---|
| roles | A node can be controller, broker, or both. In small clusters, combine both roles. In large clusters, separate them into distinct pools |
| storage.type: jbod | JBOD (Just a Bunch of Disks) allows multiple volumes per node, maximizing I/O throughput |
| deleteClaim: true | PVCs will be deleted when the NodePool is removed. Set to false in production if you want to preserve data |
| kraftMetadata: shared | Indicates that this volume also stores KRaft metadata |
With 4 replicas combining controller and broker, the KRaft quorum and data replication coexist on the same nodes. For heavy workloads, consider separate pools: one controller pool (3 replicas) and one broker pool (as many as needed).
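As a sketch of that split, the dedicated pools can be expressed as raw Strimzi KafkaNodePool resources. The chart's values layout for declaring multiple pools may differ, so treat the names, sizes, and replica counts below as illustrative and check the chart's values schema:

```yaml
# Dedicated controller pool: small, quorum-sized, holds KRaft metadata
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: controllers
  labels:
    strimzi.io/cluster: tdp-kafka
spec:
  replicas: 3
  roles: ["controller"]
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 20Gi
        kraftMetadata: shared
---
# Dedicated broker pool: scale replicas with the workload
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: brokers
  labels:
    strimzi.io/cluster: tdp-kafka
spec:
  replicas: 4
  roles: ["broker"]
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
```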
Version and metadata
name: tdp-kafka
kafkaVersion: 4.1.0
metadataVersion: 4.1-IV1
clusterLabel: "tdp"
The metadataVersion field sets the KRaft metadata.version feature, which replaces the inter-broker protocol version used in ZooKeeper mode. Keep it aligned with kafkaVersion on new installations. During upgrades, raise it only after all brokers are running the new version.
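The upgrade rule above implies a two-step change to the values file (the placeholder stands for whatever metadata version your cluster currently reports):

```yaml
# Step 1: roll the brokers onto the new binaries, keeping the current
# metadata version untouched
kafkaVersion: 4.1.0
metadataVersion: <current-metadata-version>

# Step 2 (only after every broker runs 4.1.0): raise the metadata version
# metadataVersion: 4.1-IV1
```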
Listeners
Listeners define the cluster entry points for producers and consumers.
listeners:
plain:
enabled: true
port: 9092
tls:
enabled: true
port: 9093
| Listener | Port | When to use |
|---|---|---|
| plain | 9092 | Internal cluster communication, development environments, or trusted private networks |
| tls | 9093 | Communication over untrusted networks or when clients require in-transit encryption |
The bootstrap service name follows the Strimzi convention: <name>-kafka-bootstrap. Example with the default name tdp-kafka:
- Plain: tdp-kafka-kafka-bootstrap:9092
- TLS: tdp-kafka-kafka-bootstrap:9093
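The naming convention can be captured in a small helper. This is a hypothetical illustration, not part of the chart; it assumes the chart's default listener ports (9092/9093):

```python
def bootstrap_address(cluster_name: str, tls: bool = False) -> str:
    """Build the Strimzi bootstrap address <cluster-name>-kafka-bootstrap:<port>."""
    port = 9093 if tls else 9092  # chart's default plain/TLS listener ports
    return f"{cluster_name}-kafka-bootstrap:{port}"

print(bootstrap_address("tdp-kafka"))            # tdp-kafka-kafka-bootstrap:9092
print(bootstrap_address("tdp-kafka", tls=True))  # tdp-kafka-kafka-bootstrap:9093
```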
Replication and durability
config:
offsets.topic.replication.factor: 3
transaction.state.log.replication.factor: 3
transaction.state.log.min.isr: 2
default.replication.factor: 3
min.insync.replicas: 2
These parameters determine the balance between durability and availability:
| Parameter | Meaning |
|---|---|
| default.replication.factor: 3 | Each topic will have 3 copies distributed across distinct brokers |
| min.insync.replicas: 2 | A producer with acks=all only receives acknowledgment when at least 2 replicas are in sync. Tolerates the loss of 1 broker without data loss |
| offsets.topic.replication.factor: 3 | Controls the durability of consumer offsets (internal topic __consumer_offsets) |
| transaction.state.log.* | Durability of the transaction log (transactional production / exactly-once) |
In environments with fewer than 3 brokers, reduce *.replication.factor and min.insync.replicas to the actual number of nodes. Keeping factors higher than the number of brokers prevents topic creation.
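The sizing arithmetic behind these rules can be sketched with a hypothetical helper (not part of the chart): with acks=all, writes keep succeeding as long as at least min.insync.replicas replicas remain in sync.

```python
def tolerated_broker_failures(replication_factor: int, min_insync_replicas: int) -> int:
    """Brokers that can fail while acks=all producers still receive acknowledgments."""
    if min_insync_replicas > replication_factor:
        # Such a topic could never satisfy acks=all
        raise ValueError("min.insync.replicas cannot exceed replication.factor")
    return replication_factor - min_insync_replicas

print(tolerated_broker_failures(3, 2))  # chart defaults: tolerates 1 broker loss
```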
Entity Operator
entityOperator:
enabled: true
The Entity Operator is a Strimzi component that watches KafkaTopic and KafkaUser resources in the namespace and reconciles them with the cluster. With it enabled, topics and users can be managed declaratively as Kubernetes manifests, without direct access to the Kafka administrative API.
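A declarative topic then looks like the sketch below, using the standard Strimzi KafkaTopic CRD. The topic name, partition count, and retention are illustrative; the strimzi.io/cluster label must match the cluster name (tdp-kafka by default):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: orders                      # illustrative topic name
  labels:
    strimzi.io/cluster: tdp-kafka   # must match the Kafka cluster name
spec:
  partitions: 6
  replicas: 3                       # aligned with default.replication.factor
  config:
    retention.ms: 604800000         # 7 days
```

Applying this manifest with kubectl causes the Entity Operator to create the topic in the cluster; deleting it removes the topic.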
Kafka UI
Kafka UI provides a graphical interface for inspecting topics, viewing messages, monitoring consumer groups, and tracking basic cluster metrics.
kafka-ui:
enabled: true
yamlApplicationConfig:
kafka:
clusters:
- name: TDP
bootstrapServers: <kafka-cluster-name>-kafka-bootstrap:9092
properties:
security.protocol: PLAINTEXT
auth:
type: LOGIN_FORM
spring:
security:
user:
name: admin
password: <ui-password>
service:
type: ClusterIP
port: 80
nodePort: 30081
Replace <kafka-cluster-name> with the name value defined in the cluster (e.g., tdp-kafka) and <ui-password> with a secure password.
The chart's default admin password must not be kept in environments accessible outside the internal network; change it before exposing the interface.
Access
Port-forward (Kafka UI)
kubectl port-forward -n <namespace> svc/kafka-ui 8080:80
Access at http://localhost:8080.
NodePort (Kafka UI)
kubectl get nodes -o wide
Access at http://<node-ip>:30081 (or the port configured in service.nodePort).
Bootstrap for producers and consumers
# Plain
<kafka-cluster-name>-kafka-bootstrap:9092
# TLS
<kafka-cluster-name>-kafka-bootstrap:9093
Kafka Connect / Debezium
The chart supports Debezium connectors for real-time change data capture (CDC) from relational databases. Configuration is done under kafkaConnects.*.
See Integrations — Kafka for complete examples of each connector type (PostgreSQL, MySQL, SQL Server, Oracle) and credentials management.
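For orientation, a Debezium connector typically ends up as a Strimzi KafkaConnector resource like the sketch below. Hostnames, credentials, and the Connect cluster name are placeholders, and the chart's kafkaConnects.* values may generate this for you; see Integrations — Kafka for the supported form:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: postgres-cdc                  # illustrative connector name
  labels:
    strimzi.io/cluster: tdp-connect   # name of the KafkaConnect resource
spec:
  class: io.debezium.connector.postgresql.PostgresConnector
  tasksMax: 1
  config:
    database.hostname: postgres.example.svc   # placeholder
    database.port: 5432
    database.user: debezium
    database.password: <from-secret>          # manage via a Secret, never inline
    database.dbname: appdb
    topic.prefix: appdb                        # prefix for CDC topic names
```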
UI authentication
See Security — Kafka to configure LOGIN_FORM or LDAP authentication for the web interface.
Troubleshooting
# Cluster pod status
kubectl -n <namespace> get pods -l cluster=tdp
# Kafka cluster state (CRD resource)
kubectl -n <namespace> get kafkas
# Kafka UI logs
kubectl -n <namespace> logs -l app.kubernetes.io/name=kafka-ui
# Debezium connector state
kubectl -n <namespace> get kafkaconnectors
# Cluster details
kubectl -n <namespace> describe kafka <kafka-cluster-name>
Uninstallation
helm uninstall <release> -n <namespace>
Main parameters
| Parameter | Description | Default |
|---|---|---|
| name | Kafka cluster name | tdp-kafka |
| kafkaVersion | Kafka version | 4.1.0 |
| metadataVersion | KRaft metadata version | 4.1-IV1 |
| clusterLabel | Cluster label | tdp |
| listeners.plain.enabled | Plain listener enabled | true |
| listeners.plain.port | Plain listener port | 9092 |
| listeners.tls.enabled | TLS listener enabled | true |
| listeners.tls.port | TLS listener port | 9093 |
| config.default.replication.factor | Default topic replication factor | 3 |
| config.min.insync.replicas | Minimum in-sync replicas | 2 |
| entityOperator.enabled | Entity Operator enabled | true |
| nodePool.replicas | Number of brokers/controllers | 4 |
| nodePool.storage.volumes[0].size | Volume size per node | 100Gi |
| kafka-ui.enabled | Kafka UI enabled | true |
| kafka-ui.service.type | UI service type | ClusterIP |