Quick Starts

Getting up and running with an Apache Kafka cluster on Kubernetes can be very simple when using the Strimzi project!

Minikube provides a local Kubernetes cluster, designed to make it easy to learn and develop for Kubernetes. The cluster is started inside a virtual machine, inside a container, or on bare metal, depending on the Minikube driver you choose.

Installing the dependencies

This quickstart assumes that you have the latest version of the minikube binary, which you can get from the minikube website.

Minikube requires a container or virtual machine manager. The Minikube documentation includes a list of suggested options in the getting started guide.

You’ll also need the kubectl binary, which you can get by following the kubectl installation instructions from the Kubernetes website.

Once you have all the binaries installed, make sure everything works:

# Validate minikube
minikube version

# Validate kubectl
kubectl version

Starting the Kubernetes cluster

Start a local development cluster with Minikube, which runs in a container or a virtual machine depending on the driver you chose.

minikube start --memory=4096 # 2GB default memory isn't always enough
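Once the command finishes, it is worth confirming that the cluster is up and that kubectl is pointing at it (two standard checks, nothing Strimzi-specific):

# Check the state of the local cluster
minikube status

# Confirm that kubectl can reach the node
kubectl get nodes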

Deploy Strimzi using installation files

Before deploying the Strimzi cluster operator, create a namespace called kafka:

kubectl create namespace kafka

Apply the Strimzi install files, including ClusterRoles, ClusterRoleBindings and some Custom Resource Definitions (CRDs). The CRDs define the schemas used for the custom resources (CRs, such as Kafka, KafkaTopic and so on) you will be using to manage Kafka clusters, topics and users.

kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka

The YAML files for ClusterRoles and ClusterRoleBindings downloaded from strimzi.io contain a default namespace of myproject. The query parameter namespace=kafka updates these files to use kafka instead. By specifying -n kafka when running kubectl create, the definitions and configurations without a namespace reference are also installed in the kafka namespace. If there is a mismatch between namespaces, then the Strimzi cluster operator will not have the necessary permissions to perform its operations.
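If you want to confirm the substitution before applying anything, you can inspect the downloaded files first; an optional check (the number of matching lines depends on the Strimzi release):

# Preview the install files and confirm they reference the kafka namespace
curl -s 'https://strimzi.io/install/latest?namespace=kafka' | grep 'namespace: kafka' | head -n 5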

Follow the deployment of the Strimzi cluster operator:

kubectl get pod -n kafka --watch

You can also follow the operator’s log:

kubectl logs deployment/strimzi-cluster-operator -n kafka -f

Once the operator is running, it will watch for new custom resources and create the Kafka cluster, topics, or users that correspond to those custom resources.
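To confirm that the CRDs were registered before creating any custom resources, you can list the resource types in the Strimzi Kafka API group; an optional check using standard kubectl:

# List the custom resource types installed by Strimzi, such as Kafka and KafkaTopic
kubectl api-resources --api-group=kafka.strimzi.io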

Create an Apache Kafka cluster

Create a new Kafka custom resource to get a small persistent Apache Kafka cluster with one node each for Apache ZooKeeper and Apache Kafka:

# Apply the `Kafka` Cluster CR file
kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka 

Wait while Kubernetes starts the required pods, services, and so on:

kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka 

The above command might time out if you’re downloading images over a slow connection. If that happens you can always run it again.
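If the wait does time out, a couple of standard kubectl checks show what is still starting up (my-cluster is the name used by the example file):

# See which pods are still being created or pulling images
kubectl get pods -n kafka

# Check the readiness of the Kafka custom resource itself
kubectl get kafka my-cluster -n kafka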

Send and receive messages

With the cluster running, run a simple producer to send messages to a Kafka topic (the topic is automatically created):

kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.40.0-kafka-3.7.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic

Once everything is set up correctly, you’ll see a prompt where you can type in your messages:

If you don't see a command prompt, try pressing enter.

>Hello strimzi!

And to receive them in a different terminal, run:

kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.40.0-kafka-3.7.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning

If everything works as expected, you’ll be able to see the message you produced in the previous step:

If you don't see a command prompt, try pressing enter.

>Hello strimzi!

Enjoy your Apache Kafka cluster, running on Minikube!

Deleting your Apache Kafka cluster

When you are finished with your Apache Kafka cluster, you can delete it by running:

kubectl -n kafka delete $(kubectl get strimzi -o name -n kafka)

This will remove all Strimzi custom resources, including the Apache Kafka cluster and any KafkaTopic custom resources, but leaves the Strimzi cluster operator running so that it can respond to new Kafka custom resources.
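Because the example uses persistent storage, the PersistentVolumeClaims may be left behind after the custom resources are deleted. If you want to reclaim that disk space as well, here is an optional cleanup sketch, assuming the claims carry the standard strimzi.io/cluster label:

# Optional: remove any PersistentVolumeClaims left over from the persistent storage
kubectl -n kafka delete pvc -l strimzi.io/cluster=my-cluster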

Deleting the Strimzi cluster operator

When you want to fully remove the Strimzi cluster operator and associated definitions, you can run:

kubectl -n kafka delete -f 'https://strimzi.io/install/latest?namespace=kafka'

Where next?

  • For an overview of the Strimzi components, check out the overview guide.
  • For alternative examples of the custom resource that defines the Kafka cluster, have a look at these examples.

Kubernetes Kind is a Kubernetes cluster implemented as a single Docker image that runs as a container. It was primarily designed for testing Kubernetes itself, but may be used for local development or CI.

When using a local install of Minikube or Minishift, the Kubernetes cluster is started inside a virtual machine, running a Linux kernel and a Docker daemon, consuming extra CPU and RAM.

Kind, on the other hand, requires no additional VM - it simply runs as a Linux container with a set of processes using the same Linux kernel used by your Docker daemon. For this reason it is faster to start and consumes less CPU and RAM than the alternatives, which is especially noticeable when running a Docker daemon natively on a Linux host.

Note: Kubernetes Kind does not currently support node ports or load balancers. You will not be able to easily access your Kafka cluster from outside of the Kubernetes environment. If you need access from outside, we recommend using Minikube instead.

Installing the dependencies

This quickstart assumes that you have the latest version of the kind binary, which you can get from the Kind GitHub repository.

Kind requires a running Docker daemon. There are different Docker options depending on your host platform. You can follow the instructions on the Docker website.

You’ll also need the kubectl binary, which you can get by following the kubectl installation instructions.

Once you have all the binaries installed, and a Docker daemon running, make sure everything works:

# Validate docker installation
docker ps
docker version

# Validate kind
kind version

# Validate kubectl
kubectl version

Configuring the Docker daemon

If your Docker daemon runs in a VM, you’ll most likely need to configure how much memory the VM should have, how many CPUs, how much disk space, and how much swap. Make sure to assign at least 2 CPUs, and preferably 4 GB or more of RAM. Consult the Docker documentation for your platform on how to configure these settings.
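To check what the Docker VM currently has available, you can query the daemon directly; a small sketch assuming a Docker version whose info template exposes the NCPU and MemTotal fields:

# Show the CPUs and memory currently available to the Docker daemon
docker info --format 'CPUs: {{.NCPU}}  Memory: {{.MemTotal}} bytes'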

Starting the Kubernetes cluster

Start a local development Kubernetes cluster with Kind, which runs as a single Docker container.

kind create cluster
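Once the cluster is created, confirm that it exists and that kubectl can reach it (kind-kind is the default context name, assuming you did not pass a custom cluster name):

# List the Kind clusters on this machine
kind get clusters

# Confirm that kubectl talks to the new cluster
kubectl cluster-info --context kind-kind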

Deploy Strimzi using installation files

Before deploying the Strimzi cluster operator, create a namespace called kafka:

kubectl create namespace kafka

Apply the Strimzi install files, including ClusterRoles, ClusterRoleBindings and some Custom Resource Definitions (CRDs). The CRDs define the schemas used for the custom resources (CRs, such as Kafka, KafkaTopic and so on) you will be using to manage Kafka clusters, topics and users.

kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka

The YAML files for ClusterRoles and ClusterRoleBindings downloaded from strimzi.io contain a default namespace of myproject. The query parameter namespace=kafka updates these files to use kafka instead. By specifying -n kafka when running kubectl create, the definitions and configurations without a namespace reference are also installed in the kafka namespace. If there is a mismatch between namespaces, then the Strimzi cluster operator will not have the necessary permissions to perform its operations.
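If you want to confirm the substitution before applying anything, you can inspect the downloaded files first; an optional check (the number of matching lines depends on the Strimzi release):

# Preview the install files and confirm they reference the kafka namespace
curl -s 'https://strimzi.io/install/latest?namespace=kafka' | grep 'namespace: kafka' | head -n 5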

Follow the deployment of the Strimzi cluster operator:

kubectl get pod -n kafka --watch

You can also follow the operator’s log:

kubectl logs deployment/strimzi-cluster-operator -n kafka -f

Once the operator is running, it will watch for new custom resources and create the Kafka cluster, topics, or users that correspond to those custom resources.
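To confirm that the CRDs were registered before creating any custom resources, you can list the resource types in the Strimzi Kafka API group; an optional check using standard kubectl:

# List the custom resource types installed by Strimzi, such as Kafka and KafkaTopic
kubectl api-resources --api-group=kafka.strimzi.io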

Create an Apache Kafka cluster

Create a new Kafka custom resource to get a small persistent Apache Kafka cluster with one node each for Apache ZooKeeper and Apache Kafka:

# Apply the `Kafka` Cluster CR file
kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka 

Wait while Kubernetes starts the required pods, services, and so on:

kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka 

The above command might time out if you’re downloading images over a slow connection. If that happens you can always run it again.
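If the wait does time out, a couple of standard kubectl checks show what is still starting up (my-cluster is the name used by the example file):

# See which pods are still being created or pulling images
kubectl get pods -n kafka

# Check the readiness of the Kafka custom resource itself
kubectl get kafka my-cluster -n kafka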

Send and receive messages

With the cluster running, run a simple producer to send messages to a Kafka topic (the topic is automatically created):

kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.40.0-kafka-3.7.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic

Once everything is set up correctly, you’ll see a prompt where you can type in your messages:

If you don't see a command prompt, try pressing enter.

>Hello strimzi!

And to receive them in a different terminal, run:

kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.40.0-kafka-3.7.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning

If everything works as expected, you’ll be able to see the message you produced in the previous step:

If you don't see a command prompt, try pressing enter.

>Hello strimzi!

Enjoy your Apache Kafka cluster, running on Kind!

Deleting your Apache Kafka cluster

When you are finished with your Apache Kafka cluster, you can delete it by running:

kubectl -n kafka delete $(kubectl get strimzi -o name -n kafka)

This will remove all Strimzi custom resources, including the Apache Kafka cluster and any KafkaTopic custom resources, but leaves the Strimzi cluster operator running so that it can respond to new Kafka custom resources.
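Because the example uses persistent storage, the PersistentVolumeClaims may be left behind after the custom resources are deleted. If you want to reclaim that disk space as well, here is an optional cleanup sketch, assuming the claims carry the standard strimzi.io/cluster label:

# Optional: remove any PersistentVolumeClaims left over from the persistent storage
kubectl -n kafka delete pvc -l strimzi.io/cluster=my-cluster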

Deleting the Strimzi cluster operator

When you want to fully remove the Strimzi cluster operator and associated definitions, you can run:

kubectl -n kafka delete -f 'https://strimzi.io/install/latest?namespace=kafka'

Where next?

  • For an overview of the Strimzi components, check out the overview guide.
  • For alternative examples of the custom resource that defines the Kafka cluster, have a look at these examples.

Docker Desktop includes a standalone Kubernetes server and client, designed for local testing of Kubernetes. You can start a Kubernetes cluster as a single-node cluster within a Docker container on your local system.

Installing the dependencies

This quickstart assumes that you have installed the latest version of Docker Desktop, which you can download from the Docker website.

If you are running on Linux, you’ll need to install the kubectl binary separately. You can get the binary by following the kubectl installation instructions from the Kubernetes website.

After you have installed the binary, make sure it works:

# Validate kubectl if on Linux
kubectl version

Starting the Kubernetes cluster

Follow these steps to start a local development cluster of Kubernetes with Docker Desktop, which runs in a container on your local machine.

  1. From the Docker Dashboard, select the Settings icon, or the Preferences icon if you use macOS.
  2. Select Kubernetes from the left sidebar.
  3. Next to Enable Kubernetes, select the checkbox.
  4. Select Apply & Restart to save the settings, and then click Install to confirm. This instantiates the images required to run the Kubernetes server as containers, and installs the /usr/local/bin/kubectl command on your machine.
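After Kubernetes has started, make sure kubectl is pointing at the Docker Desktop cluster rather than another context (docker-desktop is the context name Docker Desktop creates):

# Switch kubectl to the Docker Desktop cluster and confirm the node is ready
kubectl config use-context docker-desktop
kubectl get nodes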

Deploy Strimzi using installation files

Before deploying the Strimzi cluster operator, create a namespace called kafka:

kubectl create namespace kafka

Apply the Strimzi install files, including ClusterRoles, ClusterRoleBindings and some Custom Resource Definitions (CRDs). The CRDs define the schemas used for the custom resources (CRs, such as Kafka, KafkaTopic and so on) you will be using to manage Kafka clusters, topics and users.

kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka

The YAML files for ClusterRoles and ClusterRoleBindings downloaded from strimzi.io contain a default namespace of myproject. The query parameter namespace=kafka updates these files to use kafka instead. By specifying -n kafka when running kubectl create, the definitions and configurations without a namespace reference are also installed in the kafka namespace. If there is a mismatch between namespaces, then the Strimzi cluster operator will not have the necessary permissions to perform its operations.
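If you want to confirm the substitution before applying anything, you can inspect the downloaded files first; an optional check (the number of matching lines depends on the Strimzi release):

# Preview the install files and confirm they reference the kafka namespace
curl -s 'https://strimzi.io/install/latest?namespace=kafka' | grep 'namespace: kafka' | head -n 5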

Follow the deployment of the Strimzi cluster operator:

kubectl get pod -n kafka --watch

You can also follow the operator’s log:

kubectl logs deployment/strimzi-cluster-operator -n kafka -f

Once the operator is running, it will watch for new custom resources and create the Kafka cluster, topics, or users that correspond to those custom resources.
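To confirm that the CRDs were registered before creating any custom resources, you can list the resource types in the Strimzi Kafka API group; an optional check using standard kubectl:

# List the custom resource types installed by Strimzi, such as Kafka and KafkaTopic
kubectl api-resources --api-group=kafka.strimzi.io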

Create an Apache Kafka cluster

Create a new Kafka custom resource to get a small persistent Apache Kafka cluster with one node each for Apache ZooKeeper and Apache Kafka:

# Apply the `Kafka` Cluster CR file
kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka 

Wait while Kubernetes starts the required pods, services, and so on:

kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka 

The above command might time out if you’re downloading images over a slow connection. If that happens you can always run it again.
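If the wait does time out, a couple of standard kubectl checks show what is still starting up (my-cluster is the name used by the example file):

# See which pods are still being created or pulling images
kubectl get pods -n kafka

# Check the readiness of the Kafka custom resource itself
kubectl get kafka my-cluster -n kafka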

Send and receive messages

With the cluster running, run a simple producer to send messages to a Kafka topic (the topic is automatically created):

kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.40.0-kafka-3.7.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic

Once everything is set up correctly, you’ll see a prompt where you can type in your messages:

If you don't see a command prompt, try pressing enter.

>Hello strimzi!

And to receive them in a different terminal, run:

kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.40.0-kafka-3.7.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning

If everything works as expected, you’ll be able to see the message you produced in the previous step:

If you don't see a command prompt, try pressing enter.

>Hello strimzi!

Enjoy your Apache Kafka cluster, running on Docker Desktop Kubernetes!

Deleting your Apache Kafka cluster

When you are finished with your Apache Kafka cluster, you can delete it by running:

kubectl -n kafka delete $(kubectl get strimzi -o name -n kafka)

This will remove all Strimzi custom resources, including the Apache Kafka cluster and any KafkaTopic custom resources, but leaves the Strimzi cluster operator running so that it can respond to new Kafka custom resources.
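Because the example uses persistent storage, the PersistentVolumeClaims may be left behind after the custom resources are deleted. If you want to reclaim that disk space as well, here is an optional cleanup sketch, assuming the claims carry the standard strimzi.io/cluster label:

# Optional: remove any PersistentVolumeClaims left over from the persistent storage
kubectl -n kafka delete pvc -l strimzi.io/cluster=my-cluster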

Deleting the Strimzi cluster operator

When you want to fully remove the Strimzi cluster operator and associated definitions, you can run:

kubectl -n kafka delete -f 'https://strimzi.io/install/latest?namespace=kafka'

Where next?

  • For an overview of the Strimzi components, check out the overview guide.
  • For alternative examples of the custom resource that defines the Kafka cluster, have a look at these examples.