Minikube provides a local Kubernetes cluster, designed to make it easy to learn and develop for Kubernetes. The cluster is started inside a virtual machine, a container, or on bare metal, depending on the minikube driver you choose.
This quickstart assumes that you have the latest version of the minikube binary, which you can get from the minikube website.
Minikube requires a container or virtual machine manager. The Minikube documentation includes a list of suggested options in the getting started guide.
You’ll also need the kubectl binary, which you can get by following the kubectl installation instructions from the Kubernetes website.
Once you have all the binaries installed, make sure everything works:
# Validate minikube
minikube version
# Validate kubectl
kubectl version
Start a local Minikube development cluster, which runs in a container or virtual machine.
minikube start --memory=4096 # 2GB default memory isn't always enough
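If your machine has capacity to spare, you can give the cluster more CPUs as well; as an alternative to the command above, a sketch with illustrative values:
# Optional: start with more memory and CPUs (values are only examples)
minikube start --memory=4096 --cpus=4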
Before deploying the Strimzi cluster operator, create a namespace called kafka:
kubectl create namespace kafka
Apply the Strimzi install files, including ClusterRoles, ClusterRoleBindings and some Custom Resource Definitions (CRDs). The CRDs define the schemas used for the custom resources (CRs, such as Kafka, KafkaTopic and so on) you will be using to manage Kafka clusters, topics and users.
kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
The YAML files for ClusterRoles and ClusterRoleBindings downloaded from strimzi.io contain a default namespace of myproject. The query parameter namespace=kafka updates these files to use kafka instead. By specifying -n kafka when running kubectl create, the definitions and configurations without a namespace reference are also installed in the kafka namespace.
If there is a mismatch between namespaces, then the Strimzi cluster operator will not have the necessary permissions to perform its operations.
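If you want to confirm that the install files were applied, you can list the Strimzi CRDs and the Deployments in the kafka namespace; this is an optional sanity check, not a required step:
# Optional: confirm the CRDs and the operator Deployment were created
kubectl get crds | grep strimzi.io
kubectl get deployments -n kafka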
Follow the deployment of the Strimzi cluster operator:
kubectl get pod -n kafka --watch
You can also follow the operator’s log:
kubectl logs deployment/strimzi-cluster-operator -n kafka -f
Once the operator is running, it will watch for new custom resources and create the Kafka cluster, topics, or users that correspond to those custom resources.
Create a new Kafka custom resource to get a single-node Apache Kafka cluster:
# Apply the `Kafka` Cluster CR file
kubectl apply -f https://strimzi.io/examples/latest/kafka/kraft/kafka-single-node.yaml -n kafka
Wait while Kubernetes starts the required pods, services, and so on:
kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka
The above command might time out if you’re downloading images over a slow connection. If that happens, you can simply run it again.
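While you wait, it can help to look at the pods and recent events in the namespace; these are ordinary kubectl status checks and are safe to run at any time:
# Optional: see what is still starting up
kubectl get pods -n kafka
kubectl get events -n kafka --sort-by=.lastTimestamp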
With the cluster running, run a simple producer to send messages to a Kafka topic (the topic is automatically created):
kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.44.0-kafka-3.8.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic
Once everything is set up correctly, you’ll see a prompt where you can type in your messages:
If you don't see a command prompt, try pressing enter.
>Hello Strimzi!
And to receive them in a different terminal, run:
kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.44.0-kafka-3.8.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning
If everything works as expected, you’ll be able to see the message you produced in the previous step:
If you don't see a command prompt, try pressing enter.
>Hello Strimzi!
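The topic used above was created automatically when the producer first wrote to it. If you prefer to manage topics declaratively, you can instead describe them as KafkaTopic custom resources, which the Topic Operator reconciles for you (if it is included in your Kafka configuration). The sketch below assumes the my-cluster name from the example file; the topic name and settings are only illustrative:
# A minimal KafkaTopic custom resource (illustrative name and settings)
kubectl apply -n kafka -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-declared-topic
  labels:
    strimzi.io/cluster: my-cluster   # must match the Kafka cluster name
spec:
  partitions: 1
  replicas: 1
EOF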
Enjoy your Apache Kafka cluster, running on Minikube!
When you are finished with your Apache Kafka cluster, you can delete it by running:
kubectl -n kafka delete $(kubectl get strimzi -o name -n kafka)
This will remove all Strimzi custom resources, including the Apache Kafka cluster and any KafkaTopic custom resources, but leave the Strimzi cluster operator running so that it can respond to new Kafka custom resources.
Next, delete the Persistent Volume Claim (PVC) that was used by the cluster:
kubectl delete pvc -l strimzi.io/name=my-cluster-kafka -n kafka
If you do not delete the PVC, the next Kafka cluster you start will fail, because it will try to reuse the volume that belonged to the previous Apache Kafka cluster.
When you want to fully remove the Strimzi cluster operator and associated definitions, you can run:
kubectl -n kafka delete -f 'https://strimzi.io/install/latest?namespace=kafka'
Once the kafka namespace is no longer used, you can also delete it:
kubectl delete namespace kafka
Kubernetes Kind is a Kubernetes cluster implemented as a single Docker image that runs as a container. It was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
When using a local install of Minikube or Minishift, the Kubernetes cluster is started inside a virtual machine, running a Linux kernel and a Docker daemon, consuming extra CPU and RAM.
Kind, on the other hand, requires no additional VM; it simply runs as a Linux container with a set of processes using the same Linux kernel used by your Docker daemon. For this reason it is faster to start and consumes less CPU and RAM than the alternatives, which is especially noticeable when running a Docker daemon natively on a Linux host.
Note: Kubernetes Kind does not currently support node ports or load balancers, so you will not be able to easily access your Kafka cluster from outside the Kubernetes environment. If you need external access, we recommend using Minikube instead.
This quickstart assumes that you have the latest version of the kind binary, which you can get from the Kind GitHub repository.
Kind requires a running Docker Daemon. There are different Docker options depending on your host platform. You can follow the instructions on the Docker website.
You’ll also need the kubectl binary, which you can get by following the kubectl installation instructions.
Once you have all the binaries installed, and a Docker daemon running, make sure everything works:
# Validate docker installation
docker ps
docker version
# Validate kind
kind version
# Validate kubectl
kubectl version
If your Docker daemon runs as a VM, you’ll most likely need to configure how much memory the VM should have, how many CPUs, how much disk space, and swap size. Make sure to assign at least 2 CPUs and preferably 4 GB or more of RAM. Consult the Docker documentation for your platform on how to configure these settings.
Start a local Kubernetes Kind development cluster, which runs as a single Docker container.
kind create cluster
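If you plan to run more than one cluster, kind can create a named cluster instead, and you can confirm the node is ready with kubectl; the name below is only an example:
# Optional: create a named cluster and check its node
kind create cluster --name strimzi-quickstart
kubectl get nodes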
Before deploying the Strimzi cluster operator, create a namespace called kafka:
kubectl create namespace kafka
Apply the Strimzi install files, including ClusterRoles, ClusterRoleBindings and some Custom Resource Definitions (CRDs). The CRDs define the schemas used for the custom resources (CRs, such as Kafka, KafkaTopic and so on) you will be using to manage Kafka clusters, topics and users.
kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
The YAML files for ClusterRoles and ClusterRoleBindings downloaded from strimzi.io contain a default namespace of myproject. The query parameter namespace=kafka updates these files to use kafka instead. By specifying -n kafka when running kubectl create, the definitions and configurations without a namespace reference are also installed in the kafka namespace.
If there is a mismatch between namespaces, then the Strimzi cluster operator will not have the necessary permissions to perform its operations.
Follow the deployment of the Strimzi cluster operator:
kubectl get pod -n kafka --watch
You can also follow the operator’s log:
kubectl logs deployment/strimzi-cluster-operator -n kafka -f
Once the operator is running, it will watch for new custom resources and create the Kafka cluster, topics, or users that correspond to those custom resources.
Create a new Kafka custom resource to get a single-node Apache Kafka cluster:
# Apply the `Kafka` Cluster CR file
kubectl apply -f https://strimzi.io/examples/latest/kafka/kraft/kafka-single-node.yaml -n kafka
Wait while Kubernetes starts the required pods, services, and so on:
kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka
The above command might time out if you’re downloading images over a slow connection. If that happens, you can simply run it again.
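You can also ask Kubernetes directly for the current state of the Kafka custom resource and its pods while the cluster is being reconciled; these are standard read-only queries:
# Optional: inspect the Kafka resource and its pods while waiting
kubectl get kafka my-cluster -n kafka
kubectl get pods -n kafka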
With the cluster running, run a simple producer to send messages to a Kafka topic (the topic is automatically created):
kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.44.0-kafka-3.8.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic
Once everything is set up correctly, you’ll see a prompt where you can type in your messages:
If you don't see a command prompt, try pressing enter.
>Hello Strimzi!
And to receive them in a different terminal, run:
kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.44.0-kafka-3.8.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning
If everything works as expected, you’ll be able to see the message you produced in the previous step:
If you don't see a command prompt, try pressing enter.
>Hello Strimzi!
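The Strimzi Kafka image used above contains the standard Kafka command-line tools, so you can, for example, list the topics that now exist in the cluster with kafka-topics.sh (same image and bootstrap address as the producer and consumer):
kubectl -n kafka run kafka-topics -ti --image=quay.io/strimzi/kafka:0.44.0-kafka-3.8.0 --rm=true --restart=Never -- bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --list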
Enjoy your Apache Kafka cluster, running on Kind!
When you are finished with your Apache Kafka cluster, you can delete it by running:
kubectl -n kafka delete $(kubectl get strimzi -o name -n kafka)
This will remove all Strimzi custom resources, including the Apache Kafka cluster and any KafkaTopic custom resources, but leave the Strimzi cluster operator running so that it can respond to new Kafka custom resources.
Next, delete the Persistent Volume Claim (PVC) that was used by the cluster:
kubectl delete pvc -l strimzi.io/name=my-cluster-kafka -n kafka
If you do not delete the PVC, the next Kafka cluster you start will fail, because it will try to reuse the volume that belonged to the previous Apache Kafka cluster.
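If you want to see which claims are left behind before removing them, you can list the PVCs in the namespace first; this is only an optional check:
# Optional: list the PVCs created for the Kafka cluster
kubectl get pvc -n kafka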
When you want to fully remove the Strimzi cluster operator and associated definitions, you can run:
kubectl -n kafka delete -f 'https://strimzi.io/install/latest?namespace=kafka'
Once the kafka namespace is no longer used, you can also delete it:
kubectl delete namespace kafka
Docker Desktop includes a standalone Kubernetes server and client, designed for local testing of Kubernetes. You can start a Kubernetes cluster as a single-node cluster within a Docker container on your local system.
This quickstart assumes that you have installed the latest version of Docker Desktop, which you can download from the Docker website.
If you are running on Linux, you’ll need to install the kubectl binary separately. You can get the binary by following the kubectl installation instructions from the Kubernetes website.
After you have installed the binary, make sure it works:
# Validate kubectl if on Linux
kubectl version
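If you have more than one cluster configured, make sure kubectl is pointing at the Docker Desktop cluster before continuing. Docker Desktop normally registers a context named docker-desktop, so a quick check looks like this (the context name is assumed here):
# Optional: list contexts and switch to the Docker Desktop one
kubectl config get-contexts
kubectl config use-context docker-desktop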
Follow these steps to start a local development cluster of Kubernetes with Docker Desktop, which runs in a container on your local machine. Docker Desktop also sets up the /usr/local/bin/kubectl command on your machine.
Before deploying the Strimzi cluster operator, create a namespace called kafka:
kubectl create namespace kafka
Apply the Strimzi install files, including ClusterRoles, ClusterRoleBindings and some Custom Resource Definitions (CRDs). The CRDs define the schemas used for the custom resources (CRs, such as Kafka, KafkaTopic and so on) you will be using to manage Kafka clusters, topics and users.
kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
The YAML files for ClusterRoles and ClusterRoleBindings downloaded from strimzi.io contain a default namespace of myproject. The query parameter namespace=kafka updates these files to use kafka instead. By specifying -n kafka when running kubectl create, the definitions and configurations without a namespace reference are also installed in the kafka namespace.
If there is a mismatch between namespaces, then the Strimzi cluster operator will not have the necessary permissions to perform its operations.
Follow the deployment of the Strimzi cluster operator:
kubectl get pod -n kafka --watch
You can also follow the operator’s log:
kubectl logs deployment/strimzi-cluster-operator -n kafka -f
Once the operator is running, it will watch for new custom resources and create the Kafka cluster, topics, or users that correspond to those custom resources.
Create a new Kafka custom resource to get a single-node Apache Kafka cluster:
# Apply the `Kafka` Cluster CR file
kubectl apply -f https://strimzi.io/examples/latest/kafka/kraft/kafka-single-node.yaml -n kafka
Wait while Kubernetes starts the required pods, services, and so on:
kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka
The above command might time out if you’re downloading images over a slow connection. If that happens, you can simply run it again.
With the cluster running, run a simple producer to send messages to a Kafka topic (the topic is automatically created):
kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.44.0-kafka-3.8.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic
Once everything is set up correctly, you’ll see a prompt where you can type in your messages:
If you don't see a command prompt, try pressing enter.
>Hello Strimzi!
And to receive them in a different terminal, run:
kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.44.0-kafka-3.8.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning
If everything works as expected, you’ll be able to see the message you produced in the previous step:
If you don't see a command prompt, try pressing enter.
>Hello Strimzi!
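The bootstrap address my-cluster-kafka-bootstrap:9092 used by the producer and consumer is a Kubernetes Service created by the cluster operator for the my-cluster Kafka cluster; you can inspect it like any other Service:
# Optional: look at the bootstrap Service behind the producer/consumer commands
kubectl get service my-cluster-kafka-bootstrap -n kafka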
Enjoy your Apache Kafka cluster, running on Docker Desktop Kubernetes!
When you are finished with your Apache Kafka cluster, you can delete it by running:
kubectl -n kafka delete $(kubectl get strimzi -o name -n kafka)
This will remove all Strimzi custom resources, including the Apache Kafka cluster and any KafkaTopic custom resources, but leave the Strimzi cluster operator running so that it can respond to new Kafka custom resources.
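The inner kubectl get strimzi query used above simply lists every Strimzi custom resource in the namespace; you can run it on its own at any time to see what the operator is currently managing:
# Lists Strimzi custom resources only; deletes nothing
kubectl get strimzi -o name -n kafka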
Next, delete the Persistent Volume Claim (PVC) that was used by the cluster:
kubectl delete pvc -l strimzi.io/name=my-cluster-kafka -n kafka
If you do not delete the PVC, the next Kafka cluster you start will fail, because it will try to reuse the volume that belonged to the previous Apache Kafka cluster.
When you want to fully remove the Strimzi cluster operator and associated definitions, you can run:
kubectl -n kafka delete -f 'https://strimzi.io/install/latest?namespace=kafka'
Once the kafka namespace is no longer used, you can also delete it:
kubectl delete namespace kafka