This assumes that you have the latest version of the `minikube` binary, which you can get here.
minikube start --memory=4096 # 2GB default memory isn't always enough
NOTE: Make sure to start `minikube` with your configured VM driver. If you need help, look at the documentation for more details.
Once Minikube is started, let’s create our `kafka` namespace:
kubectl create namespace kafka
Next we apply the Strimzi install files, including `ClusterRoles`, `ClusterRoleBindings` and some Custom Resource Definitions (CRDs). The CRDs define the schemas used for declarative management of the Kafka cluster, Kafka topics and users.
kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
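To give an idea of what this declarative management looks like, here is a minimal sketch of a `KafkaTopic` custom resource. The topic name and settings below are just examples (the quickstart itself relies on automatic topic creation, so you don’t need to apply this):
# Illustrative KafkaTopic custom resource (example values only)
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 1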
After that we feed Strimzi with a simple Custom Resource, which will then give you a small persistent Apache Kafka Cluster with one node each for Apache Zookeeper and Apache Kafka:
# Apply the `Kafka` Cluster CR file
kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka
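The file applied above defines a `Kafka` custom resource roughly along these lines (abridged sketch; refer to the actual file for the complete definition, including the TLS listener and broker configuration):
# Abridged sketch of the kafka-persistent-single example
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}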
We now need to wait while Kubernetes starts the required pods, services and so on:
kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka
The above command might timeout if you’re downloading images over a slow connection. If that happens you can always run it again.
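While you wait, you can watch the pods come up in another terminal; a ready cluster typically shows the ZooKeeper and Kafka pods plus the entity operator:
# Watch the pods in the kafka namespace start up
kubectl get pods -n kafka --watch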
Once the cluster is running, you can run a simple producer to send messages to a Kafka topic (the topic will be automatically created):
kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.29.0-kafka-3.2.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic
And to receive them in a different terminal you can run:
kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.29.0-kafka-3.2.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning
Enjoy your Apache Kafka cluster, running on Minikube!
Kubernetes Kind is a Kubernetes cluster implemented as a single Docker image that runs as a container. It was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
When using a local install of Minikube or Minishift, the Kubernetes cluster is started inside a virtual machine, running a Linux kernel and a Docker daemon, consuming extra CPU and RAM.
Kind, on the other hand, requires no additional VM; it simply runs as a Linux container with a set of processes using the same Linux kernel used by your Docker daemon. For this reason it is faster to start and consumes less CPU and RAM than the alternatives, which is especially noticeable when running a Docker daemon natively on a Linux host.
Note: Kubernetes Kind does not currently support node ports or load balancers, so you will not be able to easily access your Kafka cluster from outside of the Kubernetes environment. If you need access from outside, we recommend using Minikube instead.
This quickstart assumes that you have the latest version of the `kind` binary, which you can get here.
Kind requires a running Docker Daemon. There are different Docker options depending on your host platform. You can follow the instructions here.
You’ll also need the `kubectl` binary, which you can get by following the instructions here.
Once you have all the binaries installed, and a Docker daemon running, make sure everything works:
# Validate docker installation
docker ps
docker version
# Validate kind
kind version
# Validate kubectl
kubectl version
If your Docker Daemon runs as a VM you’ll most likely need to configure how much memory the VM should have, how many CPUs, how much disk space, and swap size. Make sure to assign at least 2 CPUs, and preferably 4 GB or more of RAM. Consult the Docker documentation for your platform on how to configure these settings.
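A quick way to check what the Docker daemon currently has available is `docker info`, which reports the number of CPUs and the total memory:
# Show CPUs and memory available to the Docker daemon
docker info | grep -E 'CPUs|Total Memory'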
This will start a local Kubernetes Kind development cluster, which runs as a single Docker container.
kind create cluster
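Once the command finishes, you can confirm that `kubectl` is pointing at the new cluster (`kind-kind` is the default context name when no cluster name is given):
# Verify the Kind cluster is up and reachable
kubectl cluster-info --context kind-kind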
Before deploying the Strimzi Kafka operator, let’s first create our `kafka` namespace:
kubectl create namespace kafka
Next we apply the Strimzi install files, including `ClusterRoles`, `ClusterRoleBindings` and some Custom Resource Definitions (CRDs). The CRDs define the schemas used for declarative management of the Kafka cluster, Kafka topics and users.
kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
This will be familiar if you’ve installed Strimzi on things like minikube before.
Note how all the `namespace` references in the downloaded .yaml file are set to `kafka` (by default they are set to `myproject`); this substitution is done by the `?namespace=kafka` query parameter in the install URL above. We also pass `-n kafka` when running `kubectl create`, ensuring that all the definitions and configurations are installed in the `kafka` namespace rather than the `default` namespace.
If there is a mismatch between namespaces, then the Strimzi Cluster Operator will not have the necessary permissions to perform its operations.
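For illustration, one of the RoleBindings in the downloaded install files binds the operator’s ServiceAccount in a specific namespace, roughly like this (abridged sketch; exact names may vary between Strimzi versions). If this namespace did not match the one the operator is deployed into, the binding would not grant it anything:
# Abridged sketch of a RoleBinding from the Strimzi install files
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: strimzi-cluster-operator
  namespace: kafka
subjects:
  - kind: ServiceAccount
    name: strimzi-cluster-operator
    namespace: kafka
roleRef:
  kind: ClusterRole
  name: strimzi-cluster-operator-namespaced
  apiGroup: rbac.authorization.k8s.io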
Follow the deployment of the Strimzi Kafka operator:
kubectl get pod -n kafka --watch
You can also follow the operator’s log:
kubectl logs deployment/strimzi-cluster-operator -n kafka -f
Then we create a new `Kafka` custom resource, which will give us a small persistent Apache Kafka Cluster with one node each for Apache Zookeeper and Apache Kafka:
# Apply the `Kafka` Cluster CR file
kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka
We now need to wait while Kubernetes starts the required pods, services and so on:
kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka
The above command might timeout if you’re downloading images over a slow connection. If that happens you can always run it again.
Once the cluster is running, you can run a simple producer to send messages to a Kafka topic (the topic will be automatically created):
kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.29.0-kafka-3.2.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic
And to receive them in a different terminal you can run:
kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.29.0-kafka-3.2.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning
Enjoy your Apache Kafka cluster, running on Kind!